# No other launch stuff, no other programs aside from Firefox running. This is the part that I think is telling me what to do, but I'm not smart enough to understand it:
> If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
But I don't know how to set that setting or where the documentation is. I'm gonna go garden for a bit to spare everyone from my stupidity, but I'll read through the threads I missed tonight to see if anyone had any ideas.
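For anyone else hitting this: `PYTORCH_CUDA_ALLOC_CONF` is an environment variable that PyTorch reads at startup, so you set it before launching the WebUI. A minimal sketch (the value `128` here is just an example, tune it for your card; on Windows you'd use `set` in webui-user.bat instead of `export`):

```shell
# Set max_split_size_mb before launching, so PyTorch's CUDA allocator
# splits large cached blocks and fragments less. Example value only.
export PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:128

# Confirm it's set, then launch the WebUI as usual from the same shell.
echo "$PYTORCH_CUDA_ALLOC_CONF"
```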
James Smith
Does the WebUI work with AMD on Linux?
Owen Murphy
How do I upload a mask to the WebUI? I erased the part of the image I want masked, saved it as a separate image, then uploaded it, but nothing's changing in the part I want masked.
Camden Turner
Here's the SD upscaler method; it fixed her left arm but lost quality.
Thanks, user. Where is your directory located? The only other suggestion I have is to run --lowvram --always-batch-cond-uncond --opt-split-attention, but I really don't think you should need that on 8GB. I don't mean to sound like a broken record because I know you're using my launch parameters, but did you also follow my setup regarding directory location and such? I'm not sure how much of a difference it makes, but I remember reading at one point that it's recommended you run it from your C:\ drive directly if possible.
>rentry.org/zfawb
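For reference, those flags go into `COMMANDLINE_ARGS` in your webui-user file rather than being typed at launch every time. A sketch of the Linux version (webui-user.sh; on Windows it's `set COMMANDLINE_ARGS=...` in webui-user.bat):

```shell
# Low-VRAM launch flags from the post above, placed in COMMANDLINE_ARGS
# so webui.sh picks them up automatically on every launch.
export COMMANDLINE_ARGS="--lowvram --always-batch-cond-uncond --opt-split-attention"

# Sanity check the variable before launching.
echo "$COMMANDLINE_ARGS"
```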
I ran this at Steps: 100, Sampler: DPM2 a, CFG scale: 7, Seed: 830691178, Face restoration: CodeFormer, Size: 1024x1536, Denoising strength: 0.5. Original provided here for comparison. I'm going to work up a base guide in rentry for this method. If you wouldn't mind cataloging your experience, that'd be good information to add to it.
mech user here. I haven't been keeping up; is it still the same as the first one you asked about, or have you made any new discoveries or developments?
it's fine if he keeps posting the same image, we are learning how this shit works. people need to learn context.
Aaron Williams
The sci-fi landscapes I've been posting are all based on a textual inversion model I created. I've had good luck using it for styles; I haven't really tried it on objects or people yet. We hit the image limit, which has been a continuing problem.
Anons are right, though: if it's the same image, it should be grids of an X/Y plot. It's the same method, just testing different steps/samplers. I'm about to put a run through now myself; will post the grid once I do.
John Mitchell
maybe you should post a fucking prompt, fine, retard?
literally shut the fuck up if you're not going to contribute even a single image or idea other than crying.
ok, now I see it. I want to play with this and lewd images. There were a couple really great goatse ones.
Asher Jones
so clever. did you come up with the idea to use electrical cables on your own?
Samuel Perez
and set masked content to original
Joseph Hill
Yes, see:
> rentry.org/aikgx
What I haven't personally tested is the difference that a large subject-matter pool makes. They suggest only 3-5 images, but I have to imagine more would provide a better representation of the style. Try following the directory setup I have and see if that does the trick.
Jaxson Hall
why does the wd model have a tendency to make bloodshot eyes? might make a negative prompt and see if i can mitigate it