For ancestral samplers it gives alternatives when you use >20 steps; for any other sampler it just improves the image
Camden Jenkins
I've just followed rentry.org/voldy and have it working. To use the Anime model do I need to rename the wd-v1-2-full-ema.ckpt to model.ckpt instead of the base model or do I have both in the folder? Sorry for small brain!
Noah Gray
it's an image-based thread and they make a new one whenever the old hits image limit
Ryder Brown
how long until jannies start cracking down and deleting threads that have images like this in the OP?
With the colab textual inversion... how the hell do you download your results?
Connor Harris
It says I don't have python installed, but I do. Do I need to install it in the SD folder?
Landon Lee
someone mask Richard Stallman into this
Tyler Cox
That's one way of doing it, and the easiest. But don't delete the original model; just rename it to something else.
A cleaner way, which lets you launch with either model more easily, is to use the --ckpt command line argument followed by the full path to the .ckpt file.
So you could have one batch file that has...
--ckpt c:\sd\model.ckpt
for the original model. Then another batch file for the waifu model like this
--ckpt c:\sd\waifu12.ckpt
Then you won't have to keep renaming files back and forth.
If you're not comfortable with batch files then that's ok and you can use your original method.
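To make that concrete, here's a sketch of what one of those batch files could look like, assuming the standard webui-user.bat layout from the voldy guide (the file name and paths here are just examples, not gospel):

```bat
@echo off
rem launch-base.bat -- hypothetical name; launches webui with the original model
rem COMMANDLINE_ARGS is the variable webui-user.bat uses to pass flags through
set COMMANDLINE_ARGS=--ckpt c:\sd\model.ckpt
call webui.bat
```

Then make a second copy of the file with the last --ckpt path pointed at c:\sd\waifu12.ckpt (or wherever yours lives) and you can launch either model by double-clicking the right .bat.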
Jeremiah Hernandez
you can't do that
Carter Nelson
...
Josiah Rivera
They are in the folder on the left. Read the guide.
Mason White
I've literally been trying to do this since I started yesterday. "amazonian" and "tanned" work pretty nice
Yeah, I had to uninstall and reinstall with add to PATH checked because I already had it installed beforehand.
Noah White
tell me how to get around it and I'll deliver
>RuntimeError: CUDA out of memory. Tried to allocate 31.75 GiB (GPU 0; 3.92 GiB total capacity; 2.34 GiB already allocated; 116.69 MiB free; 2.39 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
NVIDIA Quadro M1000M user
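Not him, but a ~4 GB card usually needs the low-VRAM flags. A sketch of webui-user.bat tweaks, assuming the AUTOMATIC1111 webui (flag names from its docs; the 128 value is a guess, tune it):

```bat
@echo off
rem webui-user.bat for a ~4 GB card (e.g. Quadro M1000M)
rem --lowvram trades a lot of speed for memory; --medvram is the milder option
set COMMANDLINE_ARGS=--lowvram --opt-split-attention
rem what the error message itself suggests: cap the allocator's split size
set PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:128
call webui.bat
```

Also note: "tried to allocate 31.75 GiB" means the requested image is way too big for any consumer card — drop the width/height (and batch size) first, flags second.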
keep waiting, it should give a pretty unambiguous "launched" message when it's actually done. you can check the python scripts and bat files if you want to confirm that. just give it like 30 minutes max probably
Joshua Lee
I didn't want to type this post from PC because when prediction mode happened my pc froze for a while
Andrew Jones
they already do. try making one with hope solo's pussy pic on /sp/
Josiah Cox
What settings should I be using to get the best results out of upscaling?
>RuntimeError: CUDA out of memory. Tried to allocate 14.84 GiB (GPU 0; 3.92 GiB total capacity; 1.76 GiB already allocated; 659.69 MiB free; 1.84 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
Time taken: 1.42s
still ain't working, brb, gonna go to mcdonalds
Andrew Green
That’s Mikufag, known troll and shitposter from /ic/
Sebastian Cruz
>when prediction mode happened my pc froze for a while
Is this a new feature I've skipped somewhere in Win10/11?
Also, why the fuck would I, or you, need this if we've got a physical keyboard in front of us? (unless you're disabled in some way)
David Williams
nope, can't be done, the image is just unsuitable for it, best to just forget about it and move on