I found this, github.com/learning-at-home/hivemind — is it useful or not? Any way we can implement it? Also, if you can, list the hardware you have and whether you would contribute to training or not.
>Downloads: v1.4 link magnet:?xt=urn:btih:3a4a612d75ed088ea542acac52f9f45987488d1c&dn=sd-v1-4.ckpt&tr=udp%3a%2f%2ftracker.openbittorrent.com%3a6969%2fannounce&tr=udp%3a%2f%2ftracker.opentrackr.org%3a1337
I wonder how soon until we have the GFPGAN for text
Owen Bell
I love how there's a McDonald's right next to Hogwarts. Gotta have one every 2 miles.
Lincoln Morris
I could be mistaken since I don't have the capability to generate that size, but I've heard there can be repeated elements. I believe the training set was 512x512?
Late-night streaming Stable Diffusion! Come prompt-smith, listen to chiptunes, remix others' creations with img2img, get inspired, and hang out with other cro-magnons as we attempt to understand the technology given to us. K sampling, GFPGAN, great for using from the toilet or bed!
Hey, it got more letters than usual. Also I keep thinking of that chart of all the models being asked to draw a kangaroo holding a sign that says "Welcome" and how the larger ones absolutely nail it. We'll get actual words and phrases eventually.
Andrew Edwards
oh for sure, it's just funny. It'll probably happen sooner rather than later too at the rate all this has been going.
cohesive text would be super useful, especially if you could combine it with a text AI
Jacob Morris
So, why do all these text-to-image generators shit themselves trying to make letters (and eyes)?
Dylan Clark
Should I have "Fix faces with GFPGAN" on even when the face is small or far away?
Chase Foster
Nah, it really only works on large fucked-up eyes in my experience. It just blurs small shit if anything.
I have no hard data to back this up, but I'd say it follows a basic drawing principle: general to specific. You always start with the broad composition and work inward. Watching the AI gradually reduce noise into meaningful shapes reveals a similar process. For a human artist this is done for practical reasons: details take time, and there's no point in carefully drawing out a hand only to discover you put it in the wrong place and have to start all over. So you work inward, general to specific. The AI seems to be doing something similar, but every inward step is another step in its processing, and it's limited in how many steps it can take. It's like an artist pulled away before having the chance to finish — and in fact, if you did that with a human, you might see similarly scribbly "rough art" results in the details.
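That coarse-to-fine intuition can be sketched as a toy loop — plain Python with made-up names and numbers, not the actual diffusion math. Start from pure noise, blend toward a target signal while shrinking the noise each step, and stop early to see the "pulled away before finishing" effect:

```python
import random

def toy_denoise(target, steps, total_steps=50, seed=0):
    """Blend a noisy signal toward `target`, shrinking noise each step.

    Stopping at steps < total_steps mimics a sampler "pulled away" early:
    the broad values are roughly right, but residual noise remains
    in the fine details.
    """
    rng = random.Random(seed)
    x = [rng.uniform(-1, 1) for _ in target]        # start from pure noise
    for t in range(steps):
        noise_scale = 1.0 - (t + 1) / total_steps   # noise shrinks over time
        x = [
            xi + 0.2 * (ti - xi) + noise_scale * rng.gauss(0, 0.05)
            for xi, ti in zip(x, target)
        ]
    return x

target = [0.5, -0.3, 0.8, 0.0]                      # stand-in for "the image"
early = toy_denoise(target, steps=10)               # interrupted artist
late = toy_denoise(target, steps=50)                # allowed to finish

def err(x):
    return sum(abs(xi - ti) for xi, ti in zip(x, target))
# More steps leaves the result much closer to the target.
```

The interrupted run lands in the right neighborhood (broad composition) but keeps leftover noise (scribbly details), which matches the "limited number of steps" point above.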
Elijah Martin
Alright I'm calling it a night. I'll try to find her tomorrow. Or maybe on to a new adventure.
Looking at how Stable Diffusion works, I think it has to be rebuilt from the ground up. As it stands, the captions on the images it's trained on are too simple; they'd have to be much more descriptive for the AI to have some sort of basic coherence.