/sdg/ - Stable Diffusion General

fire edition

Previously:
Related:
/h/ /sdhb/:
/vg/ /aids/:
/mlp/ /ppp/:
Emad's Twitter: twitter.com/EMostaque

Matrix server soon

I found this: github.com/learning-at-home/hivemind. Is it useful or not? Any way we can implement it? Also, if you can, list out the hardware you have and whether you would contribute to training or not.
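For anyone curious, this is roughly how hivemind wires in collaborative training, going by its quickstart; a minimal, untested sketch, and the run_id and batch sizes are placeholders I made up:

import torch
import hivemind

# Toy model standing in for whatever would actually be trained.
model = torch.nn.Linear(16, 1)

# Join (or bootstrap) the peer-to-peer DHT that peers use to find each other.
dht = hivemind.DHT(start=True)   # pass initial_peers=[...] to join an existing swarm
print(dht.get_visible_maddrs())  # share these addresses so other anons can connect

# Wrap a normal optimizer; gradients get averaged across the swarm once the
# collective batch reaches target_batch_size.
opt = hivemind.Optimizer(
    dht=dht,
    run_id="sdg_run",            # placeholder; every peer must use the same id
    batch_size_per_step=32,      # samples this peer contributes per step
    target_batch_size=10_000,    # global batch before a collaborative update
    optimizer=torch.optim.Adam(model.parameters(), lr=1e-3),
    use_local_updates=True,
)

You then call opt.step() in an ordinary training loop. The catch for us is bandwidth and trust, not code: training SD-scale models this way means shipping gradients between random anons' GPUs.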

>Scripts:
github.com/hlky/stable-diffusion-webui

>Recent News:
Official updated weights have been released: twitter.com/EMostaque/status/1561777122082824192

Emad announces plans for custom model training: twitter.com/EMostaque/status/1561780596107612161?t=HOAF1LBb09e1EMgZo9ROKA&s=19

Emad announces future anime oriented weights: twitter.com/EMostaque/status/1562192103823708162?t=vJRb0oHaE9jPEN-O5ibVzg&s=19

Emad announces animation Soon: twitter.com/EMostaque/status/1561778925906395140?t=0ZsWlpIXAIr-8_-1PviARA&s=19

>Stable Diffusion official Discord:
discord.gg/stablediffusion

>Starter guides:
rentry.org/retardsguide - Main txt2img guide
rentry.org/kretard - K-Diffusion guide
rentry.org/img2img - img2img guide
rentry.org/tqizb - AMD guide

>Downloads:
v1.4 link
magnet:?xt=urn:btih:3a4a612d75ed088ea542acac52f9f45987488d1c&dn=sd-v1-4.ckpt&tr=udp%3a%2f%2ftracker.openbittorrent.com%3a6969%2fannounce&tr=udp%3a%2f%2ftracker.opentrackr.org%3a1337

>Colab with new model
colab.research.google.com/drive/1AfAmwLMd_Vx33O9IwY2TmO9wKZ8ABRRa
>SAAS sites:
beta.dreamstudio.ai - Free; new accounts get 200 generations
pornpen.ai - NSFW Allowed, Free
dezgo.com - Contains Ads, Free

Attached: 00360.png (512x512, 507.86K)

Attached: Mc_What_thefuck.png (512x768, 773.9K)

I wonder how long until we have a GFPGAN equivalent for text

I love how there's a mcdonalds right next to Hogwarts. Gotta have one every 2 miles.

i could be mistaken since i don't have the capability to generate that size, but i've heard there can be repeated elements. i believe the training set was 512x512?

So are people finetuning the model already?

Attached: my nuts.png (1280x1280, 2.25M)

alright fr, can I make nudes on Colab without getting v&

Attached: 000023.3667495356.png (512x512, 314.64K)

Cute

Attached: wasp_girl.webm (704x448, 1.44M)

Oh, the image limit was on the thread, not me. Shows what a newfag I am (actually just the opposite, spend 90% of my time here lurking)

Attached: 00022.png (512x512, 414.4K)

MDonD
MDGOnad's
MOnad""s
MDonad's

Late-night streaming Stable Diffusion! Come prompt-smith, listen to chiptunes, remix others' creations with img2img, get inspired, and hang out with other cromagnons as we attempt to understand the technology given us. K sampling, GFPGAN, great for using from the toilet or bed!

Just type !SD "Your Prompt Here" in chat

twitch.tv/stateofartist

(now 100% more comfy, 80% less horny)

Attached: 444-172353.png (512x712, 654K)

ayyyy my catter got used, fuck ya, hansom boi

Hey, it got more letters than usual. Also I keep thinking of that chart of all the models being asked to draw a kangaroo holding a sign that says "Welcome" and how the larger ones absolutely nail it. We'll get actual words and phrases eventually.

oh for sure, it's just funny. It'll probably happen sooner rather than later too, at the rate all this has been going.

cohesive text would be super useful, especially if you could combine it with a text ai

So, why do all these text-to-image generators shit themselves trying to make letters (and eyes)?

Should I have "Fix faces with GFPGAN" on even when the face is small or far away?

nah, it really only works on large fucked up eyes in my experience. it just blurs small shit if anything.

Attached: 00424.png (512x512, 536.44K)

>Mcdonan

Attached: mcdonan.png (512x768, 757.26K)

And yet it's able to do young emma just fine.

Attached: hyper realistic painting of [young] emma watson 2004, hyper detailed face, anime, concept art, 4k, sharp focus on eyes, by WLOP, trending on artstation (23).png (512x640, 446.19K)

I have no hard data to back this up, but I'd say it's the basic drawing principle: general to specific. You always start with the broad composition and work inward. Watching the AI gradually reduce noise into meaningful shapes reveals a similar process. For a human artist this is done for practical reasons: details take time, and there's no point in carefully drawing out a hand only to discover you put it in the wrong place and have to start all over. So you work inward, general to specific. The AI seems to be doing something similar, but every inward step is another step in its processing, and it's limited in how many steps it can take. It's like an artist pulled away before having the chance to finish, and in fact if you did that to a human, you might see similarly scribbly "rough art" results in the details.
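To make that concrete, here's a minimal sketch of that coarse-to-fine loop; the names and the one-line Euler update are made up for illustration, not the actual SD sampler:

import torch

@torch.no_grad()
def sample(model, steps=50, shape=(1, 4, 64, 64)):
    # Start from pure noise -- the "broad composition" stage.
    x = torch.randn(shape)
    for i in range(steps):
        t = 1.0 - i / steps   # noise level, from 1 (all noise) down toward 0
        eps = model(x, t)     # model predicts the noise still present in x
        x = x - eps / steps   # strip away a small slice of it (crude Euler step)
        # Early iterations settle large-scale structure; only the last few
        # refine fine detail. Cut the loop short and you get the scribbly,
        # unfinished look described above.
    return x

Fewer steps (or a sampler taking bigger strides) is exactly the "artist pulled away early" situation.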

Alright I'm calling it a night. I'll try to find her tomorrow. Or maybe on to a new adventure.

Attached: grid-00511-2942813994_A_full_body_watercolor_portrait_of_the_obese_Queen_of_All_Rats,_wearing_a_ladies'_crown_and_gown,_sitting_upon_her_royal_throne,.jpg (1024x1024, 975.14K)

THERE, it's fucking fixed
PLEASE test your PR before you submit it
last time I'm committing straight to master, faith in pull requests GONE

this was my bad though, forgot about it when I added the other samplers

Same here, I think. Been cool hanging out and exchanging rodent-related media. Hope to see more in the future.

Attached: 00048.png (512x512, 433.49K)

Looking at how Stable Diffusion works, I think it has to be rebuilt from the ground up. As it stands, the tags on the images it's trained on are too simple; they have to be much more descriptive for the AI to have some sort of basic coherence.
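Something like the difference below; a hypothetical illustration of the point, both captions invented:

# A typical terse alt-text caption vs. the kind of dense description
# that would actually pin down composition for the model.
simple_caption = "woman, beach, sunset"
descriptive_caption = (
    "a woman in a red sundress standing on a wet sand beach at sunset, "
    "facing the camera, left hand holding a straw hat, waves breaking "
    "behind her, warm backlighting, full body visible"
)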