I have access to dall-e 2

Trips decide what I make.

Attached: Screenshot_2022-08-11-14-04-47-150_com.opera.gx.jpg (720x1600, 405.7K)

"nigger"

nigger

A four leaf clover suspended in a vial

It's censored; nothing obvious like this will work.

Deez nuts

black slave swimming in kool aid

Two cats smoking in an alleyway

I got dubs… that’s gotta count for something, OP

Probably censor "slave" (I'm not OP but I don't want to waste my credits on that). I'd try something like "black laborer swimming in purple drink"

Trump smoking a crack pipe

shotgun wedding

I don't think OP is going to deliver, but I wanted to see this one so I did it myself.

Sample 1

Attached: DALL·E 2022-08-11 11.12.28 - four leaf clover suspended in a vial.png (1024x1024, 1.37M)

Sample 2

Attached: DALL·E 2022-08-11 11.12.23 - four leaf clover suspended in a vial.png (1024x1024, 1.33M)

This shit terrifies and fascinates me

Sample 3

Attached: DALL·E 2022-08-11 11.12.41 - four leaf clover suspended in a vial.png (1024x1024, 1.3M)

Sample 4

Attached: DALL·E 2022-08-11 11.12.49 - four leaf clover suspended in a vial.png (1024x1024, 1.29M)

AI painting an image

If it makes you feel any better, only one of the clovers was a four-leaf clover.

I only have 2 credits left until the 21st and all the other suggestions here suck, so that's probably all I'm going to post.

I know it's not perfect, but that last image, where it knows to put the plant in water and that the water is constrained by the glass and refracts light differently, is pretty nutty.

How long until people just use this instead of artists for various projects?

I've played with it pretty extensively over the last two months, and while it's certainly "not there yet" (there are many things it absolutely sucks at, and many things that require many passes to convince it to do what you want), it does indeed prove that AI can replace human artists in some applications, and, given the pace of the technology, it suggests a bunch of obvious follow-up work.

The most impactful "small" change that I think we'll see in the next year or two is some form of "iterative refinement", probably via a series of natural-language prompts. This is where artists excel and where the AI can't really perform at all. For example, take sample 1 and say "the clover should have four leaves, not three", and it'll try to change only that aspect of the image. There isn't a good dataset for this, though, at least not one as good as the caption dataset used for CLIP, but in terms of the ML architecture, it wouldn't require anything really novel.
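
Roughly what that loop would look like in code, as a sketch only. This assumes the openai Python client's image endpoints (Image.create / Image.create_edit); the DALL-E API isn't publicly available yet, and the edit endpoint wants an explicit mask, so the mask-free natural-language correction is exactly the missing piece I'm describing.

import openai

openai.api_key = "sk-..."  # placeholder key

# Pass 1: generate candidates from the base prompt.
base_prompt = "four leaf clover suspended in a vial"
first_pass = openai.Image.create(prompt=base_prompt, n=4, size="1024x1024")
candidate_urls = [item["url"] for item in first_pass["data"]]

# Pass 2: pick a sample and apply a correction. Today this needs a
# hand-drawn mask over the region to change; the hypothetical
# "iterative refinement" model would take only the sentence.
correction = "the clover should have four leaves, not three"
second_pass = openai.Image.create_edit(
    image=open("sample1.png", "rb"),
    mask=open("sample1_mask.png", "rb"),  # assumed mask file
    prompt=f"{base_prompt}, {correction}",
    n=4,
    size="1024x1024",
)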

It'll also be a while before an AI can reach the same understanding of the larger context of the art, and actually contribute to the work creatively, but for many cases we already don't give artists this chance. Just like the AI, we toss a prompt over the fence, they generate a set of samples, and we say "finish that one, and change this".

Stuff like "refracting water semi-realistically" and "the scene has a reasonable composition" is IMO very basic, low-level stuff. It has tons of examples of that, and it doesn't vary much between works. The harder part is a deeper understanding of how rarer concepts interact. For instance, I tried to get it to generate something like "an owl and a cat librarian reshelving books together" and it absolutely could not come up with a depiction of them working together to restock books. It can theme them as librarians in a library, with books around, but it doesn't have a deep enough understanding of how cats and owls move to do it right.
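
For what it's worth, the "many passes" workflow I mentioned is just brute force: reword the prompt a few ways, sample each wording, and eyeball the results. A minimal sketch of that, again assuming the openai Python client's Image.create endpoint (the rephrasings are just examples I made up):

import urllib.request

import openai

openai.api_key = "sk-..."  # placeholder key

# Hand-written rephrasings of the concept the model struggles with.
rephrasings = [
    "an owl and a cat librarian reshelving books together",
    "a cat handing a book up to an owl perched on a library ladder",
    "an owl and a cat working side by side to restock a bookshelf",
]

for i, prompt in enumerate(rephrasings):
    result = openai.Image.create(prompt=prompt, n=4, size="1024x1024")
    # Save every sample for manual review; picking the least-bad one
    # is still a human job.
    for j, item in enumerate(result["data"]):
        urllib.request.urlretrieve(item["url"], f"pass{i}_sample{j}.png")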