DALL-E is having traffic issues for some people

DALL-E is having traffic issues for some people.
You might find this useful.

pastebin.com/zcZrztjF

Attached: image_2022-06-08_180152302.png (2104x4120, 1.48M)


The script is very bare-bones. Someone might want to put in command-line argument handling and other things.
But please remember that Brightwing will only be friends with nice people.

Attached: image_2022-06-08_180455902.png (1364x1442, 2.17M)

How big of a download is the pre-trained model?

>DALL-E is having traffic issues for some people.
Please. It's being deliberately crippled by the host.

Mini version about 1GB.
Big version about 10GB, from what I can see.

I only have a slow laptop right now, so I couldn't try out the big version.
I also haven't been able to train it further, only used current weights.

Can someone with a better rig test this maybe?

There were some issues in the script because I had modified it from a Jupyter notebook I'd saved; I also cleaned up a few things.
When showing the images there will be a warning about parallelism being used after a fork. It won't affect the output.

pastebin.com/GuKD0mva

You can pass the terms in on the command line now. Just save it as a .py file and call it with "script.py your search terms".
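Roughly what that looks like in the script (a minimal sketch; the function name and the fallback prompt are placeholders of mine, not necessarily what the pastebin does):

```python
import sys

def prompt_from_argv(argv):
    # Everything after the script name becomes the prompt;
    # fall back to a default when nothing is passed.
    if len(argv) > 1:
        return " ".join(argv[1:])
    return "pokemon on lsd"

if __name__ == "__main__":
    print(prompt_from_argv(sys.argv))
```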

Maybe. Whatever the reason, it's become unresponsive; they may well have throttled it. Either way, people can now run it locally. Anyone with a fast enough machine can also tie it into scripts. It will also take some load off their servers, so they should be grateful (protip: they won't be, but will rather pee their panties that Any Forums is using their models).

Attached: image_2022-06-08_195807134.png (1930x3970, 886.32K)

Sample. Input was: "pokemon on lsd"

I haven't compared it with the online demo, yet.
Someone who can try the big model might get better results. Please post them. Even just the small model takes ~20-25 minutes on my machine (yes, I have a bad machine, I know).

Attached: DALL-E: 2022-06-08 19:46:21.665008.png (256x256, 113K)

Input: "The Eiffel tower is landing on the moon"

DALL-E model revision: mini1-v0
VQGAN model: dalle-mini/vqgan_imagenet_f16_16384 (e93a26e7707683d349bf5d5c41c5b0ef69b677a9)

Attached: dalle 2022-06-08 19:16:19.750366.png (256x256, 95.65K)

Wait, I lied.

> wandb: Downloading large artifact mini-1:v0, 1673.43MB. 7 files... Done. 0:0:0

That's the precise size. Maybe not that bad nowadays. 10GB is a bit much for personal use, imho. But you can wipe it when you're done / bored with it.

"Any Forums likes to play with pokemon, but your mom yells at you for spilling spaghetti sauce on the dog"

Python Version: 3.9.10.final.0 (64 bit)
Cpuinfo Version: 8.0.0
Vendor ID Raw: GenuineIntel
Hardware Raw:
Brand Raw: Intel(R) Core(TM) i5-5287U CPU @ 2.90GHz
Hz Advertised Friendly: 2.9000 GHz

Runtime: 0:23:48

Attached: DALL-E: 2022-06-08 20:15:41.456511.png (256x256, 103.79K)

have a bumb, when I get home will try in my machine

Oh, thank you very much! That would be great.

If you have a fast machine, maybe you can also try the 10GB "mega" model. Just change the DALLE_MODEL line to:

DALLE_MODEL = "dalle-mini/dalle-mini/mega-1-fp16:latest"


I'd like to see what it looks like.
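For reference, the mini/mega switch could be wired up like this (a sketch; the mega string is from above, the mini ID matches the "mini-1:v0" artifact wandb reported, and the constant names are mine):

```python
# wandb artifact identifiers for the two dalle-mini checkpoints.
DALLE_MODEL_MINI = "dalle-mini/dalle-mini/mini-1:v0"           # ~1.6 GB download
DALLE_MODEL_MEGA = "dalle-mini/dalle-mini/mega-1-fp16:latest"  # ~10 GB download

USE_MEGA = True  # flip to False to fall back to the small model

DALLE_MODEL = DALLE_MODEL_MEGA if USE_MEGA else DALLE_MODEL_MINI
print(DALLE_MODEL)
```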

Here is the latest version, with some more cleanup: pastebin.com/USyNjCu0

I got rid of some of the JAX related warnings.

Pic: "the statue of liberty in space"

I'm a bit disappointed, but maybe the mega model is better. I'll keep checking this thread now and then.

Attached: dall-e - the statue of liberty in space.png (256x256, 98.42K)

Anyone know how to download a model once and run off it permanently? The instructions they give always try to re-download it every time.

The instructions I had (don't have the bookmark right now) used wandb (from the site Weights & Biases), just as in the script I posted. For me it doesn't re-download once you've let it finish all the way through once.
It will keep displaying the download message, but you should see that it finishes almost instantly. There is still some delay from other model initialization, which you might mistake for the model being downloaded again.

Attached: image_2022-06-08_213312439.png (1944x390, 397.25K)
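If you want to be sure it never even re-checks, you could look for the local copy yourself before touching wandb. A sketch of that idea (assumption on my part: wandb keeps finished downloads in an ./artifacts/&lt;name&gt; directory; the function name is mine):

```python
import os

def cached_artifact_dir(artifact_name, root="artifacts"):
    """Return the local artifact directory if it already exists, else None.

    If this returns a path, you can skip the wandb download call entirely
    and load the weights straight from disk.
    """
    path = os.path.join(root, artifact_name)
    return path if os.path.isdir(path) else None
```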

Oh OK. I kinda gave up on it because it won't recognize my GPU.
I've set up PyTorch and TensorFlow to recognize my card, but it just refuses to pick it up.

I've noticed that the playground model gives horrible results; I guess that's the 1GB one, while the main app must be using mega.

What's the requirements.txt to install all the modules?

>it wont recognize my GPU
I heard it can still be a pain with anything that is not Nvidia.
stackoverflow.com/questions/48152674/how-to-check-if-pytorch-is-using-the-gpu
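The check from that stackoverflow link boils down to something like this (a sketch that degrades gracefully when torch isn't installed; the function name is mine):

```python
def describe_compute_device():
    # Report what PyTorch would actually run on, or why it can't say.
    try:
        import torch
    except ImportError:
        return "torch not installed"
    if torch.cuda.is_available():
        name = torch.cuda.get_device_name(torch.cuda.current_device())
        return f"cuda ({name})"
    return "cpu (no CUDA device visible to torch)"

print(describe_compute_device())
```

If this prints the cpu line even though you have an Nvidia card, the usual suspects are a CPU-only torch build or a CUDA/driver version mismatch.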

> I guess thats the 1gb
I think it is, yeah. It's about the same resolution and quality. Plus, with how much load they are having right now, they probably only use the small one for the demo.

I can only find this in my shell history

pip install -q git+https://github.com/borisdayma/dalle-mini.git
pip install -q git+https://github.com/patil-suraj/vqgan-jax.git


I am not sure what other things are needed, as I believe that I already had most machine learning packages installed.
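For the anon asking about requirements.txt: I don't have one, but based purely on what came up in this thread, a best-guess file would look something like this (unverified; dalle-mini pulls in its own dependencies such as JAX, so this may be incomplete or redundant):

```
git+https://github.com/borisdayma/dalle-mini.git
git+https://github.com/patil-suraj/vqgan-jax.git
wandb
```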

Whatever JAX is, it only works on Linux right now. I think that's the main thing.

I have an NVIDIA card, still no luck. I did all the checks as well, it's all good from what I can tell.

I have it working under MacOS.
Is it not working with Windows?

Then I might try to get the model into a state where it works without JAX / with Windows.

github.com/google/jax

There's also WSL on Windows, but that exacerbates the GPU issue I'm having.

I assume macOS has jaxlib since it's Unix. When I looked, the devs stated they didn't have time to port it to Windows right now.

Bear with me, installing PyTorch and checking my CUDA.
Oh really? I don't have Linux on this machine; I can live boot if it doesn't work on Windows.

Attached: Screenshot_2249.png (1862x171, 22.26K)

github.com/cloudhan/jax-windows-builder

I don't know how much work that is.
If it's too much of a bother, I can understand if you don't want to take the time, but if you do get it to work, I would appreciate the effort.


> When I looked the devs stated they didn't have time to port it right now.
Ah, that's too bad. I thought I had a simple, easy-to-run script ready for everyone, but I guess Python packages aren't as portable as I sometimes like to believe. If it's too messy, I'll look at ways to use their model with vanilla Torch somehow, which I think works fine on all major OSes.