I am thinking about studying deep learning, but for that I will necessarily need some real processing power.

I have a GTX 970 and with that I cannot even run Stable Diffusion. I was checking the GPU market and it is kinda nuts.

What do you think is my best option?


If you wait for the 4090, it has enough VRAM for training Stable Diffusion models.

But the price will be insane. I don't want to pay $1000+ for a GPU.

Cloud computing (Colab, Kaggle, etc.) unless you are rich. It's hard to justify buying a GPU just for learning.
But if you really want to go for it, the most realistic choices right now are a 3090 at around $1k or a 3080 12GB at around $750; anything lower than that and you start running into VRAM problems.

If you want to get into deep learning, you're going to have to buy an expensive GPU or rent compute power. You can probably do beginner stuff on your current GPU though. Look up the tutorials on TensorFlow's website.
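Something along the lines of TensorFlow's own beginner quickstart runs fine on a 970; this is roughly their MNIST example, treat it as a sketch:

```python
# Minimal MNIST classifier, adapted from TensorFlow's beginner quickstart.
# Small enough to train on a GTX 970, or even CPU-only.
import tensorflow as tf

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0  # scale pixels to [0, 1]

model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(10),  # raw logits, one per digit class
])

model.compile(
    optimizer="adam",
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=["accuracy"],
)
model.fit(x_train, y_train, epochs=5)
model.evaluate(x_test, y_test)
```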

How much would it cost me to rent the computing power?

$1000 is just the beginning
it's a 500 W card, it needs its own PSU, and your light bill is gonna be wild
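The power cost is easy to ballpark yourself; the hours and the $0.15/kWh rate below are made-up assumptions, plug in your own:

```python
# Rough monthly electricity cost of hammering a ~500 W card.
watts = 500
hours_per_day = 8      # assumed daily training time
rate_per_kwh = 0.15    # assumed USD rate; check your own bill

monthly_kwh = watts / 1000 * hours_per_day * 30
print(f"~{monthly_kwh:.0f} kWh/month, ~${monthly_kwh * rate_per_kwh:.2f}/month")
```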

Shop around. I can tell you that the main providers are Google Colab, Google Cloud Compute, AWS, and Lambda Labs, and there are a few smaller, cheaper startups like RunPod and LeaderGPU.

I got a 3060 12GB and it's pretty damn good with SD.
Dunno about "studying deep learning" though, I'm just making images.

> currently doing a PhD based on deep learning

A lot of the state-of-the-art fancy stuff you see has been trained on massive datasets using generally very expensive hardware (e.g. NVIDIA DGX clusters).

From a learning perspective you can easily get away with an 11 or 12 GB VRAM GPU (a 1080 Ti is a good catch at the right price), which should give you ample memory for reasonably sized models. Definitely more than enough to learn and implement some things.

You definitely shouldn't expect to train state-of-the-art stuff from scratch using anything reasonably priced.

whatever you do, get NVIDIA
you need CUDA because ROCm is a fucking nightmare to work with.
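Whatever you end up with, sanity-check the CUDA setup before blaming the framework. PyTorch shown here, but TensorFlow has equivalents:

```python
# Quick check that CUDA is actually visible and working.
import torch

print(torch.cuda.is_available())           # False = CPU-only build or driver problem
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))   # e.g. "NVIDIA GeForce GTX 970"
    x = torch.randn(1024, 1024, device="cuda")
    print((x @ x).sum().item())            # forces a real kernel launch
```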

Honestly, you're good enough with what you have. If you need more VRAM for inference passes on a model, use Colab or something, but for learning the ropes all you need is a system capable of running CUDA, since you won't be able to train anything crazy even if you had a 3090 Ti. If you wanted to try creating stuff on the scale of GPT-3 or DALL-E, it's not gonna happen: you need upwards of 600-800 GB of VRAM for something of that scale due to the way training is designed to work, and it's not just a speed thing. Maximizing batch size is important for anything that needs to compute attention across samples, so smaller batch sizes will damage performance even if you had unlimited time.
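Back-of-envelope on that figure, assuming fp16 and GPT-3's published 175B parameter count; this counts only weights and gradients, optimizer states and activations come on top:

```python
# Where the ~600-800 GB claim comes from: just holding fp16 weights
# and gradients for a 175B-parameter model, before optimizer states
# or activations are counted.
params = 175e9       # GPT-3 parameter count
bytes_each = 2       # fp16

weights_gb = params * bytes_each / 1024**3
print(f"weights ~{weights_gb:.0f} GB, weights + grads ~{2 * weights_gb:.0f} GB")
# -> weights ~326 GB, weights + grads ~652 GB
```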
Take 6.S191 from MIT OpenCourseWare or IDL from CMU; both have all of their lectures on YouTube. CMU's stuff is way more technical, so I would suggest starting with that; MIT's is pretty shit from a foundational standpoint.

this, been finding it out for myself lately. Think I might grab an older Quadro used.

A K80 card goes for around $100 on eBay right now. The CUDA core count is crap, but it has 24 GB of VRAM split between its 2 GPUs. It's old and the driver support is not good, but apparently CUDA 11.2.2 should work.

I have one I'm trying to set up as secondary compute. Haven't gotten it working yet, as the 2 systems I've tried to use it in had issues because I'm trying to do something silly and use it in a blade server. It needs something newer than X5600 era, with UEFI and large BAR support.
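For what it's worth, the K80 shows up as two separate 12 GB devices rather than one 24 GB card, so you can check what the framework actually sees (assuming a PyTorch build old enough to still support Kepler):

```python
# List every CUDA device the framework can see; a K80 appears twice.
import torch

for i in range(torch.cuda.device_count()):
    props = torch.cuda.get_device_properties(i)
    print(i, props.name, f"{props.total_memory / 1024**3:.0f} GB")
```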

Just use Colab for prototyping and Kaggle when you want to run shit for a long time.

You don't need a GPU for deep learning, it's bloat. Real men calculate model weights by hand with a pencil and paper.

I've got a 3070 Ti; it's one of the fastest GPUs for SD, but the VRAM is the limiting factor. I think you're better off buying a Colab subscription until the 40 series comes out. I don't think it's worth spending a thousand bucks on a GPU with a shitty amount of VRAM.

with that attitude you're better off just killing yourself
this shit costs money and killing yourself is basically free

choose -- faggot.

rent processing power
it's cheaper for now

tomshardware.com/news/geforce-rtx-3080-20gb-gpus-emerge-for-around-dollar575

If you really wanted to learn deep learning, you would be using Google Colab until it kicked you off and then buying the most expensive GPU you can afford.

There is zero hardware you need to buy to get started; even talking about such matters is just procrastination.