Can't wait to buy these in bulk from underwater miners, gonna build a home machine learning station

Attached: [email protected] (1200x540, 671.94K)

Other urls found in this thread:

voca.ro/12niAGQbVlOR
github.com/fatchord/WaveRNN
twitter.com/SFWRedditImages

I use the onboard GPU like 90% of the time and keep the dedicated one on the shelf since I don't use it most days
Each new gen needs a dedicated GPU less and less

It's going to be class being able to get a GPU for normal prices again. I'm still using a 1070ti ffs.

Attached: 1637623636856.jpg (650x650, 98.59K)

i was training a tts voice with waveRNN and tacotron 2 on my old 1080ti, took more than a week of nonstop training. if i can get my hands on a few 3090s i can increase the batch size and train in parallel, the dream would be to get it down to a day or so for one voice
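Whether a few 3090s actually get a week-long run down to a day is just napkin math. A quick sketch, where the ~2x per-GPU speedup (3090 vs 1080 Ti) and 90% multi-GPU scaling efficiency are assumptions, not benchmarks:

```python
# Rough scaling estimate for moving a training run from one old card to
# several faster ones. Both per_gpu_speedup and scaling_eff are guesses;
# real multi-GPU scaling is sublinear due to communication overhead.
def estimated_days(baseline_days, num_gpus, per_gpu_speedup=2.0, scaling_eff=0.9):
    """Estimated wall-clock days on num_gpus faster cards."""
    return baseline_days / (num_gpus * per_gpu_speedup * scaling_eff)

print(estimated_days(7, 1))  # one 3090: a few days
print(estimated_days(7, 4))  # four 3090s: just under a day
```

Under those assumptions four 3090s lands right around the "day or so per voice" target, but only if the batch size actually scales.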

user what the fuck are you talking about? You physically remove the GPU from your PC on the reg, because reasons? Why? What the fuck.

What does a home machine learning station even mean?

Is this just shit software engineers say to sound cool? What you're doing is basically completely pointless and is in fact something that needs to be done by a team of 1000s of people much smarter than you, but sure, you're very smart and do it at home by yourself of course

>blocks your path

Attached: 1655112599676.png (603x547, 320.11K)

Turn your mining rig into a home DALL-E instance instead of mining shitcoins.

Wen?
I need an upgrade, been waiting for a long time.

it's like building a NAS except instead of hoarding porn you're hoarding AI data.

Cope, brainlet.

I have a 3090, what cool shit can I do with it?
And how to get into it?

Attached: 1E54E8B8-510B-454A-A968-8D0710848F85.jpg (537x525, 47.11K)

read my reply, i like to train models at home

this is a trump voice i made from 20 minutes of audio samples: tacotron 2 finetuned for 35k steps from a pre-trained ljspeech model at 185k, then a wavernn vocoder trained for 660k steps. it's not perfect because there aren't enough audio samples, it takes ages to cut and transcribe the data and it's boring, and i stopped training the vocoder early because it was so slow so it's still a bit noisy. but decent result.

voca.ro/12niAGQbVlOR
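The recipe above boils down to "load a pretrained checkpoint, keep stepping on your own data." A toy sketch of that resume logic in plain Python; the checkpoint dict and update() are stand-ins, not the real repo's code:

```python
# Toy "fine-tune from a pretrained checkpoint": restore the saved step
# counter and weights, then keep training for extra_steps more steps.
def finetune(ckpt, extra_steps, update):
    step, weights = ckpt["step"], ckpt["weights"]
    for _ in range(extra_steps):
        weights = update(weights)  # one optimizer step on your own dataset
        step += 1
    return {"step": step, "weights": weights}

pretrained = {"step": 185_000, "weights": 0.0}      # e.g. the LJSpeech model
tuned = finetune(pretrained, 35_000, lambda w: w + 1e-5)
print(tuned["step"])  # 220000
```

Matching the post: 185k pretrained steps plus 35k fine-tuning steps leaves the model at 220k.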

How to get into that? Sounds interesting and fun. Like how do I get started?

im a software developer so i just looked at the github repo and went from there. not sure how much knowledge you have already, so if not much maybe start with some youtube tutorials

the repo i used is github.com/fatchord/WaveRNN

That’ll probably just affect next gen. Current gen cards are already selling for under MSRP used.

>home machine learning station
what dataset are you using?
i have a 100,000 pepe dataset but my gpu is too shit to train a large model. i want to train something like stylegan

What do you use to tie together multiple machines and treat them as if they’re one big GPU machine? Apparently Beowulf clusters don’t work well for this sort of thing.
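For what it's worth, frameworks don't pretend the cluster is one big GPU; data-parallel training shards each batch across workers, each computes gradients locally, and an all-reduce averages them so every worker applies the same update. torch.distributed and Horovod do this over NCCL or MPI; this toy version just averages Python numbers:

```python
# Conceptual data-parallel training: two "machines" each hold a shard of
# the batch, compute local gradients, then average them (the all-reduce)
# before every worker takes the identical optimizer step.
def local_gradient(shard, w):
    # d/dw of mean squared error for the model y = w*x with targets 2*x
    return sum(2 * (w * x - 2 * x) * x for x in shard) / len(shard)

def all_reduce_mean(grads):
    return sum(grads) / len(grads)

data = [1.0, 2.0, 3.0, 4.0]
shards = [data[:2], data[2:]]          # one shard per worker
w = 0.0
for _ in range(100):
    grads = [local_gradient(s, w) for s in shards]
    w -= 0.1 * all_reduce_mean(grads)  # same step on every worker
print(round(w, 3))  # converges to 2.0
```

Because only gradients cross the wire once per step, ordinary networking works; that's also why a generic Beowulf cluster buys you little, since the hard part is the gradient synchronization, not shared memory.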

i make my own datasets, usually cut from speeches or movies, like for trump it was easy because you have a lot of high quality speeches and transcripts already available
most of the work is cleaning, cutting, labelling, normalizing audio
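Two of those cleanup steps, peak-normalizing and trimming silence, can be sketched in a few lines. Real pipelines do this on audio files with tools like sox, librosa, or pydub; here samples are just floats in [-1, 1]:

```python
# Toy audio cleanup: scale so the loudest sample hits a target peak, and
# drop leading/trailing samples below a silence threshold.
def peak_normalize(samples, target=0.95):
    peak = max(abs(s) for s in samples)
    if peak == 0:
        return list(samples)
    return [s * target / peak for s in samples]

def trim_silence(samples, threshold=0.01):
    loud = [i for i, s in enumerate(samples) if abs(s) > threshold]
    if not loud:
        return []
    return samples[loud[0]:loud[-1] + 1]

clip = [0.0, 0.0, 0.2, -0.5, 0.3, 0.0]
cleaned = peak_normalize(trim_silence(clip))
print([round(x, 2) for x in cleaned])  # [0.38, -0.95, 0.57]
```

Normalizing after trimming keeps every clip in the dataset at a consistent level, which matters when the clips come from different speeches or movies.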

>not sure how much knowledge you have already so if not much maybe start with some youtube tutorials
any you can recommend?