Is it time for open source to die?
infoworld.com
works on my machine
Yes, open source should die, and it should be replaced by free software.
with all the woke meme changes OpenAI is doing to their product, we actually need more open source AI stuff
OpenAI is the worst name possible
this desu
spbp
i do not remember asking a computer for its opinion.
REEEEE
WHY CAN'T I MAKE MONEY ON THIS CODE SHOULD ONLY EXIST FOR COMMERCIALISM
REEEEEEEEEEEEEEEEEEEEEEEEEE
Meta's latest NLP model can be run in an Azure VM on a single A100. You could probably do the same with OpenAI's, but they make you believe no consumer can; in reality their training data doesn't take more than 1TB, nothing inaccessible or expensive for (non-poor) consumers. It's not that they keep their GPT models closed because no one could run them; it's because they want you to pay subscriptions.
VQGAN is better than DALL-E's autoencoder (the dVAE), and you can run it on Google Colab with a T4.
Source: I did my thesis on top of VQGAN
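For anyone who hasn't looked at how VQGAN-style models work: the core trick is vector quantization, where each continuous encoder output is snapped to the nearest entry of a learned discrete codebook. Here is a minimal toy sketch of that lookup in NumPy; the codebook and latents are random placeholders, not the real VQGAN API or weights.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical tiny codebook: 16 code vectors of dimension 4.
codebook = rng.normal(size=(16, 4))
# 8 fake "encoder outputs" to quantize.
latents = rng.normal(size=(8, 4))

# Nearest-neighbour lookup: replace each latent with the closest
# codebook entry under Euclidean distance.
dists = np.linalg.norm(latents[:, None, :] - codebook[None, :, :], axis=-1)
indices = dists.argmin(axis=1)   # discrete codes, shape (8,)
quantized = codebook[indices]    # quantized latents, shape (8, 4)

print(indices.shape, quantized.shape)
```

Those discrete indices are what make the latent space small and cheap enough that inference fits on a free-tier Colab GPU.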
When will people make the GNU of ML then?
Yeah, just let companies cheat you out of the sea of possibilities of ML just because they can. What could go wrong?
So, you're going to just lie down and let the megalo-incumbent overlords have an easier path to rule?
Cuck.
You need data for ML to be a viable solution to a problem. While there are many free data sets out there, none of them can be used for anything practical.
kek
what a dumb, or just dishonest, statement
This unironically sounds like a good use case for some crowdsourced blockchain thingy for once.
The issue isn't the ML algo itself, the issue is getting access to datasets.
Open source is working by design but not by intention. The bigger problem, IMO, isn't that the data sets themselves need to be open source, but that there has to be some way to access the same kinds of data to replicate the reported performance for peer review. I consider it wholly inadequate that, for peer review, no one can get the same data set to reproduce the ML results described in these papers, yet the AI community collectively accepts this for some asinine reason. With rare exceptions that don't need external data, like the self-play paper DeepMind wrote for the second iteration of AlphaGo, every paper should be providing its data sets.
>Yandex drops cool chatbot model
>to run it you need at very least 80GB of VRAM
Cool i guess
When asked whether they had any plans to share smaller models (which they do have), they refused, which basically confirms they're just dumping the model on the public as waste.
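The 80GB figure isn't arbitrary: weight memory alone scales linearly with parameter count and precision. A quick back-of-the-envelope calculator (parameter counts here are illustrative examples, not Yandex's published numbers; activations and KV caches would add more on top):

```python
# Rough VRAM needed just to hold model weights, ignoring activations.
def weight_memory_gb(n_params: float, bytes_per_param: int = 2) -> float:
    """Weights-only memory in GiB; 2 bytes/param assumes fp16."""
    return n_params * bytes_per_param / 2**30

# Hypothetical examples:
print(f"{weight_memory_gb(13e9):.0f} GB")    # 13B params in fp16 -> ~24 GB
print(f"{weight_memory_gb(100e9):.0f} GB")   # 100B params in fp16 -> ~186 GB
```

Which is why a smaller distilled checkpoint, not the full model, is what would actually make a release usable to anyone outside a datacenter.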
yes
This but unironically.
>Open source should be replaced by something that's literally the same but with a worse name
No
>REEEE EVERYTHING SHOULD BE FREE, NO ONE SHOULD EVER HAVE TO PAY
Commie cope