The 100 TB era

Imagine this.
The minimum HDD capacity is 100 TB.

File hosts etc offer 100 TB free accounts.

imagine what you can back up this way.

Are you not hyped for it?

Attached: file.jpg (800x1000, 215.65K)

>imagine what you can back up this way.
Porn.

Yeah, or other files, like the channels you archive because they get purged all the time.

>the same files repeated billions of times

Mass waste.

>>the same files repeated billions of times
HUH?

How, in your own files?
Or across different people's collections?

>Mass waste.
It's called redundancy, and it's a good thing, you retard.

>pic
This is what happens when you don't use ECC RAM and ZFS.

>The minimum HDD capacity is 100 TB.
>File hosts etc offer 100 TB free accounts.
So the average video game would be like 2TB then?

Are you kidding me? That's barely enough for a decent text editor.

>So the average video game would be like 2TB then?
Probably.
Who cares? Just redownload that crap from Steam.

>The 1.2 Terabyte Internet Data Usage Plan provides you with 1.2 terabytes (TB) of Internet data usage each month as part of your monthly Xfinity Internet service. If you choose to use more than 1.2 TB in a month without being on an unlimited plan, we will automatically add blocks of 50 GB to your account for an additional fee of $10 each.
Welp, that's your data limit shot for a couple of months.
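
For scale, here's a rough back-of-the-envelope sketch of what that cap means for a single 100 TB backup, assuming the quoted 1.2 TB allowance and $10 per 50 GB overage block (decimal units, illustrative numbers only):

# Rough overage math for the quoted cap: 1.2 TB/month included,
# then $10 per extra 50 GB block. Purely illustrative.
CAP_TB = 1.2
OVERAGE_BLOCK_GB = 50
OVERAGE_PRICE_USD = 10

backup_tb = 100  # one "minimum size" 100 TB drive from the OP's scenario

# Option A: stay under the cap and trickle the backup out.
months_under_cap = backup_tb / CAP_TB
print(f"Under the cap: ~{months_under_cap:.0f} months to move 100 TB")

# Option B: push it all in one month and eat the overage fees.
overage_gb = (backup_tb - CAP_TB) * 1000
blocks = -(-overage_gb // OVERAGE_BLOCK_GB)  # ceiling division
print(f"In one month: ~${blocks * OVERAGE_PRICE_USD:,.0f} in overage fees")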

Backups and conventional filesystems will be dead by then; you'll use storage pooling with something triple-redundant "in the cloud" as warm disks.
You'd have a local cache of about 100TB, assuming a 100TB SSD occupies the same niche a 1TB SSD does today.

I get 2TB per month in 3rd world land here. $10 per month.

A large video game, the equivalent of today's trend of 100GB games, would be about 10TB.
That'll be a lot fewer games than currently hit 100GB, though; most games then will still be around 100GB.
I suspect anything properly above 100GB with modern lossless compression will be AI-enhanced games.
Applications genuinely haven't grown larger; it's just that developers have learned bad habits about frameworks/engines from webdev and gaming.

We're just gonna see bigger average file sizes

WRONG. When you write to disk it's a synchronous operation; corruption is FAR more likely to occur over the network than during the few tens of µs it takes to flush to disk, and you're writing two copies, or one copy plus N parity blocks, all of which can verify that the original content is correct. This is because ZFS hooks the malloc symbol and friends: the data is checksummed at the boundary between NFS/whatever and the file system driver's DMA operation, and the data is only copied once, from system RAM to the HDD cache.
This also means that on reading it back, if there is corruption, ZFS will reconstruct the file on the fly, so you would never see OP's pic from a properly configured ZFS file system. ECC simply doesn't come into it. The only possible way you would see this is if a file sat in the ARC long enough to get struck by cosmic rays, and even then it's only temporary: dropping the file from the cache gets you your original file back. And before you start spouting bullshit, ZFS NEVER uses partial/consolidated block regions from the ARC to rewrite sectors; they are reconstructed from the SLOG and are again synchronous block operations.
You ECC shills should get the rope. Imagine avoiding a perfectly good fs because some fastman on Any Forums drank the MS technology consultant Kool-Aid.

Attached: DpQ9YJl.png (700x700, 20.78K)
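
Stripped of the ZFS specifics, the checksum-and-reconstruct idea in the post above is easy to sketch. This is a toy model, not ZFS code: every block is stored with a SHA-256 checksum plus a mirror copy, a read verifies the checksum, and a bad copy gets silently repaired from the good one.

import hashlib

def checksum(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

class MirroredStore:
    """Toy model of checksummed, mirrored block storage (not real ZFS)."""

    def __init__(self):
        self.copies = [{}, {}]   # two independent "disks"
        self.sums = {}           # block id -> expected checksum

    def write(self, block_id: int, data: bytes) -> None:
        self.sums[block_id] = checksum(data)
        for disk in self.copies:
            disk[block_id] = data

    def read(self, block_id: int) -> bytes:
        expected = self.sums[block_id]
        for disk in self.copies:
            data = disk[block_id]
            if checksum(data) == expected:
                # Self-heal: rewrite any copy that fails verification.
                for other in self.copies:
                    if checksum(other[block_id]) != expected:
                        other[block_id] = data
                return data
        raise IOError("all copies corrupt: cannot reconstruct block")

store = MirroredStore()
store.write(0, b"OP's precious file")
store.copies[0][0] = b"OP's precious fi\x00e"  # simulate bit rot on one disk
assert store.read(0) == b"OP's precious file"   # read repairs the bad copy silently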

>he doesn't transmit his data over the network with a checksum
ngmi
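
For anyone who actually wants to do that, the minimal version is just hashing on both ends and comparing. A sketch, assuming a placeholder file name (backup.tar) and whatever transport you already use:

import hashlib
from pathlib import Path

def file_sha256(path: Path, chunk_size: int = 1 << 20) -> str:
    """Stream a file through SHA-256 so huge files don't need to fit in RAM."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        while chunk := f.read(chunk_size):
            h.update(chunk)
    return h.hexdigest()

# Sender publishes the digest alongside the file; receiver checks after download.
# "backup.tar" is a placeholder, use whatever you actually transferred.
sent_digest = file_sha256(Path("backup.tar"))
received_digest = file_sha256(Path("downloaded/backup.tar"))
if received_digest != sent_digest:
    raise ValueError("transfer corrupted: checksums do not match")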

> 100 TB
So gaymes will increase their size with 8K pre-rendered videos and other stupid bloat to expand to several tens of TB, Windows will become bloated enough to occupy 16 TB, and Linux will become bloated enough to occupy some 10 GB.

The average photo taken with a phone will be 500MB, and songs will be 3GB+ each. It'll all be shit. Photos from DSLRs compressed as JPEG already look good enough, songs sound good enough, and 4K movies are already crisp enough; we don't need to waste bandwidth on larger and larger files for no benefit.
You could stop growing HDD/SSD capacity today and I'd be fine. Make drives last forever and make them faster; that's enough massive data expansion.

>filesystems
Imagine writing this zoomer crap.
>Backups ... will be dead by then,
YIKES retard YIKES.

>use storage pooling
And this is different from file systems how?

Attached: file.jpg (600x514, 36.09K)

Sir, America is a 4th world country.

The weather this morning is sunny yet with fire.

Attached: file.webm (320x400, 2.92M)

>Linux will become bloated enough to occupy some 10 GB.
Sir, we need to support all printers, especially the ones made in the 1970s!

Tell me, wizard man: when I installed Ubuntu I had to choose between LVM and ZFS. Which one of them is superior for a home computer with multiple drives (no RAID)?