How do you guys keep your files healthy and defend against data corruption...

How do you guys keep your files healthy and defend against data corruption? I'm thinking of using parchive to give redundancy to every file

Attached: 1646946642744.png (1000x1412, 558.08K)
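
For reference, the basic parchive workflow OP is asking about looks roughly like this with par2cmdline (a minimal sketch; the filename is a placeholder):

# create recovery data at 10% redundancy for one file
par2 create -r10 video.mkv.par2 video.mkv
# later, check the file against its recovery set
par2 verify video.mkv.par2
# and rebuild it from the recovery blocks if corruption is found
par2 repair video.mkv.par2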

I don't care

if it rots it rots, everything can be replaced or forgotten.

fpbp

>How do you guys keep your files healthy and defend against data corruption?
my proxmox/truenas zfs vm/file server.

wipe your own ass and suck your own dick anon. I have M-Disc blurays as my bulk backup.

Attached: 1652963105144.jpg (1005x677, 57.48K)

3-2-1

ZFS
btrfs
snapraid

"datarot" is on the same level as faggots who wear static wristbands or retards who still think SSD's wear out after a year.

you are wasting your time.

have any of you actual homosexuals ever seen any evidence of datarot on your arrays?
Autists should be genocided

a zfs pool is the only reliable way at the moment. other solutions exist but they've historically been disasters. the bsd mastery series of books covers them very well. if you're a zoomer there are youtube channels like level1tech that cover zfs. if you have odd drive numbers/sizes then art of server has a youtube series called dark arts of zfs that's fantastic as well.

Attached: 1657177400256.png (647x447, 547.96K)
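
For anyone who wants the short version of the above, a minimal sketch (pool name and device paths are placeholders; assumes OpenZFS is installed):

# two-disk mirror; every block is checksummed on write
zpool create tank mirror /dev/sda /dev/sdb
# a scrub re-reads everything and repairs bad blocks from the good copy
zpool scrub tank
zpool status tank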

nakadashi ruiko-chan

cute tummy

btrfs raid6 which i scrub regularly
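
For reference, the scrub part of that is just (mount point is an example):

# kick off a scrub of the whole filesystem, then check on it
btrfs scrub start /mnt/array
btrfs scrub status /mnt/array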

It just means your data has no value

>why yes, i produce nothing of value, how could you tell?

Seeing the size difference now isn't the reason to use ZFS. ZFS uses lossless compression, while btrfs is 'lossy'. What this means is that for each year the btrfs sits on your hard drive, it will lose roughly 100 bits, assuming you have SATA - it's about 150 bits on IDE, but only 7 bits on SCSI, due to rotational velocidensity. You don't want to know how much worse it is on CD-ROM or other optical media.

I started using btrfs in about 2001, and if I try to open any of the files I copied back then, even the stuff I grabbed at 7 bits, the bits just flip. The redundancy is terrible, the compression...well don't get me started. Some of those files have degraded down to 5 or even 4 bits. ZFS files from the same period still work great, even if they weren't stored correctly, in a cool, dry place. Seriously, stick to ZFS, you may not be able to see the difference now, but in a year or two, you'll be glad you did.

The fact that you larp about muh bitrot proves you will never create anything of value either.

every day i copy all my files to a new folder

I use ZFS, but since I back up my NAS to Backblaze (their cheap unlimited home plan, not B2) I've started doing the same, OP. They use parity on their end, of course; I'm more worried about a tiny file being missed by their crappy backup app and breaking my Borg repo or something.
I haven't really found a good workflow for parity though -- par2 is still the best format, with parpar being the fastest implementation available for Linux.
I do something like:

mkdir ../par2
find . -type f -printf '%P\n' > ../par2/files.txt
parpar -s 25M -p 500M -r 7% -o ../par2/recovery.par2 -i ../par2/files.txt

You can also just run parpar on the entire folder, but I'm still getting around to writing an incremental script that only generates parity for files that don't already have it, since even with parpar things are pretty slow. Adjust the block size up or down from 25M depending on the size of your files, and adjust the redundancy up or down from 7%. The 500M just caps each par2 recovery file at 500 megabytes; otherwise parpar grows the recovery files in powers of two and you end up with some big ones.
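
Until that script exists, a rough per-file version might look something like this (a sketch only: the ../par2 mirror layout, 1M slice size, and bash loop are assumptions, and it only relies on parpar's -s/-r/-o options):

# generate parity only for files that don't already have a recovery set
find . -type f -printf '%P\0' | while IFS= read -r -d '' f; do
    [ -e "../par2/$f.par2" ] && continue
    mkdir -p "../par2/$(dirname "$f")"
    parpar -s 1M -r 7% -o "../par2/$f.par2" "$f"
done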

>How do you guys keep your files healthy
I feed them a lot of veggies and I make them run at least five laps every day around the town. No problems so far!

files, not fillies

I would like to store my data inside Saten.