Nuked all backups

I just nuked all my backups (going from gentoo to windows 10), and it was completely avoidable.
>Open encrypted container to copy backup files to ventoy USB (on hd, to copy later)
>forget to actually copy container to USB
>maintain separate backups on HDD, this usb was backup #2... no prob....

>windows 10 flips drive numbers, installs over backup hdd instead of ssd
>backup #1 gone
>backup #2 gone
that's it bros

>protonmail not linked to phone for security, no recovery
>nothing on android because my security
>all passwords completely unknowable, locked in an offline password manager

I was saved by veracrypt miraculously being able to recover the hdd despite windows dropping an entire installation on top of it. I lost a couple of files and that was it. It could have been much, much worse.

This all could have been avoided with policy.
How do you manage this?
I'm thinking the only way is to literally print this out in a handbook, like I was being forced to do this at work in an emergency, and force myself to abide by it.

Think this was also a problem of not having offline backups.
Either way, a warning, anons: back up your data

Attached: backup.jpg (225x122, 7.58K)

Why the fuck did you install windows with more than one drive connected?
Laziness?

Nice blog, but I only give a shit about the software. How does Restic compare to just putting everything in a Veracrypt volume, which itself is on removable media, and using Rsync to move everything over?

>How do you manage this?
Backing up to networked locations.

>Why the fuck did you install windows with more than one drive connected?
funny you say that, as soon as I realized what I did I disconnected the HDD lol.
it was straight laziness.
All the software in the world won't save you from yourself.

>How does Restic compare just putting everything in a Veracrypt volume which itself is on removable media and using Rsync to move everything over?
Restic is incremental backup, not a straight mirror. I made that the OP pic because I'm also questioning it.

I lost one of my password manager databases because it got corrupted (overwritten with windows).
The veracrypt drive it was inside though was recoverable despite this.
Obviously some file formats are more redundant than others. I wouldn't trust a straight restic backup either.

Do you mirror your incrementals? One fuck up and those 6 years of incremental backups are gone; it's still just one copy, but then how do you keep a fucked database from propagating? Incremental backups of incremental backups?

I need to unfuck this so it never happens again.
I'm thinking restic for incrementals, and then offline straight mirrors once a month/week or something, with a full copy inside of restic and an additional copy of the data (on the same drive) just as regular files (the latest version)
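That plan could be sketched as two shell functions, one per tier. This assumes restic and rsync are installed; the repo and mount paths are hypothetical placeholders.

```shell
#!/bin/sh
# Sketch of the plan above: restic for frequent incrementals,
# plus a periodic plain-file mirror onto an offline drive.
# REPO and MIRROR are hypothetical paths, adjust to taste.
REPO="/mnt/backup/restic-repo"
MIRROR="/mnt/offline/mirror"

incremental() {
  # incremental snapshot of the documents dir into the restic repo
  restic -r "$REPO" backup "$HOME/documents"
}

offline_mirror() {
  # straight copy of the latest files, readable without restic
  rsync -a --delete "$HOME/documents/" "$MIRROR/documents/"
  # and a copy of the restic repo itself, so the history survives too
  rsync -a --delete "$REPO/" "$MIRROR/restic-repo/"
}
```

Keeping a plain-file copy next to the restic repo covers the "database itself is fucked" case: even if the repo won't open, the latest versions are still ordinary files.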

use something like rsync.net for encrypted offsite backups
easy to push there with borg or restic
and they keep snapshots themselves, so if you accidentally do fuck up everything you can still recover
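For rsync.net specifically, restic can talk to it over sftp, so the push might look roughly like this. The username, host, and repo path are hypothetical; the restic flags are real.

```shell
#!/bin/sh
# Sketch: restic repo hosted on rsync.net over sftp.
# The user/host and repo path are hypothetical placeholders.
REPO="sftp:user@user.rsync.net:restic-repo"

first_time_setup() {
  # creates the encrypted repo, prompts for a repo password
  restic -r "$REPO" init
}

push_backup() {
  restic -r "$REPO" backup "$HOME/documents"
  # thin out old snapshots on a tiered schedule
  restic -r "$REPO" forget --keep-daily 7 --keep-weekly 4 \
    --keep-monthly 12 --prune
}
```

The repo is encrypted client-side, so the host never sees plaintext, and rsync.net's own snapshots sit underneath as a second layer.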

>follow link to rsync.net
>see mention of git-annex
>google git-annex
>sounds great, gonna use it
This is why I come back.

i take it though the base strategy should be multiple mirrors of incremental backups; but really what you need is multiple mirrors of incremental backups of incremental backups (backing up the restic db with restic) to safeguard against the database itself being fucked.

>Windows
you get what you fucking deserve

If restic 1 gets fucked, how do you stop propagation (your words) to restic 2 and then the mirrors?

restic backing up restic is probably overkill, the only thing i can think of to mitigate this is to use multiple systems.

copy 1: original
copy 2: restic incremental
copy 3: borg incremental
copy 4: raw snapshot
copy 5: snapshot of copies 1-4

mirror 5 everywhere

We're now in schizo territory, user.

you can take this further though.
ultimately... it's the same argument people have with SSDs
>you'll never actually hit the write capacity!
>samsung wrote a drive for 300 years straight!@#
just make a copy-on-write file system that never deletes anything. Once it's written it's never unwritten; and in fact 3 copies of it are kept at all times.

>rsync into veracrypt container
>rclone veracrypt container to S3
Explain why you need more.

>accidentally all your originals
>automated rsync directories with deleted files into container
>rclone veracrypt container to S3
>all copies gone

>>automated rsync directories
I have the entire rsync process inside one zsh function. I manually backup when I feel like it with one key-stroke.
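Not that anon's function obviously, but a one-keystroke rsync wrapper along those lines could look like this; the source dir and the veracrypt mount point are hypothetical.

```shell
# Sketch of a one-shot backup function like the one described;
# the source and container mount point are hypothetical.
backup() {
  src="$HOME/documents/"
  dst="/mnt/veracrypt1/backup/"   # mounted veracrypt container
  rsync -a --delete --info=stats2 "$src" "$dst"
}
# bind it to a keystroke in zsh, e.g.: bindkey -s '^B' 'backup\n'
```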

manual backup is a crutch though and you always run the chance of not doing it when you should have.

this should be a system that can't fail when automated, either by a bot or by some wageslave. incremental backups could be run on 10 minute intervals
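A crontab entry takes the human out of the loop entirely; a fragment like the following would do the 10-minute cadence plus a weekly mirror (the script path and its arguments are hypothetical):

```
# crontab fragment (edit with `crontab -e`):
# incremental every 10 minutes, full mirror early Sunday morning.
# /usr/local/bin/backup.sh and its subcommands are hypothetical.
*/10 * * * * /usr/local/bin/backup.sh incremental
0 3 * * 0    /usr/local/bin/backup.sh mirror
```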

protip: it's not a backup if you can access it from the same computer you're backing up
like a plugged in external hdd is not a backup, it's only a backup when it's physically inaccessible from that computer

>you always run the chance of not doing it when you should have
Your cron job that runs the exact same command has the same problem, genius. Are you backing up every 30 seconds?

i've actually considered this heavily.
was thinking about backing up vm's / separate computers to a network drive, each computer with its own folder with write access, then the server would incrementally back up to a db somewhere else (not a network share, so it can't be deleted).

instead of pushing to a remote you can have the remote pull instead too. was thinking about doing it that way; but then all the computers are going to have to be on a VPN or otherwise publicly accessible.
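The pull variant can be as simple as the backup host looping over clients with rsync-over-ssh; the client never gets write access to existing backup data. Hostnames and paths here are hypothetical.

```shell
#!/bin/sh
# Sketch: backup server pulls from clients over ssh, so a
# compromised client can't delete or overwrite old backups.
# CLIENTS and the paths are hypothetical placeholders.
CLIENTS="desktop laptop"

pull_all() {
  for host in $CLIENTS; do
    # the client only ever serves reads; nothing on the client
    # can touch what's already under /srv/backups
    rsync -a "backup@$host:/home/user/documents/" \
      "/srv/backups/$host/documents/"
  done
}
```

The trade-off mentioned above still applies: the server has to be able to reach every client, so they need to be on a VPN or otherwise routable.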

>Are you backing up every 30 seconds?
can I?
is there a file system that does this? Once I hit save there should be zero chance of losing data from deleting the file afterward.

like an lvm snapshot but on every single file on every single change.

you can't just loop restic; i'm sure there's a more efficient way for the kernel to notify you that a file has changed.

if you can work out something where you can send backups to a remote machine, but the machine you're on has no way to delete or otherwise damage existing data, that should be fine too, but it has to be implemented carefully
the idea is that you should be prepared for the worst case, like ransomware which overwrites everything as root in any folder, /dev/sd*, /mnt, /media, you name it. would you be fucked?

>can I?
No

>i'm sure there's a more efficient way for the kernel to notify that a file has changed.
This nigga seriously wants webhooks for his filesystem.

>is there a file system that does this?
not him but actually yes, snapshots in btrfs for example are practically free, you can do them as often as you like within reason
like you could set up snapper or something to take a snapshot every minute for an hour (discarding older ones), then every hour for a day, every day for a week, etc., back as far as you like; this tiered format is pretty common
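That tiered retention is exactly what snapper's timeline settings express. A sketch of the relevant values in a snapper config (the config name and the numbers are illustrative; snapper's stock timeline runs hourly and up, so per-minute snapshots would need a custom timer on top):

```
# e.g. in /etc/snapper/configs/home -- numbers illustrative
TIMELINE_CREATE="yes"
TIMELINE_LIMIT_HOURLY="24"
TIMELINE_LIMIT_DAILY="7"
TIMELINE_LIMIT_WEEKLY="4"
TIMELINE_LIMIT_MONTHLY="12"
TIMELINE_LIMIT_YEARLY="0"
```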

idk what a webhook is but you can use inotify to do things based on filesystem events

(free as in they take almost no resources or time to create)

>'practically' free
>as often as you like
>within reason
you're giving me whiplash

is this like ZFS?

well you probably wouldn't feel them every minute, but if you took them every second you might
zfs has snapshots, too, but i'm not as familiar with it, i've used zfs before but that was like, 12 years ago so i'm not going to assume it's the same now

also as for your specific example of backing up when you save a file
while i would not recommend it system-wide, if you have a critical documents folder, then you could absolutely set up an inotify-based script which makes a snapshot every time a file is closed, so you literally could have it so you can save a file and immediately delete it without losing the file
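A sketch of that inotify-based idea, assuming inotify-tools and btrfs-progs are installed, the watched dir is a btrfs subvolume, and both paths are hypothetical:

```shell
#!/bin/sh
# Sketch: take a read-only btrfs snapshot every time a file in
# the watched subvolume is closed after writing.
# Requires inotify-tools and btrfs-progs; paths are hypothetical.
WATCHED="/home/user/documents"    # must be a btrfs subvolume
SNAPDIR="/home/user/.snapshots"

watch_and_snapshot() {
  # -m: keep monitoring, -r: recursive, close_write: file saved
  inotifywait -m -r -e close_write --format '%w%f' "$WATCHED" |
  while read -r changed; do
    stamp=$(date -u +%Y%m%dT%H%M%S)
    # read-only snapshot named after the time of the change
    btrfs subvolume snapshot -r "$WATCHED" "$SNAPDIR/docs-$stamp"
  done
}
```

Paired with a cleanup job that prunes old snapshots, this gives the save-then-delete-safely behavior described above without any polling.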

this is exactly what i need
i forgot btrfs does lvm-type snapshots, only it's at the filesystem level.

i can write something that uses inotify to watch a folder and snapshot it the moment anything changes. you don't really need to store 1s-level snapshots for weeks; you can clean them up regularly.

i don't think restic pans out or scales for this; it really needs to happen at the filesystem / block level

really this needs to sit on one file, the password manager file, which is apparently prone to corruption, or at least doesn't like being overwritten; there's no way to repair it.

one wrong save and it's gone. in this case i had backups and recovered, but i still lost a week's worth of passwords for whatever garbage i made