Making sure restic backups are right
from namelivia@lemmy.world to selfhosted@lemmy.world on 25 Mar 23:33
https://lemmy.world/post/27394734
I am quite worried about losing information and not being able to recover it from the backups, so I am trying to nail down the best automated way to make sure the backups are good.
Restic comes with a check command that, according to the documentation here, has these two “levels”:
- Structural consistency and integrity, e.g. snapshots, trees and pack files (default)
- Integrity of the actual data that you backed up
In plain words, I understand this as: The data you uploaded to the repository is still that data.
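If I'm reading the docs right, the second level is what the --read-data flag enables, so the two checks would look something like this (and --read-data-subset should allow spot-checking a fraction instead of downloading everything):

restic check                          # structural consistency only (default)
restic check --read-data              # also re-downloads and verifies all pack data
restic check --read-data-subset=10%   # same, but on a random 10% of the packs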
Now my question is, do you think this is enough to trust the backups are right?
I was thinking about restoring the backup to a temporary location and running diff on random files to check that they match the source, but I don’t know if this is redundant now.
How do you make sure you can trust your backups?
restic restore --dry-run
@Xanza’s suggestion is a good one. For me, it’s sufficient to FUSE-mount the backup and check a few files. It’s not comprehensive, but if a few files I know changed look good, I figure they all probably are.
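For example (repo path and file names are just illustrative):

mkdir -p /tmp/restic-mnt
restic -r /path/to/repo mount /tmp/restic-mnt &
# snapshots appear under the mountpoint, with a "latest" symlink
diff /tmp/restic-mnt/snapshots/latest/home/me/notes.txt ~/notes.txt
fusermount -u /tmp/restic-mnt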
Depends on what you’re backing up. Is it configs for applications, images, video, etc? If it’s application configs, you can set up those applications in a virtual machine and have a process run that starts the machine, restores the configs, and makes sure the applications start or whatever other tests you want. There are applications for doing that.
If it’s images or videos, you can create a script to randomly pick a few, restore them, and check the integrity of the files, along the lines of the sketch below. Usually just a check of the file header (the first few bytes of the file) will tell you if it’s an image or video type of file, and maybe a check on the file size to make sure it’s not unreasonably small, like a video that’s only 100 bytes or something.
All this seems like overkill though in most scenarios.
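A rough sketch of that script (the restore path is made up, and this assumes GNU file, stat, and shuf):

#!/usr/bin/env bash
# Restore the latest snapshot to a scratch dir, then spot-check random files.
restic restore latest --target /tmp/restic-verify

find /tmp/restic-verify -type f | shuf -n 5 | while read -r f; do
  # file reads the header bytes; stat reports the size in bytes
  echo "$f: $(file -b --mime-type "$f"), $(stat -c %s "$f") bytes"
done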
Déjà Dup has auto-validation too. But besides “backup” I think everyone suggests using ZFS, which auto-heals bit rot. And don’t trust unplugged SSDs; they can suffer bit rot quickly if stored in a hot location.
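Worth noting that ZFS only heals what it actually reads, so a periodic scrub is the usual complement (pool name is an example):

zpool scrub tank
zpool status tank   # shows scrub progress and any repaired or unrecoverable errors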
Trying to actually restore is the best way to ensure the backup works. But it’s annoying so I never do it.
I usually trust restic to do its job. Validating that files are there and readable can be done with restic mount, and you’ve mentioned restic check. The best way to ensure your data is safe is to do a second backup with another tool. And keep your keys safe and accessible; a remote backup is of no use if the keys burned down with the house.
That isn't as useful as you would think. If your computer fails, there are high odds you will restore to a fresh install of a newer OS and newer software/service versions, which means you really want/need to also test data/config migration.
OTOH, if you have backups, odds are the data is there even if you never tested them. Testing that you can restore is mostly about whether you have everything backed up. Your backups can pass all the validation, but if you accidentally configured them to back up only /tmp (or something else worthless) you may as well not have backups. Thus you should test that you can do a full restore, just to make sure the data you want is all there. I generally trust that backup software can restore all the data you pointed it at without problems even if you didn't test it - but I don't trust that you (or I) configured it to back up the right things.
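A cheap way to catch that kind of misconfiguration is to list what a snapshot actually contains and eyeball the paths, e.g.:

restic snapshots
restic ls latest | head -n 50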
I use Borg but every now and then I mount a backup and download a few files to make sure they work correctly.
I’ve so far only had to do this for real with my local ZFS snapshots, after messing up a config file or blowing away the wrong folder. The process to restore is essentially the same, except I would mount the Borg repo instead of a local ZFS snapshot.
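For anyone curious, that mount-and-spot-check flow in Borg looks roughly like this (repo path and archive name are placeholders):

borg mount /path/to/repo::my-archive /mnt/borg
# check a few files, then unmount
borg umount /mnt/borg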