from CatLikeLemming@lemmy.blahaj.zone to selfhosted@lemmy.world on 10 May 18:27
https://lemmy.blahaj.zone/post/25788600
I’m planning on setting up a NAS/home server (primarily storage, with some Jellyfin and Nextcloud and such mixed in), and since it is primarily for data storage I’d like to follow the 3-2-1 rule of data preservation: 3 copies on 2 different media with 1 offsite. Well, actually I’m more trying to go for a 2-1, with 2 copies and one offsite, but that’s beside the point. Now I’m wondering how to do the offsite backup properly.
My main goal would be to have an automatic system that does full system backups at a reasonable rate (I assume daily would be a bit much considering it’s gonna be a few TB worth of HDDs which aren’t exactly fast, but maybe weekly?) and then have 2-3 of those backups offsite at once as a sort of version control, if possible.
This has two components: the local upload system and the offsite storage provider. First, the local system:
What is good software to encrypt the data before/while it’s uploaded?
While I’d preferably upload the data to a provider I trust, accidents happen, and since they don’t need to access the data, I’d prefer that they not be able to, maliciously or not. So what is a good way to encrypt the data before it leaves my system?
What is a good way to upload the data?
After it has been encrypted, it needs to be sent. Is there any good software that can upload backups automatically on regular intervals? Maybe something that also handles the encryption part on the way?
Then there’s the offsite storage provider. Personally I’d appreciate as many suggestions as possible, as there is of course no one-size-fits-all, so if you’ve got good experiences with any, please do send their names. I’m basically just looking for network-attached drives: I send my data to them, I leave it there and trust it stays there, and in case more drives in my system fail than RAID-Z can handle (so 2), I’d like to be able to get the data off there after I’ve replaced my drives. That’s all I really need from them.
For reference, this is gonna be my first NAS/Server/Anything of this sort. I realize it’s mostly a regular computer and am familiar enough with Linux, so I can handle that basic stuff, but for the things you wouldn’t do with a normal computer I am quite unfamiliar, so if any questions here seem dumb, I apologize. Thank you in advance for any information!
I use Borg Backup. It, and another tool called restic, are meant for creating encrypted backups. Further, they can create backups regularly and only back up differences. This means you could take a daily backup without making new copies of your entire library. They also allow you, as part of compressing and encrypting, to make a backup to a remote machine over SSH. I think you should start with either of those.
One provider that’s built for cloud backups is BorgBase. It can be a location you back up a borg (or restic, I think) repository to. There are others that are made to be easily accessed with these backup tools.
Lastly, I’ll mention that borg handles making a backup, but doesn’t handle the scheduling. Borgmatic is another tool that, given a YAML configuration file, will run the borg commands on a schedule with the defined arguments. You could also use something like systemd timers or cron to run a schedule.
Personally, I use borgbackup configured in NixOS (which makes the systemd units for daily backups), and I back up to a different computer in my house and to BorgBase. I have 3 copies: 1 in the cloud and 2 in my home.
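If it helps to see the shape of it, here’s a minimal sketch of the kind of daily borg job a systemd timer or cron would run for you. The repo URL, source paths, and passphrase file are made-up placeholders:

```python
#!/usr/bin/env python3
"""Minimal daily borg backup sketch. Repo URL and paths are placeholders."""
import os
import subprocess
from datetime import date

REPO = "ssh://borg@backup.example.com/./nas-repo"  # assumed remote repo
SOURCES = ["/srv/data", "/srv/nextcloud"]          # assumed source dirs

env = os.environ.copy()
# Read the passphrase from a root-only file rather than hardcoding it.
env["BORG_PASSCOMMAND"] = "cat /root/.borg-passphrase"

# Create a dated, deduplicated archive; only changed chunks get uploaded.
subprocess.run(
    ["borg", "create", "--stats", "--compression", "zstd",
     f"{REPO}::nas-{date.today().isoformat()}", *SOURCES],
    env=env, check=True,
)

# Thin out old archives so the repo doesn't grow forever.
subprocess.run(
    ["borg", "prune", "--keep-daily", "7", "--keep-weekly", "4",
     "--keep-monthly", "6", REPO],
    env=env, check=True,
)
```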
I rsync a copy of it to a friend’s house every night. It’s straightforward, simple, and free.
I rsync a copy to mom’s
Same — rsync to a Pi 3 with a (single) ZFS drive at a family member’s house. I retain some daily/weekly/monthly snapshots.
I have a (free) VPS with static IPv4 which is how I connect everything.
Both the VPS and the remote site have limited network speed (I think 50Mbps for VPS), so the initial sync was done sneakernet (well…“airplane net”). Nightly rsync is no problem bandwidth-wise, and is mostly just any new videos I’ve uploaded to my local Immich instance.
My ratchet way of doing it is Backblaze. There is a Docker container that lets you run the unlimited personal plan on Linux by emulating a Windows environment. They let you set an encryption key so that they can’t access your data.
I’m sure there are a lot more professional and secure ways to do it, but my way is cheap, easy, and works.
I use Backblaze as well. Got a link to the Docker container? That may save me a few bucks a week and thus keep SWMBO happier
Probably a me problem, but I kept having problems with that container on Unraid; it’s just in the Community Apps ‘store’. The VM seemed to just crash randomly.
I switched over to their B2 storage and just use rclone to an encrypted bucket, and it’s under $5/mo, which I’m good with. The biggest cost is if I let it run too often and it spends a bunch of their compute time listing files to see if it needs to update them.
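For the curious, the whole nightly job is basically one rclone call once you’ve layered a crypt remote over the B2 bucket with `rclone config`. A rough sketch, with the remote and bucket names invented:

```python
#!/usr/bin/env python3
"""Sketch of a nightly rclone sync to an encrypted B2 bucket.

Assumes 'b2-crypt:' is a crypt remote wrapping a B2 bucket,
created beforehand with `rclone config`.
"""
import subprocess

subprocess.run(
    ["rclone", "sync", "/srv/data", "b2-crypt:backup",
     "--fast-list",          # fewer listing API calls = lower B2 listing costs
     "--transfers", "4",
     "--log-file", "/var/log/rclone-backup.log"],
    check=True,
)
```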
hub.docker.com/r/…/backblaze-personal-wine
What’s the container’s name? I was about to get Backblaze and then was frustrated at the cost difference between the desktop personal plan and the one for deploying on my server
hub.docker.com/r/…/backblaze-personal-wine
As others have said, use tools like borg and restic.
Shop around for cloud storage with good pricing for your use-case. Many charge for different usage patterns, like restoring data or uploading.
Check out storj.io, I like their pricing - they charge for downloading/restore (IIRC), and I figure that’s a cost I can live with if I need to restore.
Otherwise I keep 3 local copies of data:
1 is live, and backed up to storj.io
2 is mirrored from 1 every other week
3 is mirrored from 1 every other week, opposite 2
This works for my use-case, where I’m concerned about local failures and mistakes (and don’t trust my local stores enough to use a backup tool), but my data doesn’t change a lot in a week. If I were to lose 1 week of changes, it would be a minor issue. And I’m trusting my cloud backup to be good (I do test it quarterly, and do a single file restore test monthly).
This isn’t an ideal (or even recommended) approach; it just works with the storage I currently have and my level of trust in it.
A huge tape archive in a mountain. It’s pretty standard for geophysical data. I have some (encrypted) personal stuff on a few tapes there.
I have an RPi 4 with an external HDD at my parents’ house, which I connect to via a WireGuard VPN. I mount and decrypt the external HDD, and then it triggers a restic backup to a restic rest-server as append-only.
The whole thing is done via a Python script.
I chose the rest-server because it allows “append only”, so the data can’t easily be deleted from my side of the VPN.
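Roughly, the script looks like this (interface, host, device, and repo names are all stand-ins, and the real one has more error handling):

```python
#!/usr/bin/env python3
"""Rough sketch of the vpn-up -> unlock disk -> restic backup flow.

All names (wg0, pi@10.0.0.2, /dev/sda1, the repo URL) are placeholders.
"""
import os
import subprocess

PI = "pi@10.0.0.2"  # the RPi at the other end of the wireguard tunnel

def run(*cmd):
    subprocess.run(cmd, check=True)

env = os.environ.copy()
env["RESTIC_PASSWORD_FILE"] = "/root/.restic-pass"

run("wg-quick", "up", "wg0")  # bring the tunnel up
try:
    # Unlock and mount the external drive on the remote Pi.
    run("ssh", PI, "sudo cryptsetup open /dev/sda1 backup --key-file /home/pi/.luks-key")
    run("ssh", PI, "sudo mount /dev/mapper/backup /mnt/backup")
    # rest-server on the Pi serves the repo append-only, so this side
    # can add snapshots but never delete them.
    subprocess.run(
        ["restic", "-r", "rest:http://10.0.0.2:8000/nas", "backup", "/srv/data"],
        env=env, check=True,
    )
finally:
    subprocess.run(["ssh", PI, "sudo umount /mnt/backup"])
    subprocess.run(["ssh", PI, "sudo cryptsetup close backup"])
    subprocess.run(["wg-quick", "down", "wg0"])
```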
I use Proxmox PBS for all my backups. The datastore is on my file server at home. I sync the datastore daily to a little NAS at a family member’s house and to a super cheap storage VPS on the other side of the country. I also do a manual sync to an external drive that I keep offline at home.
Any super important documents such as tax records, health related files, backup of the data volume from vaultwarden, or anything related to wills & estates get backed up as well to 2 USB thumb drives that are LUKS encrypted. I keep 1 in my go bag and another is hidden somewhere… Thumb drives get updated once a month, or sooner if anything major changes.
For storing the backups, I use a storage VPS. I got one from HostHatch a few years ago during Black Friday sales, with 10TB space for $10/month. Hetzner have good deals with their storage boxes, too - they offer 5TB space for $13/month if you’re in the USA (you need to add VAT if you’re in Europe).
A good rule of thumb is to never pay more than $5/TB/month, and during Black Friday it’s closer to $2/TB/month. The LowEndTalk forum has the best Black Friday deals.
I use Borgbackup for backups, and Borgmatic to handle scheduling them. Borgbackup is a fantastic piece of software.
Borg has an “append only” mode which lets you configure particular SSH keys to only be able to add data to the backup, not delete it. Even if someone or something (ransomware, malicious users, etc.) gains access to your system and tries to delete the backups, they can’t. Essentially, this is protection against ransomware.
This is a very common issue with other backup solutions - the client has full access to the backup, so malware on the client system could potentially delete all the backups.
I have two backup copies of most things. One copy on my home server and one copy on my storage VPS. If you do keep multiple backups, Borgbackup recommends doing two separate backups rather than doing one and then rsyncing it to another server.
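For reference, the append-only enforcement lives on the server side in authorized_keys: the client’s key is pinned to a forced `borg serve --append-only` command. A sketch of installing such an entry, where the key and paths are examples:

```python
#!/usr/bin/env python3
"""Append a borg client key to authorized_keys, locked to append-only serve.

The public key and paths are placeholders.
"""
from pathlib import Path

client_pubkey = "ssh-ed25519 AAAA... backup@nas"  # the client's public key
# command="..." forces every login with this key into borg serve;
# --append-only means the client can add archives but never delete them.
entry = (
    'command="borg serve --append-only --restrict-to-path /srv/borg/nas",'
    f'restrict {client_pubkey}\n'
)
with Path("/home/borg/.ssh/authorized_keys").open("a") as f:
    f.write(entry)
```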
Rclone to cloud storage (Hetzner in my case). Rclone is easy to configure and offers full encryption, even for the file names.
As the data is only uploaded once, a daily backup uploads only the added or changed files.
Just as a side note: make sure you can retrieve your data even in case your main system fails. Make sure you have all the passwords/crypto keys available.
I do the same using rclone, partly encrypted, partly just a plain dump.
I use bash scripts in cron.daily to start this
Next to paying for cloud storage, I know people who store an external HDD at their parents' or with friends. I don't do the whole backup thing for all the recorded TV shows and ripped Blu-rays... if my house burns down, they're gone. But that makes the amount of data a bit more manageable, and I can replace those.

I currently don't have a good strategy. My data is somewhat scattered between my laptop, the NAS, an external HDD which is in a different room but not off-site, and one cheap virtual server I pay for, and critical things like the password manager are synced to the phone as well. The main thing I'm worried about is one of the mobile devices getting stolen, so I focus on having those backed up to the NAS or synced to Nextcloud. But I should work on a solid strategy in case something happens to the NAS.
I don't think the software is a big issue. We've got several good backup tools which can do incremental or full backups, schedules, encryption, and whatever else someone might need for backups.
Yeah, me too; the photos and videos I’ve recorded are the only things I’m bothered about. Backing up all my arrrrr booty off-site is redundant, since I’ve already shared it to a 2.1 ratio and can hopefully download it again from people with larger storage than my family member has.
It’s how I handle backing up those photos/videos, though. I bought them a 512GB card and shoved that in a GL.iNet AP they have down there, which I sync my DCIM folder to (the app was removed from the Play Store since it didn’t need updating, but Google’s stupid policies meant it went RIP…), and I also back that up to the old Synology NAS I handed down to them. I suppose I could use Syncthing, but I like that old app; the adage “if it ain’t broke, don’t fix it” applies.
Along with them having Tailscale on a Pi 4 (on a UPS; it’s also their/my backup TVHeadend server) and their little N100 media box, I don’t even bother them with my meager photo collection, and it works well.
It really depends on what your data is and how hard it would be to recreate. I keep a spare HD in a $40/year bank box & rotate it every 3 months. Most of the content is media - pictures, movies, music. Financial records would be annoying to recreate, but if there’s a big enough disaster to force me to go to the off-site backups, I think that’ll be the least of my troubles. Some data logging has a replica database on a VPS.
My upload speed is terrible, so I don’t want to put a media library in the cloud. If I did any important daily content creation, I’d probably keep that mirrored offsite with rsync, but I feel like the spirit of an offsite backup is offline and asynchronous, so things like ransomware don’t destroy your backups, too.
Sure. With data that might be skipped, I meant something like the Jellyfin server, which probably consists of pirated TV and music or movie rips. Those tend to be huge in size and easy to recreate. With personal content, pictures, and videos, there is no chance of getting them back. And I'd argue with a lot of documents and data it's not even worth the hassle to decide what might be stored somewhere else, maybe in paper form... just back them up; storage is cheap and most people don't generate gigabytes worth of content each month. For large data that doesn't change a lot, something like one or two rotated external disks might do it. And for smaller documents and current projects which see a lot of changes, we have things like Nextcloud, Syncthing, and an $80-a-year VPS or other cloud storage solutions.
Most of my work is with Macs, and even one server is running macOS, so for those who don’t know how it works ‘over there’: one runs Time Machine, a versioning system that keeps hourlies for a day, dailies for a week, then just weeklies after that. It accommodates multiple disks, so I have a networked drive that services all the Macs, and each computer also has a USB drive it connects to. Each drive usually services a couple of computers.
Backups happen automatically without interruption or drama.
I just rotate the USB drives out of the building into a storage unit once a month or so and bring the offsite drives back into circulation. Time Machine nags you about missing backup drives if it’s been too long, which is great.
It’s not perfect, but it’s very reliable, and I wish everyone had access to a similar system. It’s very easy; Apple got this one thing right.
I use LUKS and back up to a USB drive that I have at home. I rsync those backups to my work once a week. Not everyone can back up to their office, but as others have said, backing up to a friend's or family member's house is doable. The nice thing about rsync is that you can limit the bandwidth, so that even though it takes longer, it doesn't saturate their internet connection.
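For anyone wondering, the bandwidth cap is just a flag on the job. A sketch of the weekly push, with the host and paths made up:

```python
#!/usr/bin/env python3
"""Weekly rsync of local backup images to an offsite box, rate-limited.

Host and paths are placeholders.
"""
import subprocess

subprocess.run(
    ["rsync", "-a", "--delete",
     "--bwlimit=5000",          # cap at ~5 MB/s so their uplink stays usable
     "/mnt/backups/",           # trailing slash: sync contents, not the dir
     "offsite:/srv/backups/"],
    check=True,
)
```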
I use rsync.net
It’s not the lowest price, but I like the flexibility of access.
For instance, I was able to run rclone on their servers to do a direct copy from OneDrive to rsync.net, 400GB without it having to go through my connection.
I can mount backups with sshfs if I want to, including the daily zfs snapshots.
External drives that I keep in my office at work. Also cloud storage.
Same, I just throw it in a desk at work. It’s encrypted anyway.
Same here. LUKS-encrypted drive in my work locker.
Syncthing to a Pi at my parents’ place.
Low-power server in a friend’s basement running Syncthing
But doesn’t that sync in real-time? Making it not a true backup?
Agreed. I have it configured on a delay and with multiple file versions. I also have another pi running rsnapshot (rsync tool).
How’d you do that?
Edit the share, enable file versioning, choose which flavor.
For the delay, I just reduce how often it checks for new files instead of instantaneously.
Have it sync the backup files from the “2” part. You can then copy them out of the Syncthing folder to a local one with a cron job to rotate them. That way you get the sync offsite, and you can keep copies out of the rotation as long as you want.
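A sketch of that copy-and-rotate cron step, with invented folder names (and assuming the backups are single files, not directories):

```python
#!/usr/bin/env python3
"""Copy the newest synced backup out of the syncthing folder and rotate.

Keeps the last N copies outside syncthing's control so a bad sync
can't touch them. All paths are placeholders.
"""
import shutil
from pathlib import Path

SYNC_DIR = Path("/srv/syncthing/backups")
ARCHIVE_DIR = Path("/srv/backup-archive")
KEEP = 3  # retention count

# Grab the most recently modified synced backup file.
newest = max(SYNC_DIR.iterdir(), key=lambda p: p.stat().st_mtime)
dest = ARCHIVE_DIR / newest.name
if not dest.exists():
    shutil.copy2(newest, dest)

# Drop the oldest archived copies beyond the retention count.
archived = sorted(ARCHIVE_DIR.iterdir(), key=lambda p: p.stat().st_mtime)
for old in archived[:-KEEP]:
    old.unlink()
```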
In theory you could set up a cron job with Docker Compose to fire up a container, sync, and shut down once all endpoint jobs are synced.
As it seemingly has an API, it should be possible.
You could use scheduled snapshots to provide the backup portion.
Huh, that’s a pretty good idea. I already have a Raspberry Pi set up at home, and it wouldn’t be hard to duplicate in another location.
A pi with multiple terabytes of storage?
My most critical data is only ~2-3TB, including backups of all my documents and family photos, so I have a 4TB SSD attached, which the Pi also boots from. I have ~40TB of other Linux ISOs that have 2-drive redundancy, but no backups. If I lose those, I can always redownload.
Using a mesh VPN like Tailscale or NetBird would be another option as well. It would allow you to use proper backup software like restic or whatever, and with Tailscale on both devices, restic would be able to find the Pi even if the other person moved to a new house. (A Pi with Ethernet would be preferable, so all they have to do is plug it into their new network and everything would be good; if it was a Pi Zero, then someone would have to update the WiFi password.)
Funny you mention it. This is exactly what I do. I don’t use the relay servers for Syncthing, just my tailnet for device-to-device networking.
You can rent a VPS and set up encrypted backups
There are some really good options in this thread; just remember that whatever you pick, unless you test your backups, they are as good as nonexistent.
Is there some good automated way of doing that? What would it look like, something that compares hashes?
That very much depends on your backup tool of choice; that’s also the point. How do you recover your backup?
Start with a manual recovery: pull down a backup and unpack it, and check that important files open. Write down all the steps you did, then work out how to automate them.
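On the hash question above: yes, that’s a reasonable smoke test once you’ve done the manual run. A sketch with restic, where the repo URL and sample path are placeholders (and the comparison is only meaningful for files that haven’t changed since the snapshot):

```python
#!/usr/bin/env python3
"""Restore-test sketch: pull a file back from the latest restic snapshot
and compare its hash against the live copy. Repo and paths are placeholders.
"""
import hashlib
import subprocess
import tempfile
from pathlib import Path

REPO = "rest:http://10.0.0.2:8000/nas"
SAMPLE = "/srv/data/documents/important.pdf"

def sha256(path):
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()

with tempfile.TemporaryDirectory() as tmp:
    # Restore only the sample file from the newest snapshot.
    subprocess.run(
        ["restic", "-r", REPO, "restore", "latest",
         "--target", tmp, "--include", SAMPLE],
        check=True,
    )
    # restic recreates the absolute path under the target directory.
    restored = Path(tmp) / SAMPLE.lstrip("/")
    assert sha256(restored) == sha256(SAMPLE), "restore does not match source!"
    print("restore test passed")
```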
I don’t trust automation for restoring from backup, so I keep the restoration process extremely simple:
Do that once/year in a VM or something and you should be good. If things are simple enough, it shouldn’t take long (well under an hour).
How does one realistically test their backups, if they are doing the 3-2-1 backup plan?
I validate (or whatever the term is) my backups once a month, and trust that it means something 😰
Deploy the backup (or some part of it) to a test system. If it can boot or you can get the files back, they work.
For context, I have a single Synology NAS, so recovering and testing the entire backup set would not be practical in my case.
I have been able to test single files or entire folders and they work fine, but obviously I’d have no way of testing the entire backup set due to the above consideration. It is my understanding that the verify feature that Synology uses is to ensure that there’s no bit rot and that the file integrity is intact. My hope is that because of how many isolated backups I do keep, the chance of not being able to recover is slim to none.
Until you test a backup it’s not complete; how you test it is up to you.
If you upload to a remote location, pull it down and unpack it. Check that you can open important files; if you can’t, then the backup is not worth the dick space
Heh
Not dumb. I say the same, but I have a severe inferiority complex and imposter syndrome. Most artists do.
1 local backup, 1 cloud backup, 1 offsite backup to my tiny house at the lake.
I use Syncthing.
+1 syncthing
Weird downvotes?
Right now I sneakernet it. I stash a LUKS-encrypted drive in my locker at work and bring it home once a week or so to update the backup.
At some point I’m going to set up an RPi at a friend’s house, but that’s down the road a bit.
My automated workflow is to package up backup sources into tars (uncompressed), encrypt them with gpg, then ship the tar.gpg off to Backblaze B2 and S3 with rclone. I don’t trust cloud providers, so I use two just in case. I’ve never really needed full system backups going off-site, rather just the things I’d be severely hurting for if my home exploded.
But to your main questions, I like gpg because you have good options for encrypting things safely within bash/ash/sh scripting, and the encryption itself is considered strong.
And I really like rclone because it covers the main cloud providers and wrangles everything down to an rsync-like experience, which is also pretty tidy for shell scripting.
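To make that concrete, the same pipeline is only a few lines if you drive it from Python instead of sh. The recipient, source dir, and remote names here are placeholders:

```python
#!/usr/bin/env python3
"""tar -> gpg -> rclone sketch of the pipeline described above.

The GPG recipient, source dir, and rclone remotes are placeholders.
"""
import subprocess
from datetime import date

name = f"documents-{date.today().isoformat()}.tar.gpg"

# Stream an uncompressed tar straight into gpg; no plaintext hits the disk.
tar = subprocess.Popen(["tar", "-cf", "-", "/srv/documents"],
                       stdout=subprocess.PIPE)
subprocess.run(["gpg", "--encrypt", "--recipient", "backup@example.com",
                "--output", f"/tmp/{name}"],
               stdin=tar.stdout, check=True)
tar.wait()

# Ship the encrypted blob to two independent providers, just in case.
for remote in ("b2:my-backups", "s3:my-backups"):
    subprocess.run(["rclone", "copy", f"/tmp/{name}", remote], check=True)
```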
I spend my days working on a MacBook, and have several old external USB drives duplicating my important files, live, off my server (Unraid) via Resilio to my MacBook (yes I know syncthing exists, but Resilio is easier). My off-site backups are to a Hetzner Storage Box using Duplicacy which is amazing and supports encrypted snapshots (a cheap GUI alternative to Borgbackup).
So for me, Resilio and Duplicacy.
I used to say restic and b2; lately, the b2 part has become more iffy, because of scuttlebutt, but for now it’s still my offsite and will remain so until and unless the situation resolves unfavorably.
Restic is the core. It supports multiple cloud providers, making configuration and use trivial. It encrypts before sending, so the destination never has access to unencrypted blobs. It does incremental backups, and supports FUSE vfs mounting of backups, making accessing historical versions of individual files extremely easy. It’s OSS, and a single binary executable; IMHO it’s at the top of its class, commercial or OSS.
B2 has been very good to me, and is a clear winner for this use case: writes and space are pennies a month, and it only gets more expensive if you’re doing a lot of reads. The UI is straightforward and easy to use, and the API is good; if it weren’t for their recent legal and financial drama, I’d still unreservedly recommend them. As it is, you’d have to evaluate it yourself.
I’m running the same setup, restic -> b2. Offsite I have a daily rclone job to pull (the diffs) from b2. Works perfectly, cost is < 1€ per month.
NAS at the parents’ house. Restic nightly job, with some plumbing scripts to automate it sensibly.
This is mine exactly. Mine sends to Backblaze B2
Rclone to Dropbox (it was cheapest for 2TB at the time)
Put brand new drive into system, begin clone
When clone is done, pull drive out and place in a cardboard box
Take that box to my off-site storage (neighbor’s house) and bury it
(In truth I couldn’t afford to get to the 1 off-site in time and have potentially tragically lost almost 4TB of data that, while replaceable, will take time because I don’t fucking remember what I even had lol. Gonna take the drives to a specialist tho cuz I think the platters are fine and it’s the actual read mechanism that’s busted)
For this I use a Python script run via cron to output an HTML directory file that lists all the folder contents and pushes it to my cloud storage. This way, if I ever have a critical failure of replaceable media, I can just refer to my latest directory file.
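Mine is only a couple dozen lines; the gist of it looks like this (paths and the rclone remote are made up):

```python
#!/usr/bin/env python3
"""Generate an HTML listing of the media library and push it to cloud storage.

The library path, output path, and rclone remote are placeholders.
"""
import html
import subprocess
from pathlib import Path

ROOT = Path("/srv/media")
OUT = Path("/tmp/media-index.html")

lines = ["<html><body><h1>Media index</h1><ul>"]
for path in sorted(ROOT.rglob("*")):
    if path.is_file():
        lines.append(f"<li>{html.escape(str(path.relative_to(ROOT)))}</li>")
lines.append("</ul></body></html>")
OUT.write_text("\n".join(lines))

# Push the index (not the media itself) offsite.
subprocess.run(["rclone", "copy", str(OUT), "cloud:indexes"], check=True)
```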
My dad and I each have a Synology NAS. We do a Hyper Backup sync from one to the other: I back up to his and vice versa. I also use Syncthing to back up my Plex media so he can mount it locally on his Plex server.
IDrive has built-in local encryption you can enable.
I also am liking it for cloud backup storage.
I just rsync it once in a while to a home server running in my dad’s house. I want it done manually in a “pull” direction rather than a “push” in case I ever get hit with ransomware.
I’m just skipping that. How am I going to back up 48TB to an off-site backup?!
Get a tiny ITX box with a couple 20TB refurbished HDDs, stick it at a friend’s house
In theory. But I already spent my pension on those 64TB of drives (RAID-Z2) xD. Getting off-site backup for all of that feels like such a waste of money (until you regret it). I know it isn’t a backup, but I’m praying the RAID-Z2 will be enough protection.
Understanding the risks is half the battle, but we can only do what we can do.
Do you have to back up everything off site?
Maybe there are just a few critical files you need a disaster recovery plan for, and the rest is just covered by your raidz
I do backup like 1TB off-site. But it would still be a major blow if I lost the rest of it. I just try to live with that risk I’m fully aware exists.
Just a friendly reminder that RAID is not a backup…
Just consider if something accidentally overwrites some or all of your files. This is a perfectly legitimate action, and the checksums will happily match that new data, but your file(s) are gone…
That’s a great use-case for snapshots! RAID still isn’t a backup, but it can be quite robust.
I do weekly ZFS snapshots though, and I’m very diligent about my SMART tests and scrubs. I also have a UPS and a lot of power surge protection. And ECC RAM. It’s as safe as it gets. But having a backup would definitely be better, you’re right. I just can’t afford it for this much storage.
The cost of storage is always more than double the sticker price. The hidden fee is that you need a second and maybe a third one, and a system to put it all in. Most of our operational lab cost is backups. I can’t replace the data if it’s lost.
Only back up the essentials like photos and documents or rare media.
Don’t care about stuff like Avengers 4K that can easily be reacquired
A “poor man’s” backup (just an index of what you have) can be useful for things like this, i.e. movie/TV/music collections, and will only be a few MB instead of TB.
If things go south, at least you can rebuild your collection in time. Obviously if there are some rare files that were hard to get, then you can back those up properly, but even then it will probably still be a small backup
Yup. My “essential” backups are well under 1TB, the rest can be reacquired.
This is what I’m currently doing, I use backblaze b2 for basically everything that’s not movies/shows/music/roms, along with backing up my docker stacks etc to the same external drive my media’s currently on.
I’m looking at a few good steps to upgrade this but nothing excessive:
Not the cleanest setup but it’ll do the job. The media backup is definitely gonna be more of a 2-1-Pray system LMAO but at least the important things will be regularly taken care of
I don’t have Avengers 4K. It’s all just Linux ISOs.
You ought to only be 3-2-1ing your irreplaceable/essential files like personal photos, videos, and documents. Unless you’re a huge photography guy, I can’t believe that takes up 48TB
Hetzner Storagebox
Just recently moved from an S3 cloud provider to a Storage Box. Prices are OK, and sub-accounts help keep things tidy.
I use Syncthing to push data offsite encrypted and with staggered versioning, to a tiny ITX box I run at a family member’s house
The best part about Syncthing is that you can set the target to untrusted. The data all gets encrypted and is not accessible whatsoever on the other side.
This is exactly what I’m about to do (later this week when I visit their house)
I’ve been using syncthing for years, but any tips for the encryption?
I was going to use SendOnly at my end to ensure that the data at the other end is an exact mirror, but in that case, how would the restore work if it’s all encrypted?
www.youtube.com/watch?v=hT373XZHNvk
You add another node to the untrusted node (or the reinstalled current node) and let it sync back.
Alternatively, there’s a decrypt command: you could go to the untrusted node, copy the data off to a disk, and decrypt it with the tool, but I think that f’s up the structure.
Ah, ok if Tom Lawrence has made a video then I know it’ll work!
Thanks
I rent a cheap 4.5TB, 60€/year VPS. Encryption and synchronization are handled by Proxmox, as I have Proxmox Backup Server installed on it. I’m planning to change this and just buy a mini PC with a cheap HDD that I’ll place at my family’s place to save money
I use Asustor NAS boxes, one at my house in the southeast US, one at my sister’s house in the northeast US. The Asustor OS takes care of the backup every night. It’s not cheap, but that’s what it takes if you want it done right.
Both run 4 drives in RAID 5. Pictures back up to the HDDs and to a RAID 1 set of NVMe drives in the NAS. The rest is just movies and TV shows for Plex, so I don’t really care about those. The pictures are the main thing. I feel like that’s as safe as I can be.
Veeam Backup & Replication with an NFR license for me.
My personal setup:
First backup: just a backup to a virtual drive stored on my NAS
Offsite backup: essentially an export of what is available, creating a full or incremental backup on an external USB drive.
I have two of those. One I keep at home in case my NAS explodes. The second is at my work place.
The off-site only contains my most important pieces of data.
As for frequency: As often as I remember to make one as it requires manual interaction.
Our clients have (depending on their size) the following setups:
2 or more endpoints (excluding exceptions):
Veeam BR Server
First backup to NAS
Second backup (a copy of the first) to USB drives (minimum of 3: drive 1 connected, drive 2 stored somewhere in the business, drive 3 at home/off-site; daily rotation)
Optionally, an S3-compatible cloud backup.
Bigger customers may have mirroring, but we see those cases very rarely.
Edit: The backups can be encrypted at all steps (first backup or backup copies)
Edit 2: Veeam B/R is not (F)OSS, but the free Community Edition is very reasonable. It has support for Windows, macOS, and Linux (some distros, only x64/x86). The NFR license can be acquired relatively easily (from here; they didn’t check me in any way).
I like the software as it’s very powerful and versatile, geared towards both Fortune 500 shops and small deployments.
And the next version will see a full Linux version, both as a single install and as a virtual appliance.
They also have a setup for hardened repositories.
I don’t 🙃
I bring one of my backup disks to my in-laws. I go there regularly, so it’s a matter of swapping them when I’m there.
I tend to just store all my backups off-site in multiple geographically distant locations, seems to work well
So, you’re geocaching a bunch of USB drives then?
I’ve got a box of 800 flash drives I’ve picked up while hiking in the garage; I was wondering whose they were.
(/s, I can’t afford a garage)
I use Linux, so encryption is easy with LUKS, and FreeFileSync to drives that rotate to a safe deposit box at the bank for a catastrophic event, such as a house fire. Usually anything from the last few months is still on my mobile devices.
I have a job, and the office is 35km away. I get a locker in my office.
I have two backup drives, and every month or so, I will rotate them by taking one into the office and bringing the other home. I do this immediately after running a backup.
The drives are LUKS encrypted btrfs. Btrfs allows snapshots and compression. LUKS enables me to securely password protect the drive. My backup job is just a btrfs snapshot followed by an rsync command.
I don’t trust cloud backups. There was an event at work where Google Cloud accidentally deleted an entire company just as I was about to start a project there.
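The whole backup job fits in a handful of lines. Here’s a sketch with placeholder paths (the exact ordering of rsync vs. snapshot is a matter of taste; this version syncs first, then freezes the result):

```python
#!/usr/bin/env python3
"""Sketch of the snapshot-plus-rsync backup job described above.

Mount points and subvolume names are placeholders.
"""
import subprocess
from datetime import date

BACKUP_MOUNT = "/mnt/backupdrive"        # the LUKS-unlocked btrfs drive
snap = f"{BACKUP_MOUNT}/snapshots/{date.today().isoformat()}"

# Sync live data onto the drive's working subvolume first...
subprocess.run(["rsync", "-a", "--delete", "/home/", f"{BACKUP_MOUNT}/current/"],
               check=True)
# ...then freeze the result as a read-only snapshot for versioning.
subprocess.run(["btrfs", "subvolume", "snapshot", "-r",
                f"{BACKUP_MOUNT}/current", snap], check=True)
```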
I just use restic. I’m pretty sure it uses checksums to verify data on the backup target, so it doesn’t need to copy all of the data there.
It’s not all my data, but I use Backblaze for offsite backup. It’s one of the reasons I can’t drop Windows. I don’t have anywhere I travel often enough to do a physical drop-off, and when I tried setting up a file server at my parents’ place, they would break shit by fucking with their router every time they had an internet outage, or by moving it around (despite repeatedly being told to call me first).
Same - can sync snapshots from TrueNAS to Backblaze.
If you want to get real fancy you could stash an N40L cube server at your mom’s house where she will never find it and VPN back to your local network and replicate snapshots to it
I can’t stand the fact that they don’t support Linux
Object storage is something I prefer over their app anyway. Restic (or rustic) does the backup client-side; B2 or any other storage just holds the data. This way you also have no vendor lock-in.
I have a storage VPS and use Borg backup with Borgmatic. In my case, I have multiple systems in different repos on the remote. There are several providers, such as hetzner, borgbase, and rsync.net that offer borg storage, in the event you don’t want to manage the server yourself.
Rsync to a Hetzner storage box. I don’t do ALL my data, just the Nextcloud data. The rest is…Linux ISOs… so I can redownload at my convenience.
I had also been contemplating this for a while. The solution I implemented recently is:
The system itself is an RPi on NixOS. The system can be reproduced from the NixOS configuration, which is stored on GitHub. Since I can reproduce the SD card image (and full system) from the configuration, I opted not to do any backup of the SD card/system itself.
I’ve also opted not to use RAID, as I can replace/add an RPi without too much hassle.
The real backups for me are the photos. Those are stored on M.2 storage. A second (similar) RPi is placed at my dad’s place. The RPis run Tailscale and Syncthing. Syncthing syncs using staggered versioning (storing 1 version for the last day/week/year), and the RPi at my dad’s is untrusted, so the backup files are sent and stored encrypted there.
This setup hasn’t run very long yet, so I won’t recommend it, but it seems to check quite a lot of boxes for me. Maybe it gives some ideas. I’m also interested what alternative solutions others came up with.
I built a near identical server for my parents and just sync my nextcloud folder to theirs using syncthing
I got my parents to get a NAS box and stuck it in their basement. They need to back up their stuff anyway. I put in two 18TB drives (mirrored btrfs RAID 1) from Server Part Deals (peeps have said that site has jacked up their prices; look for alternatives). They only need like 4TB at most. I made a backup Samba share for myself. It’s the cheapest Synology box possible; their software makes a Samba share with a quota.
I then set up a WireGuard connection on an RPi, taped that to the NAS, and I WireGuard into the local network with a shell script, mount the Samba share, and then use restic to back up my data. It works great. Restic is encrypted, I don’t have to pay for storage monthly, their electricity is cheap af, they have backups, I keep tabs on it, everyone wins.
Next step is to go the opposite way for them, but no rush on that goal, I don’t think their basement would get totaled in a fire and I don’t think their house (other than the basement) would get totaled in a flood.
If you don’t have a friend or relative to do a box-at-their-house (peeps might be enticed with reciprocal backups), restic still fits the bill. The destination is encrypted, and it has simple commands to check data for validity.
Rclone crypt is not good enough. Too many issues (path length limits, the password only “obscured” but otherwise right there, file structure preserved even if names are encrypted). On a VPS I use rclone as a pass-through for restic to back up a small amount of data to a Google Drive. Works great. Just don’t fuck with rclone crypt for the major stuff.
Lastly, I do use rclone crypt to upload a copy of the restic binary to the destination, as the crypt means the binary can’t be fucked with, and having the binary there means that’s all you need to recover the data (in addition to the restic password you stored safely!).
What is the concern here?
Amazon AWS Glacier
Edit: I was downvoted for this, but it’s genuinely a more affordable alternative to Backblaze whose finances are questionable.
I have an external storage unit a couple of kilometers away and two 8TB hard drives with LUKS+btrfs. One of them is always in the box, and after taking backups, when I feel like it, I detach the drive and bike to the box to swap them. I’m currently researching btrbk for updating the backup drive on my PC automatically; it’s pretty manual atm. For most scenarios the automatic btrfs snapshots on my main disks are going to be enough anyway.
Look into Storj and Tardigrade. It’s a crypto thing, but don’t get scared: you back up to S3-compatible endpoints, it’s super cheap, and you pay with a regular USD credit card.
The easiest offsite backup would be any cloud platform. The downside is that you aren’t gonna own your own data like you would if you deployed your own system.
Next option is an external SSD that you leave at your work desk and take home once a week or so to update.
The most robust solution would be to find a friend or relative willing to let you set up a server in their house. Might need to cover part of their electric bill if your machine is hungry.
My friend has 1G/1G internet. I have an rsync cron job backing up there twice a week.
It has an 8TB NVMe drive that I use for bulk data backup and a 2TB OS drive for VM stuff.
LTO-8 in a box elsewhere
The price per terabyte became viable when a drive was on sale for half off at a local retailer.
Works well and it was a fun learning experience.
I have two large (8-bay) Synology NASes. They back up certain data between each other, replicate internally, and push to Backblaze. $6/mo.
If you are gonna go for TrueNAS, try Storj with TrueNAS Cloud task. TrueNAS made a partnership with Storj and the price is very good. www.truenas.com/truecloud-backup/
TL;DR: The data is encrypted with restic and sent to Storj S3 storage, which further fragments it (and encrypts it too, so double encryption) into multiple pieces (with redundancy) and stores them on other people’s TrueNASes (you can also provide your unused space, btw, and earn a little money back).
I am in process of setting this up (already run a working test backup) and I didn’t find anything that’s better than this integrated solution. Very cool!
If you use ZFS, this becomes easy, because you can do incremental backups at the block level.
I have my home lab server do snapshots and sends to a server at my father’s house. Then I also have an external drive that I snapshot to as well.
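For anyone who hasn’t seen it, the core is a pair of snapshots and an incremental send piped over ssh. A minimal sketch with made-up dataset and host names (it assumes yesterday’s snapshot already exists on both ends):

```python
#!/usr/bin/env python3
"""Sketch of an incremental ZFS send to a remote box over ssh.

Dataset names, snapshot labels, and the remote host are placeholders.
"""
import subprocess
from datetime import date, timedelta

DATASET = "tank/data"
today = f"{DATASET}@{date.today().isoformat()}"
yesterday = f"{DATASET}@{(date.today() - timedelta(days=1)).isoformat()}"

# Take today's snapshot.
subprocess.run(["zfs", "snapshot", today], check=True)

# Send only the blocks that changed between the two snapshots.
send = subprocess.Popen(["zfs", "send", "-i", yesterday, today],
                        stdout=subprocess.PIPE)
subprocess.run(["ssh", "dad-server", "zfs", "recv", "-F", "backup/data"],
               stdin=send.stdout, check=True)
send.wait()
```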