from mesamunefire@piefed.social to selfhosted@lemmy.world on 03 Oct 17:19
https://piefed.social/post/1332496
You might not even like rsync. Yeah it’s old. Yeah it’s slow. But if you’re working with Linux you’re going to need to know it.
In this video I walk through my favorite everyday flags for rsync.
Support the channel:
patreon.com/VeronicaExplains
ko-fi.com/VeronicaExplains
thestopbits.bandcamp.com
Here’s a companion blog post, where I cover a bit more detail: vkc.sh/everyday-rsync
Also, @BreadOnPenguins made an awesome rsync video and you should check it out: www.youtube.com/watch?v=eifQI5uD6VQ
Lastly, I left out all of the ssh setup stuff because I made a video about that and the blog post goes into a smidge more detail. If you want to see a video covering the basics of using SSH, I made one a few years ago and it’s still pretty good: www.youtube.com/watch?v=3FKsdbjzBcc
Chapters:
1:18 Invoking rsync
4:05 The --delete flag for rsync
5:30 Compression flag: -z
6:02 Using tmux and rsync together
6:30 but Veronica… why not use (insert shiny object here)
I've personally used rsync for backups for about… 15 years or so? It's worked out great. An awesome video going over all the basics and what you can do with it.
And I generally enjoy Veronica's presentation. Knowledgeable and simple.
Her https://tinkerbetter.tube/w/ffhBwuXDg7ZuPPFcqR93Bd made me learn a new way of looking at data. There were some tricks I haven't done before. She has such good videos.
Yep, I found her through YouTube. Her and Action Retro's content is always great, with some Adrian Black on the side.
Veronica is fantastic. Love her video editing; it reminds me more of the early days of YouTube.
I use rsync for many of the reasons covered in the video. It’s widely available and has a long history. To me that feels important because it’s had time to become stable and reliable. Using Linux is a hobby for me so my needs are quite low. It’s nice to have a tool that just works.
I use it for all my backups and for moving my backups to off-network locations, as well as file/folder transfers on my own network.
I even made my own tool (codeberg.org/taters/rTransfer) to simplify all my rsync commands into readable files because rsync commands can get quite long and overwhelming. It’s especially useful chaining multiple rsync commands together to run under a single command.
I’ve tried other backup and syncing programs and I’ve had bad experiences with all of them. Other backup programs have failed to restore my system. Syncing programs constantly stop working and I got tired of always troubleshooting. Rsync, when set up properly, has given me a lot fewer headaches.
It works fine if all you need is transfer; my issue with it is that it’s just not efficient. If you want a “time travel” feature, your only option is to duplicate data. Differential backups, compression, and encryption for off-site ones is where other tools shine.
Agree. It’s neat for file transfers and simple one-shot backups, but if you’re looking for a proper backup solution then other tools/services have advanced virtually every aspect of backups so much it pretty much always makes sense to use one of those instead.
I have it add a backup suffix based on the date. It moves changed and deleted files to another directory adding the date to the filename.
It can also do hard-link copies so that you can have multiple full directory trees without all that duplication.
No file deltas or compression, but it does mean that you can access the backups directly.
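For anyone curious what that first approach looks like with stock rsync flags, here is a hedged sketch; the paths and date format are made up for illustration, not the commenter's actual setup:

# Changed and deleted files get moved aside instead of being lost:
# --backup-dir collects them in a dated directory, --suffix tags each moved filename.
rsync -a --delete \
  --backup --backup-dir=/backups/changed-$(date +%F) \
  --suffix=_$(date +%F) \
  /home/user/ /backups/current/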
Thanks! I was not aware of these options, along with what the other poster mentioned about --link-dest. These do turn rsync into a backup program, which is something the root article should explain! (Both are limited in some aspects compared to other backup software, but they might still be a simpler but effective solution. And sometimes simple is best!)
Not true. Look at the --link-dest flag. Encryption, sure, rsync can’t do that, but incremental backups work fine and compression is better handled at the filesystem level anyway IMO.
Isn’t that creating hardlinks between source and dest? Hard links only work on the same drive. And I’m not sure how that gives you “time travel”, as in, browsing snapshots or file states at the different times you ran rsync.
Edit: ah the hard link is between dest and the link-dest argument, makes more sense.
I wouldn’t bundle fs and backup compression in the same bucket, because they have vastly different reqs. Backup compression doesn’t need to be optimized for fast decompression.
Snapper and BTRFS. It only tracks changes in data, so time travel is just pointing to what blocks changed and when, not building a duplicate of the entire file or filesystem. A snapshot is instant, and new block changes belong to the current default.
I’ve been using borg because of the backend encryption and because the deduplication and snapshot features are really nice. It could be interesting to have cross-archive deduplication but maybe I can get something like that by reorganizing my backups. I do use rsync for mirroring and organizing downloads, but not really for backups. It’s a synchronization program as the name implies, not really intended for backups.
I think the Arch wiki recommends rsync for backups.
The thing I hate most about rsync is that I always fumble to get the right syntax and flags.
This is a problem because once it’s working I never have to touch it ever again, because it just works and keeps working. There’s not enough time to memorize the usage.
I feel this too. I keep a couple of “spells” that work wonders in a literal small notebook, along with other one-liners collected over the years. It's my spell book lol.
One trick that one of my students taught me a decade or so ago is to actually make an alias to list the useful flags.
Yes, a lot of us think we are smart and set up aliases/functions and have a huge list of them that we never remember or, even worse, ONLY remember. What I noticed her doing was having something like goodman-rsync that would just echo out a list of the most useful flags and what they actually do.
So nine times out of 10 I just want rsync -azvh --progress ${SRC} ${DEST}, but when I am doing something funky and am thinking “I vaguely recall how to do this”? dumbman rsync, and I get a quick cheat sheet of what flags I have found REALLY useful in the past, or even just an explanation of what azvh actually does, without grepping past all the crap I don’t care about in the man page. And I just keep that in the repo of dotfiles I copy to machines I work on regularly.
Most Unix commands will show a short list of the most-helpful flags if you use --help or -h.
tldr and atuin have been my main way of remembering complex but frequent flag combinations.
Yeah. There are a few useful websites I end up at that serve similar purposes.
My usual workflow is that I need to be able to work in an airgapped environment where it is a lot easier to get “my dotfiles” approved than to ask for utility packages like that. Especially since there will inevitably be some jackass who says “You don’t know how to work without google? What are we paying you for?” because they mostly do the same task every day of their life.
And I do find that writing the cheat sheet myself goes a long way towards me actually learning them so I don’t always need it. But I know that is very much how my brain works (I write probably hundreds of pages of notes a year… I look at maybe two pages a year).
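For what it's worth, a minimal sketch of that cheat-sheet idea as a shell function; the name dumbman comes from the comment above, and everything else here is invented:

# Put this in your dotfiles; "dumbman rsync" prints your own curated notes
# instead of the full man page.
dumbman() {
  case "$1" in
    rsync)
      cat <<'EOF'
rsync -azvh --progress SRC DEST
  -a  archive mode (recursive, keeps perms/times/links)
  -z  compress in transit
  -v  verbose
  -h  human-readable sizes
  --delete         mirror deletions (careful!)
  --link-dest=DIR  hard-link unchanged files against a previous snapshot
EOF
      ;;
    *) echo "no notes for '$1' yet" ;;
  esac
}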
rsync -avzhP gang unite! I knew someone would have posted my standard flags. I used them enough that my brain moved them from RAM to ROM at this point…
This is why I still don’t know sed and awk syntax lol. I eventually get the data in the shape I need and then move on, and never imprint how they actually work. Still feel like a script kiddie every time I use them (so once every few years).
sed can do a bunch of things, but I overwhelmingly use it for a single operation in a pipeline: the s// operation. I think that that’s worth knowing. sed 's/foo/bar/' will replace the first text in each line matching the regex “foo” with “bar”.
That’ll already handle a lot of cases, but a few other helpful sub-uses:
sed 's/foo/bar/g' will replace all text matching regex “foo” with “bar”, even if there is more than one match per line.
Something like sed 's/\([0-9a-fA-F][0-9a-fA-F]*\)/0x\1/g' will take the text inside the backslash-escaped parens and put that matched text back in the replacement text, where one has ‘\1’. In that example, it’s finding all hexadecimal strings and prefixing them with ‘0x’.
If you want to match a literal “/”, the easiest way to do it is to just use a different separator; if you use something other than a “/” as separator after the “s”, sed will expect that later in the expression too, like this: sed 's|/|SLASH|g' will replace all instances of a “/” in the text with “SLASH”.
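A quick way to see the separator trick in action on a throwaway string (the input is just an example):

$ echo "path/to/file" | sed 's|/|SLASH|g'
pathSLASHtoSLASHfile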
I would generally argue that rsync is not a backup solution. But it is one of the best transfer/archiving solutions.
Yes, it is INCREDIBLY powerful and is often 90% of what people actually want/need. But to be an actual backup solution you still need infrastructure around that. Bare minimum is a crontab. But if you are actually backing something up (not just copying it to a local directory) then you need some logging/retry logic on top of that.
At which point you are building your own borg, as it were. Which, to be clear, is a great thing to do. But… backups are incredibly important and it is very much important to understand what a backup actually needs to be.
Yeah, if you want to use rsync specifically for backups, you’re probably better off using something like rdiff-backup, which makes use of rsync to generate backups and store them efficiently, and drive it from something like backupninja, which will run the task periodically and notify you if it fails.
rsync: one-way synchronization
unison: bidirectional synchronization
git: synchronization of text files with good interactive merging
rdiff-backup: rsync-based backups. I used to use this and moved to restic, as the backupninja target for rdiff-backup has kind of fallen into disrepair.
That doesn’t mean “don’t use rsync”. I mean, rsync’s a fine tool. It’s just… not really a backup program on its own.
+1 for rdiff-backup. Been using it for 20 years or so, and I love it.
Having a synced copy elsewhere is not an adequate backup and snapshots are pretty important. I recently had RAM go bad and my most recent backups had corrupt data, but having previous snapshots saved the day.
Don’t understand the downvotes. This is the type of lesson people have learned from losing data and no sense in learning it the hard way yourself.
How would you pin down something like this? If it happened to me, I expect I just wouldn’t understand what’s going on.
I originally thought it was one of my drives in my RAID1 array that was failing, but I noticed copying data was yielding btrfs corruption errors on both drives that could not be fixed with a scrub, and I was also getting btrfs corruption errors on the root volume as well. I figured it would be quite an odd coincidence if my main SSD and 2 hard disks all went bad, and then I happened upon an article talking about how corrupt data can also occur if the RAM is bad. I also ran SMART tests and everything came back with a clean bill of health. So, I installed and booted into Memtest86+ and it immediately started showing errors on the single 16GiB stick I was using. I happened to have a spare stick that was a different brand, and that one passed the memory test with flying colors. After that, all the corruption errors went away and everything has been working perfectly ever since.
I will also say that legacy file systems like ext4 with no checksums wouldn’t even complain about corrupt data. I originally had ext4 on my main drive and at one point thought my OS install went bad, so I reinstalled with btrfs on top of LUKS and saw I was getting corruption errors on the main drive at that point, so it occurred to me that 3 different drives could not have possibly had a hardware failure and something else must be going on. I was also previously using ext4 and mdadm for my RAID1 and migrated it to btrfs a while back. I was previously noticing as far back as a year ago that certain installers, etc. that previously worked no longer worked, which happened infrequently and didn’t really register with me as a potential hardware problem at the time, but I think the RAM was actually progressively going bad for quite a while. btrfs with regular scrubs would’ve made it abundantly clear much sooner that I had files getting corrupted and that something was wrong.
So, I’m quite convinced at this point that RAID is not a backup, even with the abilities of btrfs to self-heal, and simply copying data elsewhere is not a backup, because something like bad RAM can in both cases destroy data during the copying process, whereas older snapshots in the cloud will survive such a hardware failure. Older data backed up before the RAM went faulty may be fine as well, but you’re taking a chance that a recent update may overwrite good data with bad data.
I was previously using Rclone for most backups while testing Restic with daily, weekly, and monthly snapshots for a small subset of important data over the last few months. After finding some data that was only recoverable in a previous Restic snapshot, I’ve since switched to using Restic exclusively for anything important enough for cloud backups. I was mainly concerned about the space requirements of keeping historical snapshots, and I’m still working on tweaking retention policies and taking separate snapshots of different directories with different retention policies according to the risk tolerance for each directory I’m backing up.
For some things, I think even btrfs local snapshots would suffice, with the understanding that it’s to reduce recovery time but isn’t really a backup. However, any irreplaceable data really needs monthly Restic snapshots in the cloud. I suppose if you don’t have something like btrfs scrubs to alert you that you have a problem, even snapshots from months ago may have an unnoticed problem.
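For anyone tuning retention the same way, a hedged example of the Restic side; the repository path, source directory, and keep counts are placeholders, not the poster's actual policy:

# Back up one directory, then prune snapshots according to a retention policy.
restic -r /mnt/backup/restic-repo backup /home/user/documents
restic -r /mnt/backup/restic-repo forget \
  --keep-daily 7 --keep-weekly 4 --keep-monthly 12 --prune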
Beware rdiff-backup. It certainly does turn rsync (not a backup program) into a backup program.
However, I used rdiff-backup in the past and it can be a bit problematic. If I remember correctly, every “snapshot” you keep in rdiff-backup uses as many inodes as the thing you are backing up. (Because every “file” in the snapshot is either a file or a hard link to an identical version of that file in another snapshot.) So this can be a problem if you store many snapshots of many files.
But it does make rsync a backup solution; a snapshot or a redundant copy is very useful, but it’s not a backup.
(OTOH, rsync is still wonderful for large transfers.)
I think that you may be thinking of rsnapshot rather than rdiff-backup, which has that behavior; both use rsync.
But I’m not sure why you’d be concerned about this behavior.
Are you worried about inode exhaustion on the destination filesystem?
Huh, I think you’re right.
Before discovering ZFS, my previous backup solution was rdiff-backup. I have memories of it being problematic for me, but I may be wrong in my remembering of why it caused problems.
I use rsync and a pruning script in crontab on my NFS mounts. I’ve tested it numerous times breaking containers and restoring them from backup. It works great for me at home because I don’t need anything older than 4 monthly, 4 weekly, and 7 daily backups.
However, in my job I prefer something like bacula. The extra features and granularity of restore options makes a world of difference when someone calls because they deleted prod files.
I don’t know if there’s a term for them, but Bacula (and I think AMANDA might fall into this camp, but I haven’t looked at it in ages) are oriented more towards “institutional” backup. Like, there’s a dedicated backup server, maybe dedicated offline media like tapes, the backup server needs to drive the backup, etc.
There are some things that rsnapshot, rdiff-backup, duplicity, and so forth won’t do.
At least some of them (rdiff-backup, for one) won’t dedup files with different names. If a file is unchanged, it won’t use extra storage, but it won’t identify different identical files at different locations. This usually isn’t all that important for a single host, other than maybe if you rename files, but if you’re backing up many different hosts, as in an institutional setting, they likely have files in common. They aren’t intended to back up multiple hosts to a single, shared repository.
Pull-only. I think that it might be possible to run some of the above three in “pull” mode, where the backup server connects and gets the backup, but where they don’t have the ability to write to the backup server. This may be desirable if you’re concerned about a host being compromised, but not the backup server, since it means that an attacker can’t go dick with your backups. Think of those cybercriminals who encrypt data at a company and wipe other copies and then demand a ransom for an unlock key. But the “institutional” backup systems are going to be aimed at having the backup server drive all this, and have the backup server have access to log into the individual hosts and pull the backups over.
Dedup for non-identical files. Note that restic can do this. While files might not be identical, they might share some common elements, and one might want to try to take advantage of that in backup storage.
rdiff-backup and rsnapshot don’t do encryption (though duplicity does). If one intends to use storage not under one’s physical control (e.g. “cloud backup”), this might be a concern.
No “full” backups. Some backup programs follow a scheme where one periodically does a backup that stores a full copy of the data, and then stores “incremental” backups from the last full backup. All of rsnapshot, rdiff-backup, and duplicity are always-incremental, and are aimed at storing their backups on a single destination filesystem. A split between “full” and “incremental” is probably something you want if you’re using, say, tape storage and having backups that span multiple tapes, since it controls how many pieces of media you have to dig up to perform a restore.
I don’t know how Bacula or AMANDA handle it, if at all, but if you have a DBMS like PostgreSQL or MySQL or the like, it may be constantly receiving writes. This means that you can’t get an atomic snapshot of the database, which is critical if you want to be reliably backing up the storage. I don’t know what the convention is here, but I’d guess either using filesystem-level atomic snapshot support (e.g. btrfs) or requiring the backup system to be aware of the DBMS and instructing it to suspend modification while it does the backup. rsnapshot, rdiff-backup, and duplicity aren’t going to do anything like that.
I’d agree that using the more-heavyweight, “institutional” backup programs can make sense for some use cases, like if you’re backing up many workstations or something.
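As one hedged illustration of the database point: PostgreSQL can produce a transactionally consistent dump even while writes continue, which sidesteps the atomic-snapshot problem for that one service (the database name, user, and output path below are invented):

# pg_dump takes a consistent snapshot of a single database while it keeps
# receiving writes; -Fc writes a compressed archive restorable with pg_restore.
pg_dump -Fc -U backup_user mydb > /backups/db/mydb-$(date +%F).dump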
Borg gang represent!
Rsync is great. I’ve been using it to back up my book library from my local Calibre collection to my NAS for years, it’s absurdly simple and convenient. Plus, -ruv lets me ignore unchanged files and back up recursively, and if I clean up locally and need that replicated, I just need to add --delete.
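That workflow might look roughly like this; the library path and NAS target are guesses for illustration:

# -r recurse, -u skip files that are newer on the destination, -v verbose;
# add --delete only when local cleanups should be mirrored to the NAS.
rsync -ruv --delete "$HOME/Calibre Library/" nas:/backups/calibre/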
Here’s how I approach old and slow:
Tangentially, I don’t see people talk about rclone a lot, which is like rsync for cloud storage.
It’s awesome for moving things from one provider to another, for example.
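A small hedged example of that provider-to-provider use; the remote names are whatever you set up with rclone config, shown here as placeholders:

# Copy a folder from one configured remote straight to another.
rclone copy olddrive:photos newbox:photos --progress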
It's fine. But yes, in the Linux space we tend to want to host ourselves, not have to trust some administrator of some cloud we don't know/trust.
I mention the Linux space only because it's what I'm familiar with and didn't want to make assumptions about groups I'm not familiar with. Unlike you, who's looking for a way to take umbrage and talk past people. I went to college for IT and have done it for 30 years.
In network and IT planning, the cloud is the wider network outside your own, the part you don't have mapped, often depicted by a "cloud". If I have a personal data pool on one of my own networks and need it from another, it may transit via the "cloud", but it isn't IN the cloud; it's on a personal server. If the server is in your house, and you can point exactly to where your data is, then the rule of thumb is that it is in your house, not the cloud. If it's hosted on a system you couldn't directly point to, on a network you have no knowledge of, especially a shared system, then things literally and figuratively are getting cloudier.
That said, marketing, as it often does, appropriates and misuses words based around buzz. And I am not about to admonish hobbyists who use it in the marketing sense. I understand, I get it.
If you host in OSX on Apple Silicon, that's great. If you host on a 68k Mac or Amiga you're a fucking mad lad! If you're hosting under Windows, any TCP port in the storm mate. If you are hosting from a Linux distribution that is not God's chosen, cool how is it working out? If you are hosting from BeOS. or Haiku, you are a glorious oddball and absolutely my sort of person. And if you are hosting from an appliance that you really don't know what it's running, welcome to the hobby. It's a good starting point. And a lill data in the cloud isn't a crime. We all have some. But if you can't easily point to it. Can you really know you have it?
I’m not reading all that. Sorry for your issue, or I’m happy for you. Whichever you prefer.
you should. they were polite unlike you. explained the origin of the term and how it was used, and that they were aware of how hobbyists have changed the definition etc. it was a decent post. frankly I'm kind of curious why you're so hateful. but not enough to really care.
Partakes in text-based medium. Refuses to read well written and polite comment that is four whole paragraphs. Proceeds to think they are the intelligent one in the conversation. Are you huffing glue right now?
rclone does support other protocols besides S3. You can also selfhost your own S3 storage.
I tried rclone once because I wanted to sync a single folder from documents and freaked out when it looked like it was going to purge all documents except for my targeted folder.
Then I just did it via the portal…
rsync can sometimes look similarly scary! I very clearly remember triple-checking what it’s doing.
rclone works amazingly well if you have hundreds of folders or thousands of files and you can’t be bothered to babysit a portal.
@calliope It’s also great for local or remote backups over ssh, smb, etc.
It has been remarkably useful! I keep trying to tell people about it but apparently I am just their main use case or something.
I would have loved it when I was using Samba to share files on my local network decades ago. It’s like a Swiss Army knife!
I need a breakdown like this for Rclone. I’ve got 1TB of OneDrive free and nothing to do with it.
I’d love to set up a home server and back up some stuff to it.
rsync is pretty fast, frankly. Once it’s run once, if you have -a or -t passed, it’ll synchronize mtimes. If the modification time and filesize match, by default rsync won’t look at a file further, so subsequent runs will be pretty fast. You can’t really beat that for speed unless you have some sort of monitoring system in place (like, filesystem-level support for identifying modifications).
Yeah, more often than not I notice the bottleneck being the storage drive itself, not rsync.
Can also use fpsync to speed things up. Handles a lot for you
Maybe I am missing something but how does it handle snapshots?
I use rsync all the time but only for moving data around effectively. But not for backups, as it doesn’t (AFAIK) handle snapshots.
yeah, it doesn’t, it’s just for file transfer. It’s only useful if transferring files somewhere else counts as a backup for you.
To me, the file transfer is just a small component of a backup tool.
You get incremental backups (snapshots) by using --link-dest=DIR.
To use this you pass in the previous snapshot location as DIR and use a new destination directory for the current snapshot. This creates hard links in the new snapshot to the files which were unchanged from the previous snapshot, so only the new files are transferred, and there is no duplication of data on disk (for whole-file matches).
This does of course require that all of the snapshots exist in the same filesystem, since you cannot hard-link across filesystems.
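A minimal sketch of that pattern; the paths and naming scheme are illustrative, not from the comment:

#!/bin/sh
# Snapshot-style backup with --link-dest: unchanged files become hard links to
# the previous snapshot, so every snapshot looks like a full copy but only
# changed files consume new space.
SRC=/home/user/
DEST=/mnt/backups
TODAY=$DEST/$(date +%Y-%m-%d)

rsync -a --delete --link-dest="$DEST/latest" "$SRC" "$TODAY"

# Repoint "latest" at the snapshot just made so the next run links against it.
ln -sfn "$TODAY" "$DEST/latest"

On the first run there is no "latest" yet; rsync just warns that the --link-dest directory is missing and does a full copy.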
Ah, I didn’t know of this. This should be in the linked article! Because it’s one of the ways to turn rsync into a real backup! (I didn’t know this flag; I thought this was the main point of rdiff-backup.)
Rsync uses OpenSSH as a transport, but it definitely isn’t SFTP. I’ve tried using it against an SFTPGo instance, and lost some files because it runs its own binary, bypassing SFTPGo’s permission checks. Instead, I’ve opted for rclone with the SFTP backend, which does everything rsync does and is very well compliant.
In fact, while SFTPGo’s main developer published a fix for this bug, he also expressed intention to drop support for the command entirely. I think I’m just commenting to give a heads up for any passerby.
If you’re trying to back up Windows OS drives for some reason, robocopy works quite similarly to rsync.
Ah… robocopy… that’s a great tool
rsnapshot is a script for the purpose of repeatedly creating deduplicated copies (hardlinks) of one or more directories. You can choose how many hourly, daily, weekly, … copies you’d like to keep, and it removes outdated copies automatically. It wraps rsync and ssh (public key auth), which need to be configured beforehand.
Hardlinks need to be on the same filesystem, don’t they? I don’t see how that would work with a remote backup…?
The hard links aren’t between the source and backup, they’re between Friday’s backup and Saturday’s backup
Ahh, ok. Thanks for clarifying.
Use borg/borgmatic for your backups. Use rsync to send your differentials to your secondary & offsite backup storage.
I use cp and an external hdd for backups
that’s great until it’s not.
What’s slow about rsync? If you have a reasonably fast CPU and are merely syncing differences, it’s pretty quick.
It’s single-threaded, one file at a time.
For a home setup that seems fine. But I can understand why you wouldn’t want this for a whole enterprise.
That would only matter if it’s lots of small files, right? And after the initial sync, you’d have very few files, no?
Rsync is designed for incremental syncs, which is exactly what you want in a backup solution. If your multithreaded alternative doesn’t do a diff, rsync will win on larger data sets that don’t have rapid changes.
Y’all don’t seem to know about rsbackup, which is a terrible shame for you.
(I mean the one on greenend.org.uk!)
I was planning to use rsync to ship several TB of stuff from my old NAS to my new one soon. Since we’re already talking about rsync, I guess I may as well ask if this is right way to go?
I couldn’t tell you if it’s the right way but I used it on my Rpi4 to sync 4tb of stuff from my Plex drive to a backup and set a script up to have it check/mirror daily. Took a day and a half to copy and now it syncs in minutes tops when there’s new data
yes, it’s the right way to go.
rsync over ssh is the best, and works as long as rsync is installed on both systems.
On low-end CPUs you can max out the CPU before maxing out the network. If you want to get fancy, you can use rsync over an unencrypted remote shell like rsh, but I would only do this if the computers were directly connected to each other by one Ethernet cable.
It depends. rsync is fine, but to clarify a little further…
If you think you’ll stop the transfer and want it to resume (and some data might have changed), then yep, rsync is best.
But, if you’re just doing a 1-off bulk transfer in a single run, then you could use other tools like xcopy/scp or - if you’ve mounted the remote NAS at a local mount point - just plain old cp.
The reason for that is that rsync has to work out what’s at the other end for each file, so it’s doing some back-and-forwards communications each time, which as someone else pointed out can load the CPU and reduce throughput.
(From memory, I think Raspberry Pis don’t handle large transfers over scp well… I seem to recall a buffer gets saturated and the throughput drops off after a minute or so.)
Also, on a local network, there’s probably no point in using encryption or compression options - esp. for photos / videos / music… you’re just loading the CPU again to work out that it can’t compress any further.
One thing I forgot to mention: rsync has an option to preserve file timestamps, so if that’s important for your files, then that might also be useful… without checking, the other commands probably have that feature, but I don’t recall at the moment.
rsync -Prvt <source> <destination> might be something to try; leave it for a minute, stop and retry… that’ll prove it’s all working.
Oh… and make sure you get the source and destination paths correct with a trailing / (or not), otherwise you’ll get all your files copied to an extra subfolder (or not).
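To illustrate the trailing-slash gotcha mentioned above (the paths are made-up examples):

# With a trailing slash, the *contents* of photos land directly in the target:
rsync -a /data/photos/ nas:/backups/photos/   # -> /backups/photos/IMG_0001.jpg
# Without it, rsync copies the directory itself, adding an extra level:
rsync -a /data/photos nas:/backups/photos/    # -> /backups/photos/photos/IMG_0001.jpg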
It’s slow?!?
That part threw me off. Last time I used it, I did incremental backups of a 500 gig disk once a week or so, and it took 20 seconds max.
Yes but imagine… 18 seconds.
Compared to something multi threaded, yes. But there are obviously a number of bottlenecks that might diminish the gains of a multi threaded program.
With xargs everything is multithreaded.
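A rough sketch of that idea; note that xargs -P runs several rsync processes rather than threads, and the paths are placeholders:

# Up to 4 parallel rsync processes, one per top-level directory.
find /srv/data -mindepth 1 -maxdepth 1 -type d -print0 \
  | xargs -0 -P 4 -I{} rsync -a {} backuphost:/backups/data/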
If you want rsync but shiny, check out rshiny
Grsync is great. Having a GUI can be helpful
rsync for backups? I guess it depends on what kind of backup
for redundant backups of my data and configs that I still have a live copy of, I use restic, it compresses extremely well
I have used rsync to permanently move something to another drive though
I think there are better alternatives for backup, like Kopia and Restic. Even Seafile. You want protection against ransomware, storage compression, encryption, versioning, sync upon write, and block deduplication.
comparing seafile to rsync reminds me of the old “Space Pen” folk tale.
This exactly. I’d use rsync to sync a directory to a location to then be backed up by kopia, but I wouldn’t use rsync exclusively for backups.
I used to use rsnapshot, which is a thin wrapper around rsync to make it incremental, but moved to restic and never looked back. Much easier and encrypted by default.
I use syncthing.
Is rsync better?
Syncthing works pretty well for me and my stable of Ubuntu, pi, Mac, and Windows
it’s for a different purpose. I wouldn’t use syncthing the way I use rsync
Syncthing is technically for synchronizing data across different devices in real time (which I do with my phone), but I also use it to transfer data weekly via wi-fi to my old 2013 laptop with a 500GB HDD and Linux Mint (I only boot it to transfer data, and even then I pause the transfers to this device when it’s done transferring stuff) so I can have larger data backups that wouldn’t fit on my phone, since LocalSend is unreliable for large amounts of data while Syncthing can resume the transfer if anything goes wrong. On top of that, Syncthing also works in Windows and Android out of the box.
I’m not super familiar with Syncthing, but judging by the name I’d say Syncthing is not at all meant for backups.
Different tools for different use cases IMO.
But neither does backups.
I dunno.
I am using it to keep a real time copy of documents on an offsite server.
Feels like a backup to me.
What happens if you accidentally overwrite something important in a document and save it though? If there’s no incremental versioning you can’t recover from that.
That is a good point.
In my case, I was trying to address the shortcomings of Apple Time Machine. I use a Mac mini as the server I work from on all my machines. Time Machine does the version management for me.
I just use Syncthing through a VPN to keep an offsite backup of content files (not a complete OS restore) and to keep a copy of critical files on my laptop in case I am away from my home network and need to see a file.
I still need to implement a regular air gapped backup instead of the ad-hoc that I have now.
Rsnapshot. It uses rsync, but provides snapshot management and multiple backup versioning.
Yes, but a few hours writing my own scripts will save me from several minutes of reading its documentation…
It took me like 10 min to set up rsnapshot (installing it and writing systemd unit/timer files) on my servers.
I’m sure I could script something similar in under 10 (hours).
Man I love being autistic
Yah, I really like this approach. Same reason I set up Timeshift and Mint Backup on all the user machines in my house. For others rsync + cron is aces.
Veeam for image/block based backups of Windows, Linux and VMs.
syncthing for syncing smaller files across devices.
Thank you very much.
I’ll never not upvote Veronica Explains. Excellent creator and excellent info on everything I’ve seen.
I still prefer tar for quick and dirty same box copies.
Why not just cp?
Why videos? I feel like an old man yelling at clouds every time something that sounds interesting is presented in a fucking video. Videos are so damn awful. They take time, I need audio, and I can’t copy & paste. Why have they become the default for things that should’ve been a blog post?
Especially for a command line tool: man rsync
They linked a blog post with the video: vkc.sh/everyday-rsync/
Hear hear. Knowledge should be communicated in an easily shareable way that can also be archived as easily, in contrast to a video requiring hundreds of MBs.
Ad money.
Blogs can have ads.
Thank you for putting into words what I’ve subconsciously been thinking for years. Every search result prioritizes videos at the top and I’m still annoyed every time. Or even worse, I have to hunt through a 10 minute video for the 30 seconds of info I needed. Stoohhhhpppp internet of new! Make it good again!
I never thought of it as slow. More like very reliable. I don’t need my data to move fast, I need it to be copied with 100% reliability.
And not waste time copying duplicate data. And for the typical home user, it’s probably no slower than other options.
It’s not bad if you don’t need historical backups. I kinda think I do, so I use github.com/rustic-rs/rustic because rust
Restic (github.com/restic/restic) is probably a better choice if you’re not a rust-freak like me.
I use rsync + ZFS for backups which includes historical backups
Yup, just configure a snapshot policy and you can recover deleted and modified files going back as long as you choose. And it is probably more space efficient than borg/restic too.
Rustic scares me. I will 100% forget what tool I used to backup after 5 years and be unable to recover my files.
I tried to use it via tailscale but it disconnects very easily - is that to be expected?
I would not expect rsync to have frequent disconnects, no.
If I connect to the same server via my own VPN I don’t have the disconnections, so I’m thinking it’s tailscale cutting connections after too much traffic. But connecting via tailscale is so much more convenient 😢
Surely restic or borg would be better for backups?
Rsync can send files and not delete stuff, but there’s no versioning or retention settings.
If you add --delete-before, it absolutely can delete stuff.
Yeah but then it’s not really a good backup!
For versioning/retention, just use snapshots in whatever filesystem you’re using (you are using a proper filesystem like ZFS or BTRFS, right?).
How does that get sent over rsync though? Wouldn’t you need snapshots on the remote destination server?
Why not just use a backup utility instead?
Yes, rsync copies files to the remote server, the remote server takes regular snapshots.
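One hedged way that can look, assuming the receiving side runs ZFS (the hostnames and dataset names are invented):

# On the client: push the files across.
rsync -a --delete /home/user/ backupserver:/tank/backups/user/
# On the backup server (e.g. from cron), snapshot after each sync so older
# versions stay recoverable even if a bad sync overwrites good files:
zfs snapshot tank/backups@$(date +%Y-%m-%d)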
What is that utility providing that snapshots + rsync doesn’t? If rsync + snapshots is sufficient, why overcomplicate it with a backup utility?
The main things that come to mind are that you have to test/monitor 2 separate actions instead of 1, and restores of single files could be more difficult since you need to log in to the backup server, restore the file from a snapshot, and then also copy that file back to your PC.
My point is, how often do you actually need to restore from backup? If it’s frequent, consider a dedicated tool for whatever that thing is. If it’s infrequent, it’ll probably easier to just learn how to do it every five years or whatever.
If you like borg/restic/etc, by all means, use it.
My point is that most people probably don’t need it. Snapshots are something you set up once, and you should probably use them even if you’re using something like borg for any files that aren’t covered (e.g. config files on the server). Rsync is also something you set up once, and checking it is the same as any other service.
Nope. I never have needed to know it. I only ever used it because I was either curious to know how to use it or because it was more convenient than other solutions. But scp is basically just as convenient.
It doesn’t do diffs, so it’s really bad if there’s a lot of duplicate data.
If you want to use it for backups, there are other solutions, so you still don’t need to use it or know it. You can use something else. That’s my only point. 🤷♂️
And “really bad” is all relative. If you are only backing up your home drive with documents or whatever, copying a few unnecessary gigabytes over a LAN connection isn’t too bad at all. But scp isn’t what you should be using for backups anyway. I only used rsync for file transfer…
I use rsync for all kinds of things:
I only really use scp if the system doesn’t already have rsync.
Alright. But you don’t need to know rsync. That’s my only point. 👍👍
Sure, but you should probably be aware of what it is and what it does. It’s incredibly common and will be referenced in a ton of documentation for Linux server stuff.
You won’t need to unless you run a server in that case. 👍 But the only condition here was “working with Linux”.
Like I said, I’ve been using Linux at home and for work for over a decade, maybe 15+ years, never once did I need to use rsync or know what it is.
That being said, it was convenient when I used it, but never did I need it.
This is the self-hosted community, so that’s the context I was assuming.