You might not even like rsync. Yeah it’s old. Yeah it’s slow. But if you’re working with Linux you’re going to need to know it.
In this video I walk through my favorite everyday flags for rsync.
Support the channel:
https://patreon.com/VeronicaExplains
https://ko-fi.com/VeronicaExplains
https://thestopbits.bandcamp.com/
Here’s a companion blog post, where I cover a bit more detail: https://vkc.sh/everyday-rsync
Also, @BreadOnPenguins made an awesome rsync video and you should check it out: https://www.youtube.com/watch?v=eifQI5uD6VQ
Lastly, I left out all of the ssh setup stuff because I made a video about that and the blog post goes into a smidge more detail. If you want to see a video covering the basics of using SSH, I made one a few years ago and it’s still pretty good: https://www.youtube.com/watch?v=3FKsdbjzBcc
Chapters:
1:18 Invoking rsync
4:05 The --delete flag for rsync
5:30 Compression flag: -z
6:02 Using tmux and rsync together
6:30 but Veronica… why not use (insert shiny object here)
Rsnapshot. It uses rsync, but provides snapshot management and multiple backup versioning.
I use syncthing.
Is rsync better?
Syncthing works pretty well for me and my stable of Ubuntu, Pi, Mac, and Windows machines.
I think there are better alternatives for backup, like Kopia and Restic. Even Seafile. If you want protection against ransomware, storage compression, encryption, versioning, sync on write, and block deduplication, those are the tools to reach for.
Comparing Seafile to rsync reminds me of the old “Space Pen” folk tale.
I used to use rsnapshot, which is a thin wrapper around rsync to make it incremental, but moved to restic and never looked back. Much easier and encrypted by default.
rsync for backups? I guess it depends on what kind of backup
for redundant backups of my data and configs that I still have a live copy of, I use restic, it compresses extremely well
I have used rsync to permanently move something to another drive though
It’s slow?!?
That part threw me off. Last time I used it, I did incremental backups of a 500 GB disk once a week or so, and it took 20 seconds max.
Grsync is great. Having a GUI can be helpful
Yeah it’s slow
What’s slow about rsync? If you have a reasonably fast CPU and are merely syncing differences, it’s pretty quick.
It’s single thread, one file at a time.
I would generally argue that rsync is not a backup solution. But it is one of the best transfer/archiving solutions.
Yes, it is INCREDIBLY powerful and is often 90% of what people actually want/need. But to be an actual backup solution you still need infrastructure around that. Bare minimum is a crontab. But if you are actually backing something up (not just copying it to a local directory) then you need some logging/retry logic on top of that.
At which point you are building your own borg, as it were. Which, to be clear, is a great thing to do. But… backups are incredibly important and it is very much important to understand what a backup actually needs to be.
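To make that “bare minimum is a crontab” point concrete, here’s a minimal sketch of a nightly cron entry with crude logging. The host, paths, and log file are placeholders, and a real setup would want proper retry/alerting on top, exactly as the comment says:

```
# crontab entry: nightly mirror at 02:30, append output to a log,
# and record a failure marker if rsync exits non-zero.
30 2 * * * rsync -a --delete /srv/data/ backup.example.com:/backups/data/ >> /var/log/rsync-backup.log 2>&1 || echo "backup failed $(date)" >> /var/log/rsync-backup.log
```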
I would generally argue that rsync is not a backup solution.
Yeah, if you want to use rsync specifically for backups, you’re probably better off using something like rdiff-backup, which makes use of rsync to generate backups and store them efficiently, and drive it from something like backupninja, which will run the task periodically and notify you if it fails.

rsync: one-way synchronization
unison: bidirectional synchronization
git: synchronization of text files with good interactive merging
rdiff-backup: rsync-based backups. I used to use this and moved to restic, as the backupninja target for rdiff-backup has kind of fallen into disrepair.

That doesn’t mean “don’t use rsync”. I mean, rsync’s a fine tool. It’s just… not really a backup program on its own.

Beware rdiff-backup. It certainly does turn rsync (not a backup program) into a backup program.
However, I used rdiff-backup in the past and it can be a bit problematic. If I remember correctly, every “snapshot” you keep in rdiff-backup uses as many inodes as the thing you are backing up. (Because every “file” in the snapshot is either a file or a hard link to an identical version of that file in another snapshot.) So this can be a problem if you store many snapshots of many files.
But it does make rsync a backup solution; a snapshot or a redundant copy is very useful, but it’s not a backup.
(OTOH, rsync is still wonderful for large transfers.)
Having a synced copy elsewhere is not an adequate backup and snapshots are pretty important. I recently had RAM go bad and my most recent backups had corrupt data, but having previous snapshots saved the day.
+1 for rdiff-backup. Been using it for 20 years or so, and I love it.
I use rsync and a pruning script in crontab on my NFS mounts. I’ve tested it numerous times breaking containers and restoring them from backup. It works great for me at home because I don’t need anything older than 4 monthly, 4 weekly, and 7 daily backups.
However, in my job I prefer something like bacula. The extra features and granularity of restore options makes a world of difference when someone calls because they deleted prod files.
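For illustration, a rough sketch of the kind of pruning script described above, assuming dated copy directories on the NFS mount and the same 7-daily/4-weekly/4-monthly retention. The paths and naming scheme are hypothetical:

```
#!/bin/sh
# Prune dated rsync copies under /mnt/nfs/backups, named like
# 2024-05-01-daily, 2024-05-01-weekly, 2024-05-01-monthly.
find /mnt/nfs/backups -maxdepth 1 -name '*-daily'   -mtime +7   -exec rm -rf {} +
find /mnt/nfs/backups -maxdepth 1 -name '*-weekly'  -mtime +28  -exec rm -rf {} +
find /mnt/nfs/backups -maxdepth 1 -name '*-monthly' -mtime +120 -exec rm -rf {} +
```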
I was planning to use rsync to ship several TB of stuff from my old NAS to my new one soon. Since we’re already talking about rsync, I guess I may as well ask if this is the right way to go?
It depends. rsync is fine, but to clarify a little further…

If you think you’ll stop the transfer and want it to resume (and some data might have changed), then yep, rsync is best.

But, if you’re just doing a 1-off bulk transfer in a single run, then you could use other tools like xcopy/scp or, if you’ve mounted the remote NAS at a local mount point, just plain old cp.

The reason for that is that rsync has to work out what’s at the other end for each file, so it’s doing some back-and-forth communication each time, which, as someone else pointed out, can load the CPU and reduce throughput. (From memory, I think Raspberry Pis don’t handle large transfers over scp well… I seem to recall a buffer gets saturated and the throughput drops off after a minute or so.)

Also, on a local network, there’s probably no point in using encryption or compression options, esp. for photos / videos / music… you’re just loading the CPU again to work out that it can’t compress any further.
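For the resumable case, something like this is a common pattern, sketched with placeholder paths and hostnames (and assuming rsync 3.1+ for --info=progress2):

```
# Resumable bulk copy to the new NAS; re-running picks up where it left off.
rsync -a --partial --info=progress2 /mnt/oldnas/ user@newnas:/volume1/data/
```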
I couldn’t tell you if it’s the right way, but I used it on my RPi4 to sync 4 TB of stuff from my Plex drive to a backup, and set a script up to have it check/mirror daily. Took a day and a half to copy, and now it syncs in minutes tops when there’s new data.
yes, it’s the right way to go.
rsync over ssh is the best, and works as long as rsync is installed on both systems.
On low-end CPUs you can max out the CPU before maxing out the network. If you want to get fancy, you can use rsync over an unencrypted remote shell like rsh, but I would only do this if the computers were directly connected to each other by one Ethernet cable.
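A sketch of what that looks like, assuming rsh is actually installed and enabled on both machines (hostname and paths are placeholders):

```
# rsync over plain rsh instead of ssh: no encryption overhead.
# Only sensible on a trusted, directly-cabled link.
rsync -a -e rsh /data/ otherbox:/data/
```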
Use borg/borgmatic for your backups. Use rsync to send your differentials to your secondary & offsite backup storage.
I’ve personally used rsync for backups for about… 15 years or so? It’s worked out great. An awesome video going over all the basics and what you can do with it.
It works fine if all you need is transfer; my issue with it is that it’s just not efficient. If you want a “time travel” feature, your only option is to duplicate data. Differential backups, compression, and encryption for off-site ones are where other tools shine.
I have it add a backup suffix based on the date. It moves changed and deleted files to another directory adding the date to the filename.
It can also do hard-link copies so that you can have multiple full directory trees while avoiding all that duplication.
No file deltas or compression, but it does mean that you can access the backups directly.
Thanks! I was not aware of these options, along with what the other poster mentioned about --link-dest. These do turn rsync into a backup program, which is something the root article should explain!

(Both are limited in some aspects compared to other backup software, but they might still be a simpler but effective solution. And sometimes simple is best!)
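For reference, rough sketches of both approaches from this thread; the directories and dates are placeholders:

```
# 1) Move changed/deleted files aside into a dated directory
#    instead of overwriting them in place:
rsync -a --delete --backup --backup-dir=/backups/changed-$(date +%F) /src/ /backups/current/

# 2) Dated full-looking trees where unchanged files are hard links
#    into the previous day's tree:
rsync -a --delete --link-dest=/backups/2024-05-01 /src/ /backups/2024-05-02/
```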
Agree. It’s neat for file transfers and simple one-shot backups, but if you’re looking for a proper backup solution, other tools/services have advanced virtually every aspect of backups so much that it pretty much always makes sense to use one of those instead.
And I generally enjoy Veronica’s presentation. Knowledgeable and simple.
Her https://tinkerbetter.tube/w/ffhBwuXDg7ZuPPFcqR93Bd made me learn a new way of looking at data. There were some tricks I hadn’t done before. She has such good videos.
Yep, I found her through YouTube. Her and Action Retro’s content is always great, with some Adrian Black on the side.
I use rsync for many of the reasons covered in the video. It’s widely available and has a long history. To me that feels important because it’s had time to become stable and reliable. Using Linux is a hobby for me so my needs are quite low. It’s nice to have a tool that just works.
I use it for all my backups and moving my backups to off network locations as well as file/folder transfers on my own network.
I even made my own tool (https://codeberg.org/taters/rTransfer) to simplify all my rsync commands into readable files, because rsync commands can get quite long and overwhelming. It’s especially useful for chaining multiple rsync commands together to run under a single command.
I’ve tried other backup and syncing programs and I’ve had bad experiences with all of them. Other backup programs have failed to restore my system. Syncing programs constantly stop working, and I got tired of always troubleshooting. Rsync, when set up properly, has given me far fewer headaches.
rsnapshot is a script for the purpose of repeatedly creating deduplicated copies (hardlinks) of one or more directories. You can choose how many hourly, daily, weekly, … copies you’d like to keep, and it removes outdated copies automatically. It wraps rsync and ssh (public key auth), which need to be configured beforehand.
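Roughly what a minimal config looks like, if it helps; the paths and host are examples, and note that rsnapshot requires tabs (not spaces) between fields:

```
# /etc/rsnapshot.conf (excerpt) -- fields are TAB-separated
snapshot_root	/backups/snapshots/
retain	daily	7
retain	weekly	4
# Pull a remote directory over ssh (public key auth set up beforehand)
backup	root@server.example.com:/etc/	server/
```

Cron then runs `rsnapshot daily` / `rsnapshot weekly` on the matching schedule. The snapshots and their hardlinks all live under the local snapshot_root, on one filesystem.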
Hardlinks need to be on the same filesystem, don’t they? I don’t see how that would work with a remote backup…?
The thing I hate most about rsync is that I always fumble to get the right syntax and flags.
This is a problem because once it’s working I never have to touch it again, because it just works and keeps working. There’s not enough time to memorize the usage.
I feel this too. I have a couple of “spells” that work wonders, kept in a literal small notebook with other one-liners collected over the years. It’s my spell book lol.
This is why I still don’t know sed and awk syntax lol. I eventually get the data in the shape I need and then move on, and never imprint how they actually work. Still feel like a script kiddie every time I use them (so once every few years).

sed can do a bunch of things, but I overwhelmingly use it for a single operation in a pipeline: the s// operation. I think that that’s worth knowing.

sed 's/foo/bar/' will replace the first match of the regex “foo” in each line with “bar”.

That’ll already handle a lot of cases, but a few other helpful sub-uses:

sed 's/foo/bar/g' will replace all text matching the regex “foo” with “bar”, even if there is more than one match per line.

sed 's/\([0-9a-f]*\)/0x\1/g' will take the text matched inside the backslash-escaped parens and put it back in the replacement text wherever one has ‘\1’. In the above example, that’s finding all hexadecimal strings and prefixing them with ‘0x’.

If you want to match a literal “/”, the easiest way to do it is to just use a different separator; if you use something other than “/” as the separator after the “s”, sed will expect that later in the expression too. Like this: sed 's%/%SLASH%g' will replace all instances of “/” in the text with “SLASH”.
One trick that one of my students taught me a decade or so ago is to actually make an alias to list the useful flags.
Yes, a lot of us think we are smart and set up aliases/functions and have a huge list of them that we never remember or, even worse, ONLY remember. What I noticed her doing was having something like goodman-rsync that would just echo out a list of the most useful flags and what they actually do.

So nine times out of 10 I just want rsync -azvh --progress ${SRC} ${DEST}, but when I am doing something funky and am thinking “I vaguely recall how to do this”? dumbman rsync, and I get a quick cheat sheet of what flags I have found REALLY useful in the past, or even just explaining what azvh actually does, without grepping past all the crap I don’t care about in the man page. And I just keep that in the repo of dotfiles I copy to machines I work on regularly.

tldr and atuin have been my main way of remembering complex but frequent flag combinations.

Yeah. There are a few useful websites I end up at that serve similar purposes.
My usual workflow is that I need to be able to work in an airgapped environment where it is a lot easier to get “my dotfiles” approved than to ask for utility packages like that. Especially since there will inevitably be some jackass who says “You don’t know how to work without google? What are we paying you for?” because they mostly do the same task every day of their life.
And I do find that writing the cheat sheet myself goes a long way towards me actually learning them so I don’t always need it. But I know that is very much how my brain works (I write probably hundreds of pages of notes a year… I look at maybe two pages a year).
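For what it’s worth, a tiny sketch of that dumbman idea as a shell function you could drop into dotfiles; the name and the flag list are just the examples from the thread:

```
# Hypothetical "dumbman" cheat-sheet function, per the idea above.
dumbman() {
  case "$1" in
    rsync)
      cat <<'EOF'
rsync -azvh --progress SRC DEST
  -a          archive mode (recursion, perms, times, symlinks)
  -z          compress in transit
  -v          verbose
  -h          human-readable sizes
  --progress  show per-file transfer progress
  --delete    mirror: remove files from DEST that are gone from SRC
EOF
      ;;
    *)
      echo "no cheat sheet for: $1"
      ;;
  esac
}
```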
Most Unix commands will show a short list of the most-helpful flags if you use --help or -h.