The article explains how to effectively use rsync on Linux for keeping folders synchronized, providing ten practical patterns ranging from simple local mirroring to automated remote backups. It covers core options like -a, -v, --delete, compression with -z, progress indicators, and checksum verification, showing real‑world scenarios such as photo libraries, web servers, and large database transfers. The piece also introduces advanced techniques, including hard‑linked snapshots via rsnapshot, excluding directories like node_modules and .git, scheduling cron jobs with lockfiles, monitoring throughput with pv, performing dry runs, and cleaning up old backups automatically. Throughout, the author offers concise commands, explains why each flag matters, and shares personal anecdotes to illustrate how these strategies prevent data loss and save storage space.



How to use Rsync on Linux: Top Practical Examples

Want to keep a folder in perfect shape without fussing over file‑by‑file copies? Rsync on Linux is your go‑to tool for quick, incremental syncs—local or remote, full or selective. Below are the most useful patterns I’ve used in real life, from simple mirroring to automated backups that never miss a change.

1. Basic local sync – keep two directories identical
Command
rsync -av --delete /source/ /destination/

Why it matters:

`-a` (archive mode) recurses and preserves ownership, timestamps, permissions, and symlinks; `-v` lists each file as it is transferred so you know something is happening; `--delete` removes files from the destination that have vanished from the source, so the target never holds stale junk. I once had a photo library where old albums were moved out of a shared folder; without `--delete`, the backup kept growing with those orphaned files.

2. Pull a backup from a remote server over SSH
Command
rsync -azP --exclude='*.tmp' user@remote.example.com:/home/user/backup/ /local/backup/

Why it matters:

`-z` compresses data on the wire—great when the link is slow. `-P` shows progress and allows resuming if the connection drops. Excluding temporary files keeps bandwidth to a minimum. I’ve set this up nightly for my home server; even after an unexpected power loss, rsync can resume from where it left off instead of starting over.

3. Incremental backup with hard links (time‑stamped snapshots)
Command
rsnapshot configtest

(First edit `/etc/rsnapshot.conf`, whose fields must be tab-separated, to set `snapshot_root /backups/` and add a backup line such as `backup /var/www/ localhost/`; `configtest` then validates the syntax.)

Run:

sudo rsnapshot daily

Why it matters:

Each snapshot is a hard‑linked copy of unchanged files, so you get the look of a full backup without using extra disk space. I use this on my web server to roll back a bad deployment in seconds—no need for a heavy VM restore.

4. Use `--inplace` to update large files without creating temp copies
Command
rsync -av --inplace /source/largefile.dat user@remote:/backups/

Why it matters:

Without `--inplace`, rsync writes a temporary copy on the destination and then renames it over the target, briefly doubling the disk usage there. When backing up gigabyte‑sized databases on a partition that's almost full, this flag saves the day. The trade‑off: if the transfer is interrupted, the destination file is left half‑updated, so reserve `--inplace` for files you can simply re‑sync.

5. Mirror a directory while excluding certain patterns
Command
rsync -av --exclude='node_modules/' --exclude='.git/' /project/ /backup/project/

Why it matters:

Source code often contains huge auto‑generated folders that you don’t want in the backup. By listing excludes, rsync skips them entirely, saving time and space. I use this pattern for a daily sync to an external SSD before I leave for the day.

6. Verify integrity with checksums
Command
rsync -avc /source/ /destination/

Why it matters:

`-c` forces rsync to compare files by checksum instead of just size and modification time, at the cost of reading every file on both sides in full. When a corrupted network transfer caused some PDF copies to appear blank, the checksum check immediately flagged the mismatch.

7. Schedule regular syncs with `cron` (and avoid overlap)
Crontab entry
0 3 * * * /usr/bin/rsync -az --delete /var/log/ /mnt/backup/logs/

Why it matters:

Running rsync every night at 3 AM ensures logs are safely backed up before the day's activity starts. The `--delete` flag keeps the backup tidy. I added a lockfile check (`flock -n`) so two runs never clash if one drags on.
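The overlap guard itself is worth seeing in action: `flock -n` fails immediately instead of queueing when the lock is already held. A self-contained demonstration using a temporary lock file:

```shell
#!/bin/sh
# flock -n exits non-zero at once if another process holds the lock, so an
# overrunning sync makes the next cron run skip instead of stacking up.
set -eu
lock=$(mktemp)

flock -n "$lock" sleep 2 &    # first "sync" holds the lock for a moment
sleep 1                       # give it time to grab the lock

if ! flock -n "$lock" true; then
    echo "second run skipped: previous sync still going"
fi
wait
rm -f "$lock"
```

In the crontab this becomes, for example, `0 3 * * * flock -n /var/lock/rsync-logs.lock rsync -az --delete /var/log/ /mnt/backup/logs/` (the lock path is illustrative).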

8. Monitor progress with `pv` for large transfers
Command
rsync -av /bigdata/ /mnt/nas/bigdata/ | pv -lep -s "$(rsync -avn /bigdata/ /mnt/nas/bigdata/ | wc -l)" > /dev/null

Why it matters:

With `-l`, `pv` counts lines rather than bytes (rsync prints one line per file), so the bar and ETA track progress through the file list; the inner dry run supplies the expected total. That's handy when the transfer takes hours. On rsync 3.1 and later, `--info=progress2` gives a similar whole-transfer readout with no extra tools. I used this when moving a 200 GB media library to a new NAS; the progress bar keeps me from wondering if rsync is still alive.

9. Quick test of what would change (dry run)
Command
rsync -av --dry-run /source/ /destination/

Why it matters:

Dry runs let you see exactly which files would be transferred or deleted without touching anything. When I set up a new backup rule, the dry‑run output helped me catch an accidental `--exclude='*'` typo that would have wiped my entire destination.

10. Use `rsync` as part of a script to clean up old snapshots
Script snippet
#!/bin/bash
snapshot_dir="/backups/daily"
find "$snapshot_dir" -mindepth 1 -maxdepth 1 -type d -mtime +30 -exec rm -rf {} +

Why it matters:

Rsync can keep daily snapshots indefinitely, but that fills up disks fast. The script above automatically deletes snapshots older than a month, keeping the space usage predictable.
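An age-based cutoff has one sharp edge: if the backup job stops running for a month, every snapshot eventually ages past the threshold and the script deletes them all. A count-based alternative keeps the newest N no matter what; the sketch below exercises it in a throwaway directory (the `snap*` names are hypothetical):

```shell
#!/bin/sh
# Count-based retention: always keep the newest $keep snapshots, so a stalled
# backup job can never age the whole history past an mtime cutoff.
set -eu
snapshot_dir=$(mktemp -d)    # stand-in for a real root like /backups/daily
keep=3

for i in 1 2 3 4 5; do mkdir "$snapshot_dir/snap$i"; done

cd "$snapshot_dir"
# List snapshot dirs newest-first, skip the first $keep, delete the rest.
ls -1dt */ | tail -n +"$((keep + 1))" | while IFS= read -r old; do
    rm -rf -- "$old"
done

ls -1d */ | wc -l    # -> 3
rm -rf "$snapshot_dir"
```

Either policy works; the count-based one just fails safer when cron quietly stops firing.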

There you have it—ten solid ways to harness rsync on Linux for everyday backup and sync tasks. Pick the pattern that fits your workflow, tweak the options as needed, and you’ll never be left with half‑copied directories again.