Backup Your Databases with mysqldump – The Quick, Reliable Way
If you’re running a website or an internal tool on MySQL and you haven’t yet backed up your data, you’re basically gambling with a hard drive that might die any minute. This post walks through using mysqldump from the command line to create safe, portable snapshots of your databases. No fancy GUI, no extra overhead, just plain old SQL text files you can drop on a USB stick or ship to the cloud.
Why mysqldump is the tool of choice
I’ve watched coworkers sit in disbelief after an unplanned server reboot wiped out half a month’s worth of data. The culprit? They never ran mysqldump beforehand and ended up with a corrupted data directory that had to be rebuilt from scratch. With mysqldump you get a lightweight, plain-SQL dump that (with the right flags) is transaction-aware and can usually be restored even across MySQL versions.
Step 1: Pick your target database(s)
mysqldump --user=root --password=secret mydatabase > mydatabase.sql
- The --user and --password flags handle authentication. Note that putting the password directly on the command line exposes it in your shell history and the process list; pass --password with no value to be prompted interactively instead, or use a client options file. A dedicated backup account needs only SELECT and LOCK TABLES on the target schema (plus SHOW VIEW and TRIGGER if you dump those objects).
- Dropping the dump to a file (> mydatabase.sql) keeps your shell tidy and gives you a single file you can copy or compress later.
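A quick aside on that password: rather than typing it on the command line at all, you can let mysqldump pick it up from a client options file, which it reads automatically. A minimal sketch, assuming a dedicated backup account (the user name here is an assumption):

```ini
# ~/.my.cnf -- read automatically by mysqldump and the mysql client.
# Restrict it with chmod 600 so only your user can read the password.
[client]
user=backup_user
password=secret
```

With this in place, a plain `mysqldump mydatabase > mydatabase.sql` works with no credentials on the command line.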
If you need every database in one go, use --all-databases in place of the database name:
mysqldump --user=root --password=secret --all-databases > all.sql
But be careful: this creates one massive file and includes the mysql system database. If you only want a handful of schemas, list them explicitly with --databases db1 db2 instead; save --all-databases for when you really need everything.
Step 2: Add useful options for stability
mysqldump --single-transaction \
--quick \
--lock-tables=false \
mydatabase > mydb.sql
- --single-transaction starts a transaction before dumping, so you get a consistent snapshot without locking tables, which is essential for high-traffic production systems. One caveat: this only guarantees consistency for transactional engines such as InnoDB; MyISAM tables can still change mid-dump.
- --quick streams rows from the server to your file as they’re read. It keeps memory usage low—no need to let mysqldump chew through a gigabyte of data in RAM.
- --lock-tables=false (the conventional spelling is --skip-lock-tables) stops mysqldump from locking all tables up front, letting writes continue during the backup. Strictly speaking it’s redundant here, because --single-transaction already turns table locking off, but it makes the intent explicit.
You’ll notice I’m avoiding the default locking behavior. Full table locks are fine for small dev databases but kill write performance on live sites.
Step 3: Compress on the fly to save space
mysqldump --single-transaction mydatabase | gzip > mydb.sql.gz
Piping straight into gzip typically shrinks a SQL dump to a fraction of its original size, since dumps are dominated by repetitive INSERT statements. If you’re backing up a huge schema, consider bzip2 or even xz for better ratios, but the trade-off is CPU time.
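If you’re curious what the ratio actually looks like, here’s a quick sketch using throwaway generated data instead of a real dump, so it needs no MySQL server at all:

```shell
#!/bin/sh
# Generate SQL-like text and compare sizes before and after gzip.
# Real dumps are dominated by repetitive INSERT statements, so
# ratios are usually far better than 2:1.
tmp=$(mktemp -d)
seq 1 5000 | awk '{ printf "INSERT INTO t VALUES (%d);\n", $1 }' \
    > "$tmp/sample.sql"
gzip -9 -c "$tmp/sample.sql" > "$tmp/sample.sql.gz"
ls -l "$tmp/sample.sql" "$tmp/sample.sql.gz"
rm -rf "$tmp"
```

Swap `gzip -9 -c` for `xz -9 -c` to compare compressors on your own data before committing to one in the backup pipeline.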
Step 4: Verify the dump before you lose it
gunzip -c mydb.sql.gz | mysql --user=root --password=secret testdb
Try restoring into a scratch database first (create testdb beforehand; the mysql client won’t create it for you). If the command hangs or throws syntax errors, you’ve got a corrupted or truncated dump. It’s better to catch that now than in production.
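When a full test restore is impractical, there’s a cheaper sanity check: mysqldump normally finishes a successful dump with a "Dump completed" comment on the last line, and a dump cut off mid-stream (full disk, killed process) won’t have it. A small sketch, with check_dump as a made-up helper name:

```shell
#!/bin/sh
# check_dump: succeed only if the gzipped dump ends with
# mysqldump's completion marker; a truncated dump lacks it.
check_dump() {
    gunzip -c "$1" | tail -n 1 | grep -q 'Dump completed'
}

# Usage: check_dump /backups/mydb.sql.gz || echo "dump looks truncated"
```

This catches truncation, not logical corruption, so it complements a periodic test restore rather than replacing it.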
Step 5: Automate with cron
For Linux:
0 3 * * * /usr/bin/mysqldump --single-transaction mydatabase | gzip > /backups/mydb_$(date +\%F).sql.gz
That runs nightly at three a.m. (note the escaped % signs, which cron would otherwise treat as line terminators), names each file after the current date, and leaves you with one full dump per day. These are full backups, not incrementals; mysqldump on its own can’t do incremental backups.
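Date-stamped files pile up forever, so it’s worth pairing the cron job with a retention sweep. A sketch under the assumption that backups live in /backups and two weeks of history is enough (prune_dumps is a made-up name):

```shell
#!/bin/sh
# prune_dumps: delete compressed dumps older than a retention
# window so the backup directory doesn't grow without bound.
prune_dumps() {
    dir=$1
    keep_days=${2:-14}   # default to two weeks of history
    find "$dir" -name '*.sql.gz' -type f -mtime +"$keep_days" -delete
}

# Run it as a second cron entry, e.g.: prune_dumps /backups 14
```

Run the sweep a little after the dump job so a failed backup night never coincides with deleting your oldest good copy.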
When mysqldump isn’t enough
If you need point‑in‑time recovery or want to avoid downtime altogether, consider binary logs or replication instead. But for most folks who just want an offline snapshot, mysqldump is rock solid—no extra software, no licensing headaches.
That’s it – you’re now armed with a quick, reliable way to back up any MySQL database without pulling out the big guns.