MySQL dump done right

Such a common problem: a site had an emergency crash and needed to restore from backup.

• They discovered the last six months of backups had been sent to a 100% full filesystem – none of the backups were restorable.

• It’s hard for a business to recover from this kind of mistake – you may be turning off the lights

http://www.percona.com/sites/default/files/presentations/Deadly%20Sins%20of%20MySQL%20Operations%201.22.14.pdf
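The failure described above boils down to nobody noticing that the destination filesystem was full. Independent of the script below, a small pre-check could refuse to write a new dump when the backup filesystem is nearly out of space. This guard is not from the presentation, and the 80% threshold is just an arbitrary example:

# hypothetical guard, not part of the original setup: skip the backup and complain
# when the filesystem holding /mnt/backup (used later in this post) is over 80% full
used_pct=$(df -P /mnt/backup | awk 'NR==2 {print $5}' | tr -d '%')
if [ "$used_pct" -ge 80 ]; then
    echo "backup filesystem is ${used_pct}% full, refusing to dump" >&2
    exit 1
fi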

As long as your database is no more than a few GiB, mysqldump works well, so let's run it from crontab. To avoid unexpected issues we use set -e (bash -e), which causes the shell to exit if any subcommand or pipeline returns a non-zero status. Let's assume we use the s3cmd CLI to upload the file to AWS S3; you can easily replace it with rsync or something else.
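One caveat the -e option does not cover (and the original script does not handle): bash judges a pipeline only by its last command, so if mysqldump fails while gzip exits 0, the script keeps going. A small addition near the top of the script, not part of the original, closes that gap:

set -o pipefail  # combined with -e: the dump pipeline aborts if mysqldump itself fails,
                 # not only when gzip (the last command in the pipeline) fails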

mkdir -p /mnt/backup

echo '#!/bin/bash -e
# timestamped, gzipped dump written to the backup filesystem
current_date=`date +%Y-%m-%d-%H:%M`
dbfile_name="dump.$current_date.sql.gz"
dbfile_full="/mnt/backup/$dbfile_name"
# note: --innodb-optimize-keys is a Percona Server extension to mysqldump; drop it on stock MySQL
mysqldump --innodb-optimize-keys --single-transaction -u user db | gzip -c > "$dbfile_full"
# upload the dump; the s3cmd output is mailed as a success notification
log=$(s3cmd put -v "$dbfile_full" "s3://s3_bucket/mysql/$dbfile_name" 2>&1)
echo "$log" | mail -s "[backup] DB#0" your_email@domain.com
# flush dirty pages and drop the page cache filled by the dump (needs root)
/bin/sync; echo 1 > /proc/sys/vm/drop_caches;
' > /mnt/backup.sh
chmod +x /mnt/backup.sh

echo "0 */24 * * * /mnt/backup.sh" | crontab -

What exactly does this simple script do:

• makes sure the script stops immediately if anything goes wrong (the -e option)

• once you receive an email, you know the dump file was uploaded successfully

• you can track the dump file size and see whether the DB slave is still alive

• all emails are grouped into a thread by subject, which makes them easy to track

• avoids swapping by dropping the file cache right after the dump file is uploaded

• once the emails stop arriving, in most cases it means the filesystem is 100% full
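The Percona story above is ultimately about restorability: having dump files is not the same as being able to restore them. The original post stops at uploading, but an occasional test restore into a scratch database is a cheap extra check. The database name restore_test and the dump file name below are only placeholders:

dbfile_name="dump.2024-06-01-00:00.sql.gz"  # placeholder; pick a recent dump from the bucket
s3cmd get "s3://s3_bucket/mysql/$dbfile_name" "/tmp/$dbfile_name"
mysql -u user -e "DROP DATABASE IF EXISTS restore_test; CREATE DATABASE restore_test"
gunzip -c "/tmp/$dbfile_name" | mysql -u user restore_test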
