Hello,
My website (shared hosting) is growing bigger.
Usually I back up using MySQLDumper (http://www.mysqldumper.net/).
It backs up fine, and there is no error when restoring. But then, when I browse the site, some posts are missing (not found), although the links to those missing posts are still there. I have no idea what's going on. When I look at the database, checking the tables and comparing them to the original database, there appears to be no data loss: the row counts are the same.
I don't really like the idea of dumping the database; MySQLDumper is just the best choice for shared hosting.
Today I am moving to an unmanaged VPS with root access, so I have more backup options, and I have some questions about each of the choices below:
- Copying individual MyISAM table files
(shut down mysqld and copy the .frm, .MYD, and .MYI files from one database folder to another)
Questions:
(a) Can I use this method to move a MySQL database folder from one server to another server running a different MySQL version?
(b) Can these backup files be moved to a different OS (for example, Debian to CentOS)?
- mysqlhotcopy (backs up while the database server is running)
Question:
(a) Is this just another type of dump?
I prefer copying over dumping the database into another format (for example, CSV) because I'm worried about data loss. That would be a big pain.
But I do like mysqlhotcopy, because I can back up a snapshot of the database. It saves a lot of time.
If you know of another safe way to back up, I would greatly appreciate it.
Thanks in advance.
Comments
MySQLDumper is great. Use the Perl script from MySQLDumper.
Thanks for letting me know, but sadly the Perl script cannot run on that server.
How big is the db?
I have a 3 GB db that gets backed up as SQL every night using mysqldump (you can run this from the command line of the VPS):
mysqldump -uusername -psecret dbname --opt --single-transaction \
  --ignore-table=$db.cache \
  --ignore-table=$db.cache_filter \
  --ignore-table=$db.cache_menu \
  --ignore-table=$db.cache_page \
  --ignore-table=$db.cache_views \
  --ignore-table=$db.devel_queries \
  --ignore-table=$db.devel_times \
  --ignore-table=$db.search_dataset \
  --ignore-table=$db.search_index \
  --ignore-table=$db.search_total \
  --ignore-table=$db.sessions \
  --ignore-table=$db.watchdog > file.sql
EDIT - the command above does not back up the structure of the ignored tables, so you also need to run this, which gives you a file with the full structure:
mysqldump -d -uusername -psecretpass databasename > structure.sql
When you come to restore, you need to load the structure in first, then the data:
$ mysql databasename < structure.sql
$ mysql databasename < file.sql
There are more notes on my blog and a script which also compresses the db and stores on s3 - http://www.initsix.co.uk/content/script-efficient-drupal-database-backup...
For backing up MySQL using file-level backups: it would work, but you would need to use something like the mysql client to lock and flush the tables, then use XFS to freeze the filesystem, then take a copy. I wouldn't like to think of changing MySQL versions like this, though!
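A rough sketch of that lock + freeze + copy sequence might look like the following. All paths and credentials are placeholders, it assumes the data directory sits on an XFS filesystem with xfsprogs installed, and it relies on the mysql client's system command to run shell commands while the session (and therefore the read lock) stays open - treat it as a starting point, not a finished script:

```shell
# Hold FLUSH TABLES WITH READ LOCK for the duration of the copy by doing
# the filesystem work from inside the same mysql client session.
# Assumptions: datadir /var/lib/mysql on XFS, backup target /backup.
mysql -uroot -psecret <<'SQL'
FLUSH TABLES WITH READ LOCK;
system xfs_freeze -f /var/lib/mysql
system cp -a /var/lib/mysql /backup/mysql-snapshot
system xfs_freeze -u /var/lib/mysql
UNLOCK TABLES;
SQL
```

The lock is also released automatically if the session dies, so a failed copy does not leave the server locked forever.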
mysqldump is the de facto tool for doing this; you can pipe it between SSH sessions and all sorts of stuff.
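For instance, a dump can be compressed and streamed straight to another box over SSH, so no intermediate file lands on the web server (hostname, credentials, and paths here are placeholders):

```shell
# Stream a compressed dump to a remote backup host over SSH.
mysqldump -uusername -psecret dbname | gzip -c | ssh backup@backuphost 'cat > /backup/dbname.sql.gz'
```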
If you have a huge online db and can't take it offline to dump it properly, then the only other way I know (without using more tools) is to run a second MySQL db as a slave and take backups from that. I started to implement it and half wrote the instructions, but then didn't bother, as I'm running on EC2 with XFS + EBS snapshots.
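The slave approach could be sketched roughly like this: pause the slave's SQL thread so the data stops changing, dump, then resume replication. Credentials and paths are placeholders:

```shell
# Back up from a replication slave: the master keeps serving the site,
# the slave just falls a few minutes behind while the dump runs.
mysql -uroot -psecret -e "STOP SLAVE SQL_THREAD;"
mysqldump -uroot -psecret --all-databases > /backup/slave-dump.sql
mysql -uroot -psecret -e "START SLAVE;"
```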
Anyway, hope this helps.
Systems architecture and Drupal development - www.initsix.co.uk
Thanks for this great guide. I need to learn this.
We take a double backup approach. We have constant backups being done through replication, and then we have a script that runs at 3am every day that locks and flushes the tables, then does an rsync of the data directory. This method works out rather well.
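For reference, the lock-then-rsync step can be sketched like this (paths and credentials are assumptions, and it uses the mysql client's system command to keep the read lock held while the copy runs):

```shell
# Nightly lock + rsync of the data directory. rsync only transfers the
# files that changed since the last run, which keeps the window short.
mysql -uroot -psecret <<'SQL'
FLUSH TABLES WITH READ LOCK;
system rsync -a /var/lib/mysql/ /backup/mysql/
UNLOCK TABLES;
SQL
```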
HollyIT - Grab the Netbeans Drupal Development Tool at GitHub.
About "locks and flushes the tables": are you using mysqlhotcopy?
(Sorry, I'm still not clear on how the whole table-locking function works.)
Thanks.
It seems mysqlhotcopy does all this for you in the background.
http://dev.mysql.com/doc/refman/5.0/en/mysqlhotcopy.html
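Per that manual page, mysqlhotcopy is a Perl script shipped with MySQL that locks and flushes the tables itself and then copies the table files; a minimal invocation (credentials and target directory are placeholders, and it works for MyISAM tables only) looks like:

```shell
# Snapshot the dbname database into /backup/snapshots/dbname.
mysqlhotcopy --user=root --password=secret dbname /backup/snapshots
```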
Systems architecture and Drupal development - www.initsix.co.uk
We are using a CLI PHP script that connects to MySQL and just issues the flush and lock commands. You don't really need to "flush" for a backup, though it is a good idea. The idea behind locking the tables is that the data doesn't get modified during the backup. If you don't lock the tables, you can end up with a bad backup - i.e., if a node is published while the backup is happening, {node} gets backed up with the new node but {node_revisions} gets backed up with the old data.
The rsync backup is actually a lot faster than doing a regular mysqldump. The main reason is that we are just copying files, versus having mysqldump write out a bunch of SQL commands into a text file.
Another reason we went this route was so we could add in a bunch of other maintenance tasks at the same time. Basically, when the PHP script runs, the site is automatically put into maintenance mode. The cache is totally flushed and the backups are done. After that, any MyISAM tables with more than 5% overhead get issued an OPTIMIZE command, and once all that is done the tables are checked and the site is put back online. This whole process takes less than a minute to run on a database approaching 1.5 GB (and that's without the Drupal cache in it).
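The ">5% overhead" step could be approximated in shell like this (our actual script is PHP; database name and credentials below are placeholders). information_schema's DATA_FREE column reports the reclaimable overhead in bytes:

```shell
# Find MyISAM tables whose overhead exceeds 5% of their data size
# and optimize each one. -N suppresses the column-name header row.
mysql -N -uroot -psecret -e "
  SELECT table_name FROM information_schema.tables
  WHERE table_schema = 'dbname'
    AND engine = 'MyISAM'
    AND data_length > 0
    AND data_free / data_length > 0.05;" |
while read t; do
  mysql -uroot -psecret dbname -e "OPTIMIZE TABLE \`$t\`;"
done
```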
HollyIT - Grab the Netbeans Drupal Development Tool at GitHub.
Zrm
I use Zmanda Recovery Manager for MySQL, community edition (free), http://zmanda.com/backup-mysql.html, and it's great.
xtrabackup is a nice open source solution
xtrabackup was created by Percona as an open-source alternative to mysqlhotcopy. I'm currently doing some research and am planning to implement it for online backups (on my slave server).
More information can be found here:
http://www.percona.com/docs/wiki/percona-xtrabackup:xtrabackup_manual
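As a quick taste, the basic workflow from that manual is a two-step backup-then-prepare (paths here are placeholders): the backup pass copies the InnoDB files while the server runs, and the prepare pass replays the log to make the copy consistent.

```shell
# Hot-copy the running server's data files, then make them consistent.
xtrabackup --backup --datadir=/var/lib/mysql --target-dir=/backup/base
xtrabackup --prepare --target-dir=/backup/base
```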