Best strategy for offline backup of Octopus instances

jamiet

Hi,

I have a number of Octopus instances and would like to create an offline backup routine for each of them. This may be more down to my lack of understanding of rsync and the relevant switches (I am using luckyBackup as a GUI for rsync). I have set up SSH keys for each Octopus instance (the *.ftp users) to allow passwordless logins and have successfully run the rsync backup script through luckyBackup. However, all it seems to create on the local drive is a replica of the symlink to the actual backup directory - it does not back up any of the tar files.

My thoughts are as follows:
All of my platforms are either based on the Octopus platforms or are custom platforms generated from a makefile stored in a git repo elsewhere - and the sites on the server are currently mainly brochure sites without comments etc. As a result I only really need to back up the regular tar.gz files created by the Aegir backup routine. If I need to quickly recreate the server and its sites, I should be able to rebuild from scratch using a newly provisioned server and the BOA scripts, then create new Octopus instances and import the backed-up tar.gz files.

My questions are:
- Is the above an OK backup process?
- How do I rsync the tar.gz backup files? Do I use the *.ftp users, and if so, how do I get rsync to follow the symlinks so it actually backs up the tar.gz files (rough sketch of what I am attempting below)? If I need to use a different user, what permissions/groups does it need to be able to back up the relevant directories?
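
For reference, the sort of command I think I am after looks something like this - user, host and paths are just illustrative, and I am not sure the symlink handling is right:

```
# Pull the Aegir backup tarballs down to the local machine.
# --copy-links makes rsync copy the file a symlink points to
# instead of recreating the symlink itself.
rsync -avz --copy-links \
    o1.ftp@myvps.example.com:backups/ \
    /home/jamie/octopus-backups/o1/
```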

TIA,

JamieT

Comments

Backup to S3 with s3fs

geofftech

I use s3fs to map the backups folder to an Amazon S3 bucket.

That way backups are auto-magically offsite.

[Note: with this approach the backups are still reachable from the server over SSH, so if the box is hacked an attacker could bring down your site and delete your backups as well. But as a quick off-site solution it works fine.]

Here are my notes if it helps http://geoff.com.au/content/setup-backup-folder-ubuntu-using-amazon-s3
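
The mount itself boils down to something like this - the bucket name, mount point and credentials file are placeholders, so adjust for your own setup:

```
# Mount an S3 bucket over the Aegir backups directory so new
# tarballs land straight in S3 (bucket and paths are examples).
s3fs my-aegir-backups /data/disk/o1/backups \
    -o passwd_file=/etc/passwd-s3fs \
    -o allow_other \
    -o use_cache=/tmp
```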

Thanks for the thought

jamiet

Thanks for the thought starter - I will take a look at Amazon S3. I am currently looking at backing up to my local server. I have overcome one hiccup by adding my SSH user to the group "users", which has enabled me to back up the site-level tarballs. However, I cannot back up the system folder within the backups directory. I also have a site with a large 500 MB files directory; following best practice I have symlinked this into the sites directory, and to make the site work correctly I had to set its group to www-data, so I cannot back this up either.
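
For the record, the group change was roughly the following (the backup login name is just an example); in theory the same trick with www-data would cover the files directory, but I have not tried it yet:

```
# Let the backup login read the aegir-owned tarballs (group "users")
usermod -a -G users backupuser
# Untested: the equivalent for the www-data-owned files directory
usermod -a -G www-data backupuser
```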

However, I was reading the guidance on remote site migration and it looks like you can enable SSH for the Octopus system user (not the o1.ftp user but the o1 user). Is this OK to do, or does it open up significant security issues?

Regards,

JamieT

thank you for posting this!

socialtalker

This whole backup thing is something I have been avoiding. Last year I set aside nearly half of my server for backups, but it didn't make sense to me to have it all in the same place. Sorry to the original poster, I don't mean to take over the thread, but could you give a hint how to retrieve and re-install the data if someone brings the site down?

No apology necessary

jamiet

No apology necessary - it's the vital next question to ask: I have my backups, so how do I restore?

Here's my far from perfect take on this - others feel free to chip in.

I have currently set up key-based SSH logins on each of the Octopus system accounts and rsync the backups directory (and a separate files directory for a site over 500 MB) down to my local server on a regular basis. This local server/PC is the hub of my digital life, so it is in turn backed up with an online backup service (CrashPlan).
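
The regular pull is just a cron job on the local server - roughly this, with the host, user and paths made up for the example:

```
# /etc/cron.d/octopus-backup (illustrative host, user and paths)
# Pull the o1 instance's backups directory down every night at 02:30
30 2 * * * jamie rsync -az --copy-links o1@vps.example.com:/data/disk/o1/backups/ /srv/backups/o1/ >> /var/log/octopus-backup.log 2>&1
```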

The reason I only back up the backups directory is that the platforms I use are all based on drush make files which are version controlled, and I pin the version of every module I download, so if I need to recreate a platform I can just pull the make files from the repo.
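
Rebuilding a platform is then just a matter of pulling the makefile from the repo and letting drush do the work (the repo URL and paths are examples):

```
# Rebuild a platform from its version-pinned makefile
git clone git@example.com:makefiles.git
drush make makefiles/brochure.make /data/disk/o1/static/brochure-001
```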

Should the server go south then, depending on the issue, I could set up a new server with BOA, create new Octopus instances, upload the site backups and import them into the new instance. For importing a site from an existing backup I follow the instructions on the omega8.cc website (step 4 onwards):
http://omega8.cc/import-your-sites-to-aegir-in-8-easy-steps-109
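
Getting a saved tarball back onto the new box is the easy part - roughly the following, with the host and filenames made up; the actual Aegir import then follows the guide above:

```
# Copy a site backup up to the fresh Octopus instance and unpack it
scp /srv/backups/o1/example.com-20130101.tar.gz o1.ftp@newvps.example.com:
ssh o1.ftp@newvps.example.com \
    'mkdir -p import/example.com && tar -xzf example.com-20130101.tar.gz -C import/example.com'
```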

As stated this is not a perfect solution and will not scale to lots of sites easily without a ton of work.

Last time I had an issue with my VPS (a botched BOA 2.03 upgrade - I still have not built up enough courage to try again), their control panel restore did not work and I had to get them to do a manual restore, which took them two days. I am now seriously considering moving to a more reliable host - currently checking out Linode, and their backup add-on looks quite good. If anyone has experience of BOA on a Linode 512 node I would be interested in feedback.

HTH,

JamieT

re: Linode 512

jtbayly

We ran on a Linode 512 for quite a while, but when we began to have sites running on more than a couple of platforms, it was necessary for us to go up to 1024. There simply wasn't enough RAM to handle the opcode caching of all our code. The sites still worked fine, but we wanted them faster.

Also, we pay for their backup service, and it works great. I've had to use it. :) It's a bit more complicated to use than the similar service that Slicehost had, but it is more flexible. We were at Slicehost before, and I also had to use our backups there.

Thanks for the feedback

jamiet

Thanks for the feedback - I figured I would try a 512 MB node as our needs are relatively modest, and upgrade if it needs more. Is the upgrade path easy to kick off?
