Send backups to Amazon S3 bucket


Both Barracuda and Octopus have a wonderful backup feature that allows you to keep regular scheduled backups and restore from them with the click of a button.
Here is a quick guide to follow if you would like to send these backups to your Amazon S3 bucket.

This feature is enabled from your Barracuda / Octopus admin panel under Hosting -> Features -> Experimental.
After setting up your backup preferences, you can find the site backups in your filesystem under /data/disk/MYUSERNAME/backups
What I did was simply mount my Amazon S3 bucket in place of the backups folder using the s3fs utility.

Here is how...

  • log into your server as root and run the following:
    apt-get install build-essential libfuse-dev fuse-utils libcurl4-openssl-dev libxml2-dev mime-support
    wget http://s3fs.googlecode.com/files/s3fs-1.58.tar.gz
    tar xvzf s3fs-1.58.tar.gz
    cd s3fs-1.58/
    ./configure --prefix=/usr
    make && make install

    ^ s3fs installation instructions for Debian / Ubuntu
  • now set up your S3 bucket if you haven't already
  • you will need the bucket name, plus the access key and secret access key of an authorized user
  • what I did was create a group in IAM called "backup managers" and give it read/write access to S3
  • create a user and click the link to view the user's access keys
  • back in the terminal, execute the following (still as root):

nano /etc/passwd-s3fs

  • enter your credentials in this format
    bucketName:accessKeyId:secretAccessKey

  • save & exit

    chmod 640 /etc/passwd-s3fs
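If you'd rather create the credentials file in one shot than edit it in nano, a small helper like this works (the `write_s3fs_creds` name is mine, not part of s3fs; the bucket and key values in the example are placeholders):

```shell
# write_s3fs_creds FILE BUCKET ACCESS_KEY SECRET_KEY
# Writes a passwd-s3fs style credentials file with the restrictive
# permissions s3fs expects (it refuses credential files readable by others).
write_s3fs_creds() {
    printf '%s:%s:%s\n' "$2" "$3" "$4" > "$1"
    chmod 640 "$1"
}

# as root, for example:
# write_s3fs_creds /etc/passwd-s3fs my-backup-bucket AKIAEXAMPLE mySecretKey
```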
    nano /etc/init.d/s3backup
  • paste the following script

#! /bin/sh
# /etc/init.d/s3backup
#
bucketName='YOURBUCKETNAME'
mountDestination='ABSOLUTE DESTINATION FOLDER'

case "$1" in
    start)
        echo "mounting '$bucketName' to '$mountDestination'"
        /usr/bin/s3fs -o allow_other "$bucketName" "$mountDestination"
        ;;
    stop)
        echo "unmounting '$bucketName' from '$mountDestination'"
        umount "$mountDestination"
        ;;
    *)
        echo "Usage: /etc/init.d/s3backup {start|stop}"
        exit 1
        ;;
esac

exit 0
  • remember to change the bucketName & mountDestination variables
  • save & exit
    chmod 755 /etc/init.d/s3backup  <--- this makes it executable
    update-rc.d s3backup defaults  <--- this makes it a startup script
  • log in as your regular user and empty the backup directory so you can mount the s3 bucket in it
    mv ~/backups ~/backups.old
    mkdir ~/backups
  • at this point, make sure your bucket permissions are correct: the bucket should be readable/writable by Authenticated Users, your user group should be able to read/write S3 buckets, and your user should be part of that group.
  • you can now log into the server as your regular user and run
    sudo /etc/init.d/s3backup start   <--- to mount the s3 bucket
    mv ~/backups.old/* ~/backups/   <--- to put your old backups in the bucket
    sudo /etc/init.d/s3backup stop   <--- to unmount the s3 bucket if you need to
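Before moving your old backups into the bucket, it's worth confirming the mount actually succeeded. A small check like this can help (the `is_s3_mounted` helper is my own name, not part of s3fs; depending on the s3fs version the filesystem type column in the mount table reads `fuse.s3fs` or just `fuse`, so the pattern matches either):

```shell
# is_s3_mounted MOUNTPOINT [MOUNTS_FILE]
# Succeeds if MOUNTPOINT appears as a fuse/s3fs mount.
# Reads /proc/mounts by default; a file path can be passed for testing.
is_s3_mounted() {
    grep -qs "^[^ ]* $1 fuse" "${2:-/proc/mounts}"
}

# example:
# is_s3_mounted /data/disk/MYUSERNAME/backups && echo "bucket is mounted"
```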
  • if you aren't able to execute sudo commands, add yourself to the sudo group in /etc/group and log out and back in to apply the change
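A quick way to check your group membership before editing /etc/group by hand (the `in_group` helper is just an illustration; on Debian/Ubuntu, `usermod -a -G sudo MYUSERNAME` run as root makes the same change for you):

```shell
# in_group USER GROUP
# Succeeds if USER is a member of GROUP, using id -nG to list all
# of the user's groups (primary and supplementary).
in_group() {
    id -nG "$1" 2>/dev/null | tr ' ' '\n' | grep -qx "$2"
}

# example:
# in_group "$USER" sudo || echo "not in sudo group yet"
```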

Comments

tribe_of_dan: Wow, great stuff thanks!

elvis2: Any fix on how to secure the backups (s3 mount) in case a hacker gains access to the webserver?
