Project Mercury: PressFlow Drupal+Varnish AMI Alpha4 Release

joshk's picture

Today I'm glad to announce the latest release in this line of AMI development. This update solves a number of issues and moves us one step closer to a stable beta release. The current AMI ID is ami-c353b2aa, and you can find this AMI by searching for "chapter3" or "mercury" in your AWS console.

For more background information about this project, see my initial g.d.o post and my blog post announcing the initial release.

Below you will find the notes for this release. This post also includes a development roadmap, as well as a more explicit explanation of the techniques I'm using to make the AMI work out of the box.

Alpha4 Release Notes:

This release improves stability and adds some critical components that were missing from previous images.

  • Switched from experimental memcached back to APC for simplicity/stability
  • Increased the APC SHM size to 256M to accommodate Drupal caches as well as the opcode cache
  • Installed mod_rpaf for apache2
  • Configured a Launchpad PPA for bzr so it is up to date
  • Installed the Postfix MTA and included an internal metadata lookup to set the hostname, etc.

Future Release Roadmap:

Future releases will focus on increasing stability, improving configuration, and delivering an image that can be used in small to mid-scale production environments and serve as a building block for more advanced architectures.

If you have any interest in helping with the below, or have other suggestions, feel free to drop me a comment or reach out to josh@chapterthree.com.

Goals:

  • Improve boot script process to accept user-data as part of launch command, enabling the auto-attachment of stable EBS volumes for storage. This is absolutely necessary for data persistence in production use.
  • Develop a 64-bit version for those who want to run this on more powerful instances.
  • Test with a more robust/real-world stack of modules, especially around functionality that could be glitchy with Varnish (e.g. AJAX, etc).
  • Investigate server-initiated Varnish cache invalidation. This is possible using the control channel, and would be super-hot to get working.
  • Figure out some ways to package pre-configuration without hard-coding user #1. Is there anything less labor intensive than an install profile?

Anatomy Of This Release

Previous posts have given a general list of the components in use and explained their advantages, but I wanted to give a sysadmin-level view of how the stack actually functions on this server, so that people can start collaborating more meaningfully on the development of this project.

Varnish

Varnish is one of the main keys to the high-performance value of this stack. When you request a page from Mercury, if you are not logged in and Drupal has served the page before, the response is delivered by Varnish, not Apache or Drupal. This is orders of magnitude faster and more scalable.

Varnish is started at boot time and is controlled as a service in the typical Debian/Ubuntu way. You can start, stop, or restart it with /etc/init.d/varnish <command>.

Configuration for Varnish is in two places:

  • /etc/default/varnish contains the configuration used when Varnish is launched at boot, or as the result of a service (re)start. Among other things, this tells the system to look for... (a sketch of this file follows the list)
  • /etc/varnish/default.vcl contains the basic Varnish Configuration Language declarations that work with Pressflow Drupal. Basically, if the request has no cookies and no POST or GET parameters, try to serve it from cache. Otherwise, query Drupal.
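
For the curious, a Debian-style /etc/default/varnish boils down to a single DAEMON_OPTS string. A rough sketch of its shape is below; the listen port, admin port, and storage path here are illustrative rather than copied verbatim from the image:

# /etc/default/varnish -- sketch only; exact values on the AMI may differ
DAEMON_OPTS="-a :80 \
             -T localhost:6082 \
             -f /etc/varnish/default.vcl \
             -s file,/mnt/varnish/varnish_storage.bin,1G"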

Beyond testing for edge-cases with Varnish, one thing I'm very interested in exploring is the Control Channel, which allows for more active manipulation of the service from the back-end. In particular, it's possible to create an active cache-invalidation system, which would allow us to set a very high TTL within Drupal and actively invalidate URLs/pages when they are updated.
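
As a taste of what that could look like from the shell, here's a hedged sketch of purging a single path via the management port. It assumes the Varnish 2.0-era "url.purge <regex>" CLI command and whatever admin address is configured via -T in /etc/default/varnish; a Drupal module would do the same thing over a socket connection.

# Sketch: drop every cached copy of /node/123 from Varnish via the CLI.
# (Later Varnish versions rename this command, e.g. purge.url / ban.url.)
varnishadm -T localhost:6082 url.purge '^/node/123$'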

The Boot Script

By default Varnish sets up its file-backed storage space in /var, but that's not a great place on an EC2 image: /var sits on the root file-system partition, which is only 10GB in size. So one of the things we do at boot time is move this storage over to /mnt, which is the main storage area for an EC2 instance.

We need to do the same with MySQL's files, as they could also take up a lot of space (though really these should be on an EBS volume).

In order to move these things around, and to be sure Pressflow and our Ubuntu distro are up to date, I've included a simple boot script that runs at startup. It lives in /etc/mercury/init.sh and is invoked via /etc/rc.local. The boot script runs some updates, moves some files, dumps all output to /etc/mercury/bootlog, and then writes the current date/time to /etc/mercury/incep. The first thing the script does is look for that incep file, which is how we can be sure it will only run on the initial spin-up and not on every reboot.
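
In outline, the pattern looks like this (a simplified sketch, not the literal contents of init.sh):

#!/bin/sh
# Simplified sketch of the first-boot pattern described above.
if [ -f /etc/mercury/incep ]; then
  exit 0                                   # already ran once; skip on reboots
fi
{
  apt-get update && apt-get -y upgrade     # freshen Ubuntu packages
  # ...move Varnish storage and MySQL data to /mnt, update Pressflow, etc...
} > /etc/mercury/bootlog 2>&1
date > /etc/mercury/incep                  # marker: first-boot setup is done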

Development of this script is an important area for making Mercury better. We should be looking to use user-data parameters from the EC2 launch command to detect and mount an EBS volume for MySQL (and sites/all/files) rather than just using /mnt. There are many other improvements that could be made here as well.
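
As a hypothetical example of where that could go (the variable name and device are invented for illustration; the metadata URL is the standard EC2 one):

# Sketch: let the launch command pass "ebs_device=/dev/sdf" as user-data and
# mount that volume for MySQL and sites/all/files instead of ephemeral /mnt.
USER_DATA=$(curl -s http://169.254.169.254/latest/user-data)
EBS_DEVICE=$(echo "$USER_DATA" | sed -n 's/^ebs_device=//p')
if [ -n "$EBS_DEVICE" ]; then
  mkdir -p /mnt/ebs
  mount "$EBS_DEVICE" /mnt/ebs             # persistent home for the data
fi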

Other System Config of Note

There's still quite a lot of tuning that can be done to the more familiar areas of the system, and I am all ears for suggestions on how best to set up Apache, MySQL, PHP, APC or any other component to run most effectively in the cloud. Below are the interesting bits that seem worth calling out to me.

  • Apache on port 8080: since Varnish is the gatekeeper for all web requests, we need Apache to run on an alternate port which Varnish can query behind the scenes when it actually needs a page. This is configured in /etc/apache2/ports.conf and picked up again in /etc/apache2/sites-available/default, which is the virtualhost that serves requests. (See the sketch after this list.)
  • APC: this is installed via a standard pecl install apc, and configured in /etc/php5/conf.d/apc.ini. I have given it a large SHM memory footprint because we are using it as the main cache backend for Drupal. While pages won't be cached there (they're in Varnish), everything else will be, and we don't want to run out of room.
  • Pressflow: the main document root for the site is in /var/www, and Pressflow is checked out there (and freshened by the boot script) from Four Kitchens' BZR repository.
  • Cacherouter: finally, we're utilizing the Cacherouter module as our Drupal caching system. This is installed in /var/www/pressflow/sites/all/modules/cacherouter and a very basic/vanilla configuration file lives in ...sites/default/settings.php. This is the mechanism by which we utilize the local APC memory cache for Drupal's object caching system.
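
If you want to poke at the first two items on a running instance, they're easy to verify from a shell. The expected values below come from this release's notes; treat the exact strings as illustrative:

# Apache should be listening on the alternate port behind Varnish
grep -i '^Listen' /etc/apache2/ports.conf        # expect: Listen 8080
# APC should have the enlarged shared-memory segment
grep -i 'shm_size' /etc/php5/conf.d/apc.ini      # expect something like: apc.shm_size=256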

In conclusion

This process is getting more and more like second nature to me. I'd like to figure out how to engage more people in development and innovation, and to build a shared repository of tools and practices for this kind of development.

Big ideas going forward would be configuring other kinds of useful Drupal AMIs (e.g. performance benchmarking, automated testing, etc.) as well as mapping out ways to build high-performance/redundant Drupal clusters in the cloud.

Comments

Very Exciting

ccat's picture

Great effort, Josh. This takes care of a lot of legwork for us, and I look forward to helping with Mercury over the next few months.

server-initiated Varnish cache invalidation

mikeytown2's picture

Where/What's the Varnish API for its control channel? Boost is 1/2 there in its implementation of smarter cache expiration. Go get RC2 and play around with it...
What's in RC2 for cache expiration:

  • Flush front page when node is edited/created with promote to front page selected. - Easy to do
  • On node edited/created, flush associated term page caches as well. Works with Views' taxonomy/term/% path as well. - Requires db tables

_boost_get_menu_router() is where the magic happens for getting the right info for any content type.

One other thing that might be of interest in RC2 is the MT cron crawler. Just tested it on cheap shared hosting and using 2 threads it crawls my site 2x as fast as the old way. Using this we can rethink how content is expired... When a page is expired, it could be added to the crawler list instead of being flushed. This is useful if fast & stale is better than slow & fresh. Once again, using the DB you could get all child pages of that page (views pager, etc...).

Boost is also caching AJAX now so sharing info for this would be good. Right now I'm adding &nocache=1 to the end of AJAX requests that should not be cached. Setting the ttl for AJAX to a low value is another way, if cron runs all the time.

In short once you know the API, lift my code and make a varnish module.

Excellent!

joshk's picture

I was hoping something like this would be the case. I will investigate the control channel; I think it's just a matter of making drupal_http_request() (or curl) calls to the right IP/port with some simple language. I will prepare to be "inspired" by Boost. ;)

Good news!

joshk's picture

Turns out the admin interface is a simple TCP/telnet socket thing:

http://varnish.projects.linpro.no/wiki/CLI

Which we can access with fsockopen()! I am going to mess around with this a little bit more, but an active-cache invalidation system (and also varnish status system) should be very straightforward to build!

And here we go

Thanks for your work on this

SeanBannister's picture

Thanks for your work on this Josh, I look forward to contributing.

Just noticed that when I search for the AMI in the AWS Management Console it claims to be CentOS, but once installed it's clearly Ubuntu 9.04 Jaunty.

I've previously worked on a user-data script that performed all the installation and configuration we needed to turn one of Eric Hammond's default Ubuntu EC2 AMIs into a Drupal web server. As you can imagine, it turned into a bit of a monster because we were starting with a bare Ubuntu AMI and doing EVERYTHING in the user-data script. But by offloading the actual configuration to the AMI, the user-data script can become a list of variables that allow custom configurations.

A few things I originally wrote that'd be useful for the Mercury user-data script:

  • Elastic IPs:
    • Create a new Elastic IP and attach the IP to the instance
    • or Attach an existing Elastic IP
  • EBS volumes:
    • Create 1 or more new EBS volumes and attach them to the instance
    • Create 1 or more EBS volumes from existing snapshots and attach them to the instance
    • Attach 1 or more already running EBS volumes
  • Specify which directories or files on the EC2 instance should be symlinked or copied from the EBS volume. This included copying the Apache configuration and vhosts to the server from a preconfigured EBS volume (maybe they should be symlinked?) and symlinking the /var/www directory and the MySQL database to the EBS volume where the data was already stored.
  • Specify Hostnames for the instance for /etc/hostname and /etc/hosts
     

Obviously to do this stuff we'd need ec2-api-tools (http://s3.amazonaws.com/ec2-downloads/ec2-api-tools.zip) which require sun-java6-jre (apt-get install sun-java6-jre) installed in Mercury.
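
For what it's worth, the Elastic IP and EBS steps above come down to a couple of ec2-api-tools calls. A rough sketch (the IP, volume ID, and device are placeholders, and EC2_CERT / EC2_PRIVATE_KEY need to point at the credentials supplied in user-data):

INSTANCE_ID=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)
ec2-associate-address -i "$INSTANCE_ID" 1.2.3.4                # attach an existing Elastic IP
ec2-attach-volume vol-xxxxxxxx -i "$INSTANCE_ID" -d /dev/sdf   # attach an existing EBS volume
# (wait for the attachment to complete, then mount and symlink as described above)
mkdir -p /vol && mount /dev/sdf /vol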

How about, instead of hard-coding user #1, you allow the user to install Drupal? As of Drupal 7 there'll be at least two install profiles that the user may want to choose between. All they'd need is the MySQL password, which they could specify as user-data: mysql_pass="password".

I just tried running my existing user-data script and realized Mercury doesn't support it. Is there any reason for this? I thought the instances were based on Eric Hammond's, which support user-data.

Hmmm...

joshk's picture

This is interesting. The source AMI here is a jaunty server image from Alestic. As you can see from the MOTD:

Amazon EC2 Ubuntu 9.04 jaunty AMI built by Eric Hammond
http://alestic.com  http://ec2ubuntu-group.notlong.com

So I'm not sure why it's showing up as CentOS or behaving in unexpected ways.

It's possible (though I don't really see how) that my own little boot script is somehow conflicting w/your user-data work. Ping me and let's figure out what's going on. I've read up on this stuff, but have not started working with it yet, and this is definitely the "next level" for the project.

Also, the reason for hard-coding user #1 at the moment is that I want to assume that people using the image don't know or don't care about tweaking the settings to set Drupal's performance configuration up correctly to work with Varnish, etc. An install profile would be the ideal way to go here, but I haven't had the time/energy to make that effort. Maybe investigate Features?

http://www.chapterthree.com | http://www.outlandishjosh.com

Sorry my mistake, was in a

SeanBannister's picture

Sorry my mistake, was in a bit of a rush yesterday and ran my user-data scripts incorrectly. Ran them again today and everything worked correctly.

I'm just working on integrating my scripts for auto mounting EBS volumes and attaching Elastic IPs. I'll release a public AMI for people to test and then once it looks good we can integrate it into your AMI. I don't think there's an easier workflow???

Yeah

joshk's picture

I don't know that there's a better way to do collaborative AMI development. The only immediate idea I had was to start getting some version-controlled place together with all the scripts and the like. This doesn't help w/the image dev so much, but it would be a big win for people looking to tinker.

Feel free to ping me directly (or post a new group discussion) when you've got something. I just did a "hello world" level test with user-data, and am definitely looking forward to tackling harder problems like including credentials files, etc. (which I think is needed in order to get IPs, etc.)

http://www.chapterthree.com | http://www.outlandishjosh.com

Wider infrastructure?

kbahey's picture

Well, that brings up an interesting thought that I've had for some time on Mercury.

Is there a way to decouple its development from Amazon?

For example, your Varnish configuration, my configuration for fcgid, etc.
Is there a way to have a project for these on drupal.org, for example? A pool of optimized configurations for various components.

Also, perhaps a generic Appliance like this one http://www.turnkeylinux.org/appliances/drupal6

Tying this to Amazon is kind of a bummer for me and those who do not like/use them.

Drupal performance tuning, development, customization and consulting: 2bits.com, Inc..
Personal blog: Baheyeldin.com.


That was also a question for me

omega8cc's picture

...when I first found an appliance on TurnKey Linux and next found Project Mercury.

If Drupal is still more oriented toward devs than end users, then adding this Amazon monster here makes it look like "for devs only!" stuff.

In the past I have seen an interesting example of where that approach leads - see the history of http://openacs.org.

Nothing funny.

~Grace

-- Turnkey Drupal Hosting on Steroids -- http://omega8.cc

stability vs. portability

greggles's picture

In theory I really like the idea of using the "user data" scripts instead of an AMI. We put all the apt-get, all the configuration, etc. into a script and run that on top of a base Ubuntu image instead of using a locked-in-stone image. The benefit is greater portability, and it "should" run just fine whenever Alestic/Canonical releases new images (like they did last week, including a security fix that, IMO, isn't really super important, though the "security" nature may make people think it is). This also makes it easy for someone else to collaborate on the script AND inspect it to make sure that it's only doing things that you like.

In reality I think having a specific image is much easier to maintain. You don't have to worry about packages changing underneath you and breaking your script. Given that maintainer time is usually at a premium, anything that makes it easier to maintain is important.

I'm split on the right path to follow, but certainly hopeful for this project in general.

--
http://growingventuresolutions.com | http://drupaldashboard.com | http://drupal.org/books

At this point in time

SeanBannister's picture

At this point in time creating an AMI is the easiest way to test and develop the implementation, but in the future we could look at creating a "user-data" script or some other method to achieve portability. From my experience they are a bit of a pain to maintain.

One of the issues that we'll

SeanBannister's picture

One of the issues that we'll need to deal with in the future is that when a new version of Ubuntu is released, a new Amazon AMI needs to be created, which would include moving all of the configuration files over. This is the reason I originally configured my personal AMI with just user-data, and I think a similar concept of writing a script that performs most (if not all) of the configuration is a possibility. This script could then be run on any Ubuntu install and possibly other Linux distros.

One of the major advantages of this is moving instances between cloud providers such as The Rackspace Cloud (Mosso).

Cloudkick has a very nifty

kyle_mathews's picture

Cloudkick has a very nifty feature that lets you (or soon will let you) migrate Amazon AMI images to other providers. Apparently they have migration to Slicehost working and soon will to the Rackspace Cloud.

See http://www.youtube.com/watch?v=XZXOBjs2BEg and https://www.cloudkick.com/features

That would lessen the worry about maintaining only Amazon AMI images.

Kyle Mathews

Yeah I'd actually signed up

SeanBannister's picture

Yeah, I'd actually signed up for an account a while back but was a little skeptical about how it'd all work. Looks like a good service.

VCS++

joshk's picture

This is a good/important question.

My goal here isn't to restrict any of the techniques to Amazon, but it was the best/easiest way for me to develop, and it can help people who want to try out running services on EC2 but don't know where to start.

I think it makes a lot of sense to find a nice version-controlled place where we can begin storing useful vanilla configuration files. Whether or not this gets used on Amazon or real hardware, we can share best practices and tips.

http://www.chapterthree.com | http://www.outlandishjosh.com

Scripts sound fun

langworthy's picture

I'd be happy to contribute some time to working on configuration scripts. Mercury is a very exciting project and EC2 seems great but it's not something I have access to.

I'm moving my personal projects over to linode.com soon and was planning on using Ubuntu or Debian.

I have no desire to slow development of Mercury by moving potential resources elsewhere, but this seems like something that could be done in tandem. Sharing of tools used and configuration and such.

What's the next best step for this? New post in another group? Round up some people first?

I need to post a recipe

joshk's picture

It's on my plate to post a recipe for how I got all this going. Basically a more step-by-step description (similar to the excellent walkthrough the Aegir group did) that anyone can follow to set this up from scratch on an Ubuntu system.

http://www.chapterthree.com | http://www.outlandishjosh.com

Managed to pull out all of

SeanBannister's picture

Managed to pull out all of my custom user-data and generate a generic configuration that anyone can use.

You do need to include your credentials (X.509 Certificate, Private Key) before you can use the ec2-api-tools. So currently the configuration includes:

  • X.509 Certificate
  • Private Key
  • Attach an Elastic IP
  • Mount/Create EBS Volumes
  • Hostnames - For /etc/hostname and /etc/hosts (Does this need to integrate with Postfix? See the sketch below.)
  • Set MySQL Password - And it updates sites/default/settings.php
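
The hostname piece, for instance, could be as simple as this (the variable name is invented for illustration):

# Apply a hostname passed in via user-data
echo "$MERCURY_HOSTNAME" > /etc/hostname
hostname "$MERCURY_HOSTNAME"
grep -q "$MERCURY_HOSTNAME" /etc/hosts || echo "127.0.1.1 $MERCURY_HOSTNAME" >> /etc/hosts
# (Postfix's myhostname setting may want the same value -- hence the question above)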

I just haven't had a chance to bundle up the instance yet.

Compressed file?

joshk's picture

With all of that, are we looking at a specially formatted (directory structure, etc.) zip file? IIRC there are limits on the total size of user-data, as well as obviously only having one place to put it, but I've heard that you can supply binary info -- e.g. a tgz or zip file -- which can then be read out, decompressed, and processed by the boot script.

Also, are there any security/privacy concerns with keeping the certs on the instance?

http://www.chapterthree.com | http://www.outlandishjosh.com

There's a 16k limit on

SeanBannister's picture

There's a 16k limit on user-data, which isn't much, but it's plenty for the time being if we're just passing some variables. I've been close to the limit a few times and ended up trimming out comments in my code before supplying the user-data. I'm interested in allowing users to supply tgz, bz2, or zip binary data, but there's still going to be a limit. Another option would be allowing users to supply a URL and doing something like:

# Place the X.509 Certificate on the server
cat > /root/.ec2/cert.pem << EOF
-----BEGIN CERTIFICATE-----
-----END CERTIFICATE-----
EOF

# Place the Private Key on the server
cat > /root/.ec2/pk.pem << EOF
-----BEGIN PRIVATE KEY-----
-----END PRIVATE KEY-----
EOF

# Supply user-data from an external source
user-data="http://example.com/user-data.sh"
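
Building on that last idea, the boot script could fetch the user-data itself and, if it's just a URL, pull down and run the real script. The metadata endpoint is the standard EC2 one; the rest is a sketch:

UD=$(curl -s http://169.254.169.254/latest/user-data)
case "$UD" in
  http://*|https://*) curl -s "$UD" | sh ;;   # user-data was just a URL
  *)                  echo "$UD" | sh ;;      # user-data was the script itself
esac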

I see no problem with keeping the certificates on the instance. Just need to make sure the permissions are set correctly and remember to remove them when you bundle an instance :)

I tried to register my instance last night but was getting "Client.AuthFailure: User is not image creator". I'll sort it out tonight and release it for testing.

Hm...

joshk's picture

I am seeing it as "other Linux" which could be due to the number of generations it's been through, or some other meta-data that I'm setting incorrectly.

Not sure what-all AWS looks at to make this determination, but it's definitely something to solve before release.

Screenshot:

Hmmm, today it's showing as

SeanBannister's picture

Hmmm, today it's showing as Other Linux. I found someone having a similar problem but no answers http://groups.google.com/group/ec2ubuntu/browse_thread/thread/6a8d0fa9f0...

Just noticed the system log

SeanBannister's picture

Just noticed the system log says:
Linux version 2.6.21.7-2.fc8xen (mockbuild@xenbuilder1.fedora.redhat.com) (gcc version 4.1.2 20070925 (Red Hat 4.1.2-33)) #1 SMP Fri Feb 15 12:39:36 EST 2008

Supporting Work: 64-bit, capistrano, solr, jmeter

ccat's picture

We're building from the Mercury effort for a couple of our current projects. We're hoping to contribute back several of our supporting pieces. I can't commit to absolutely everything here because the schedule will drive a lot of it, but good candidates seem like:

a) our Apache Solr setup (on a separate AMI)
b) cluster management tools (including Capistrano, svn; also on a separate AMI)
c) a 64-bit image of the core Mercury AMI
d) our JMeter setup (1+ AMIs)

Along these lines, a couple of questions for the group...

What would people most like to see? Anecdotal feedback from this group will help us set our priorities for what to contribute back. Also, does anyone have a base capistrano setup they'd be willing to share? esp. for wrangling EC2 instances? This is a new tool for us.

jjaa's picture

Full disclosure: I don't know much about multi-server scaling in a dynamic environment, but what I would most like to see is more information on being able to dynamically scale a Mercury-based site as server load increases.

That's the holy grail! :)

joshk's picture

That's the holy grail of all this: high-performance/high-availability cloud-scalable Drupals.

On the DB front, I have been collecting docs and noodling on a best-practice install there. Once we have a script to set up a high-performance DB instance, you can pretty easily scale horizontally using the built in master/slave replication support in Pressflow.

Also, if someone gets the Advanced Cache module working, that will help out as it provides a good way to redirect many of Drupal's object queries (e.g. node_load, user_load) into a memcached instance, which is nice and fast and also takes load off mysql.

http://www.chapterthree.com | http://www.outlandishjosh.com

How to use database replication on PressFlow?

alexis's picture

Hey Josh, thanks a lot for your work and for sharing such useful knowledge.

The following comment is the same question I sent you via email earlier, but I think it would be better to discuss it here for the benefit of the community.

I've moved a Drupal 5 site to PressFlow 5 and everything seems to be working correctly. As many have already pointed out there are no schema changes and it's just code replacement.

My current setup consists of three small instances on EC2: one Apache server and two MySQL servers, all running Ubuntu 8.04.

Now I need to setup PressFlow 5 to use the two MySQL servers in a master slave setup. I've already done the MySQL setup part and I can confirm that data entered in the master is correctly replicated to the slave. I used a simple test database and entered some simple SQL on the MySQL console and later created some nodes in PressFlow and could see the data being replicated from the node table in the master to the slave. I'm 100% sure the data is being replicated at the MySQL level.

My problem is I don't know how to modify settings.php in PressFlow 5 to use the master and slave as it's supposed to. There's no INSTALL.txt or any other document explaining this.

If I understood correctly PressFlow should run all INSERT, UPDATE and DELETE operations on the master and all the SELECT operations on the slave, and I could later add more slaves to share the load produced by SELECT queries.

I tried updating my settings.php using the suggestions on this article about Drupal and database replication but it seems that PressFlow 5 still uses the master db server for all operations.

I enabled logging on both database servers to see what queries were hitting each server and I've confirmed that PressFlow is not using the slave server at all.

Any suggestions? What should I put on settings.php to tell PressFlow how to use my master and slave (or 'slaves') MySQL servers?

Once again, thank you for your help.

Alexis Bellido
Ventanazul: web development and Internet business consulting shouldn't be boring

More about using a MySQL slave server

alexis's picture

I ran a few more tests and noticed that my changes in settings.php were being recognized by PressFlow 5:

$db_url = 'mysql://user:pass@1.2.3.4/dbname';
$db_slave_url = 'mysql://user:pass@5.6.7.8/dbname';

And I saw two functions in database.mysql.inc and database.inc: db_query_slave and db_query_range_slave.

But still my database logs were showing that all operations from PressFlow were hitting just the master server. I decided to grep a little and found that no module was calling either db_query_slave or db_query_range_slave.

What I've done is choose a few functions, node_load in node.module and drupal_lookup_path in path.inc, and replace occurrences of db_query with db_query_slave and db_query_range with db_query_range_slave. After reviewing my database logs, I now see the SELECT queries corresponding to these functions hitting the slave, and my test site seems to be working correctly.

Is this the correct way of enabling SELECT queries on the slave from PressFlow 5 or am I missing something very obvious?

Thanks!

Alexis Bellido
Ventanazul: web development and Internet business consulting shouldn't be boring

Indeed

joshk's picture

In Pressflow-6, the node_load and pager queries are directed to slave servers. Pressflow-5 implements the slave connection pool, but doesn't utilize it.

It appears the places to make an update (based on the current Pressflow 6) would be:

  • node_load
  • pager_query

I also think you could add a few others like user_load(), and maybe drupal_lookup_path(). My guess is that as we identify more places where we are making read-only calls that can tolerate being a small fraction of time out of date, this list will grow.

http://www.chapterthree.com | http://www.outlandishjosh.com

Testing now

alexis's picture

Thanks a lot Josh, I'm testing now to see if everything works as it should, and I will modify a few other functions to run their SELECT queries on my slave database as required.

What about this scenario, which I haven't tested yet:

  1. User adds or edits a node; this involves an INSERT or UPDATE, so it goes to the master database server.
  2. Right after hitting submit the node is inserted/updated and the user is redirected to the node's page, for this node_load a SELECT will run and will grab the data from the slave database server. My question is: will the replication from master to slave be quick enough to show the recently inserted/updated data (to master) on this node_load (from slave)?

Regards.

Alexis Bellido
Ventanazul: web development and Internet business consulting shouldn't be boring

Best effort is pretty good

joshk's picture

And the answer is we can't know for certain. MySQL's replication is on a "best effort" basis, meaning there is a chance that two people attempting to edit the same node at about the same time could, in theory, cause this kind of race condition. However, for most cases it's also pretty fast: you'd have to have really unlucky timing to load a node that'd just been saved before the update was replicated.

David may have some input here having dealt with these issues before, but I doubt there's a bulletproof answer. Certainly if your replication is lagging this is a hazard, another reason to be sure that the slave db credential can't accidentally write data and mess things up.

However, if we're talking about two servers sitting next to each other, the lag should be a small number of milliseconds at most.

If you're running a bank or a nuclear power plant, this is probably an unacceptable hazard. However, for most applications and use-cases, the potential risk here is very remote, and the resulting damage from a worst-case scenario (one node revision lost) may also not be mission critical. There are production sites out there that use this kind of replication, and the mainline use-case of a user seeing their own update is definitely one that's supported.

http://www.chapterthree.com | http://www.outlandishjosh.com

PHP cURL library for Simpletest not baked in?

jjaa's picture

I'm using the Project Mercury AMI, but at least one piece doesn't seem to work. Simpletest carps like so when I try to enable it:

"Simpletest could not be installed because the PHP cURL library is not available. (Currently using cURL Not found)"

cURL is installed at /usr/bin/curl, but it looks like (php -i | grep curl) PHP wasn't compiled with cURL support. Any ideas?

Just add it ...

kbahey's picture

From a root prompt do this:

aptitude install php5-curl

That should add curl to PHP.

Drupal performance tuning, development, customization and consulting: 2bits.com, Inc..
Personal blog: Baheyeldin.com.


Will add

joshk's picture

I will add this in advance of the next release. Thanks!

I feel silly...

iaminawe's picture

I cannot seem to get this seemingly straightforward setup to work.
I launch an Amazon instance as per the video on www.getpantheon.com; I have tried both the alpha 4 and 5 versions of Project Mercury.
Once the instance is up and running, I copy the public DNS address into my browser.
The page loads for a while then times out... every time... I have yet to see the Project Mercury start page.

What could I be doing wrong?

Thanks

security groups

greggles's picture

You may need to configure the security groups (aka firewall rules).
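
If you're using the command-line tools, opening the two ports Mercury needs looks roughly like this (the group name is whatever group you launched the instance into; ec2-authorize ships with ec2-api-tools):

ec2-authorize projectmercury -P tcp -p 80 -s 0.0.0.0/0   # web traffic to Varnish
ec2-authorize projectmercury -P tcp -p 22 -s 0.0.0.0/0   # SSH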

Thanks

iaminawe's picture

I added TCP port 80 to the projectmercury security group I set up and it works great... at last.
I also have SSH enabled on port 22.

Are there any other ports or protocols I should have enabled on this security group so that the Project Mercury instance functions properly? At the moment only those two are enabled on the security group.

Sorry but not so clued up on this server admin stuff yet but eager to learn more

This is such exciting stuff and I am stoked to finally have an instance of drupal running on my own little cloud.

Are there any instructions on backing this instance with Elastic Block Storage so that you can shut down an instance and restore it to the state you set it up in?

Thanks
Gregg

80 and 22 are enough to get started

joshk's picture

HTTP and SSH are all you need to get rolling :)
