Highly-secure Drupal installation

themselves

Hi everyone,

What we've been doing around here is trying to put together an implementation framework that would deliver the most secure Drupal possible. Our idea so far is this:

The first layer is a Varnish cache - its HTTP port is the only publicly exposed port in the entire infrastructure.

The second layer is a Drupal install with a read-only MySQL user (read-only for everything except the session, watchdog and cache tables, of course), so that even if someone elevates permissions in Drupal, they are unable to write anything else to the database.

The third layer is the actual editing Drupal install - its MySQL user has write permissions, and its database acts as the MySQL master. All table changes (other than the session, watchdog and cache tables, of course) are replicated to the read-only layer via MySQL replication, and the replication user is the only one with the ability to write to the read-only DB.
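In MySQL terms the grants would look something like this (a rough sketch; table names assume a stock Drupal schema, and the hostnames and passwords are placeholders):

    -- Read-only layer: SELECT everywhere, writes only where Drupal
    -- insists on them.
    GRANT SELECT ON drupal.* TO 'drupal_ro'@'web1' IDENTIFIED BY '...';
    GRANT INSERT, UPDATE, DELETE ON drupal.sessions TO 'drupal_ro'@'web1';
    GRANT INSERT, UPDATE, DELETE ON drupal.watchdog TO 'drupal_ro'@'web1';
    GRANT INSERT, UPDATE, DELETE ON drupal.cache    TO 'drupal_ro'@'web1';
    -- ...and likewise for each cache_* table, since table-level
    -- grants take no wildcards.

    -- On the master: the account the slave connects as.
    GRANT REPLICATION SLAVE ON *.* TO 'repl'@'slave_host' IDENTIFIED BY '...';

The locally-written tables would be excluded from replication in the slave's my.cnf, e.g. replicate-ignore-table=drupal.sessions and replicate-wild-ignore-table=drupal.cache%.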

Clearly everything is firewalled off bar that one port on the Varnish cache.

So, this is our thinking so far. One clear problem is files - uploaded files will need to be rsynced back to the read-only server. Also, it pretty much eliminates any chance of the client moving to a community-driven site, as there's no way for front-end users to write to a datastore.
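For the files problem, the push could be as simple as a cron-driven rsync from the editing instance (a sketch; paths and hostnames are placeholders):

    # Push uploads from the editing instance to the read-only front end.
    rsync -az --delete \
        /var/www/drupal/sites/default/files/ \
        frontend:/var/www/drupal/sites/default/files/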

Anyone else got any bright ideas on how to create a brutally secure Drupal install?

Comments

sounds pretty good to me

greggles

This sounds pretty good to me. I think you'd also want to block certain URLs (like /user* and /admin*) from being served through Varnish.
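Something like this in the VCL would do it (a sketch in Varnish 2.x syntax; it assumes editors reach the editing instance directly rather than through the cache):

    sub vcl_recv {
        # Never serve login or admin paths from the public cache.
        if (req.url ~ "^/(user|admin)") {
            error 403 "Forbidden";
        }
    }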

Why does the session table need to be writeable on the read-only side?

reflected XSS

grendzy

This setup doesn't address reflected XSS attacks, which are reported to be the most common type (http://en.wikipedia.org/wiki/Cross-site_scripting#Non-persistent), although this hasn't been my experience. Perhaps it's because of the propensity of inexperienced PHP developers to do things like echo $_SERVER['SERVER_NAME']; (which even the PHP manual suggests!).
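For illustration, the risky pattern and its fix (SERVER_NAME typically derives from the client-supplied Host header, so it's attacker-influenced):

    // Vulnerable: reflects whatever the client sent in the Host header.
    echo $_SERVER['SERVER_NAME'];

    // Escape everything that goes back to the browser; in Drupal,
    // check_plain() does the same job.
    echo htmlspecialchars($_SERVER['SERVER_NAME'], ENT_QUOTES, 'UTF-8');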

Reflected attacks don't require writing to the data store, so the read-only database doesn't help here.

Similarly many XSRF attacks wouldn't require writes.

The best defense against non-persistent attacks would probably be to use a proxy or firewall to filter requests. mod_security is one such tool.
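A crude example of the idea, as a ModSecurity 2.x rule (illustrative only, not a production filter):

    SecRuleEngine On
    # Reject any request whose parameters contain a script tag.
    SecRule ARGS "<script" "deny,status:403,log,msg:'possible XSS'"

Real rulesets (the Core Rule Set, for example) are far more thorough, but this is the shape of it.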

Use HttpOnly cookies.
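In a Drupal context that's a one-liner in settings.php (a sketch; needs PHP 5.2+ for the HttpOnly flag):

    // Keep the session cookie out of reach of page JavaScript.
    ini_set('session.cookie_httponly', 1);
    // If the whole site runs over SSL, mark it Secure as well.
    ini_set('session.cookie_secure', 1);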

If you can, take steps to secure the administrators' browsers and operating systems (firewalls, antivirus for Windows, etc.). If the superuser's laptop gets owned, you've lost everything. Disable Flash; run with the NoScript Firefox extension (you wanted brutal, right?).

That's a pretty good point -

themselves

That's a pretty good point - if you wanted to be hyper-paranoid, there should be a very strictly limited number of machines even capable of reaching the editing instance, and those should be locked down just as tightly as everything else. I suppose you could take it to military levels and post an armed guard beside the single machine capable of accessing the editing interface.
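On the firewall side that could be as blunt as this (a sketch; the addresses are placeholders from the documentation range):

    # Editing instance: accept web traffic only from the editors' machines.
    iptables -A INPUT -p tcp --dport 443 -s 192.0.2.10 -j ACCEPT
    iptables -A INPUT -p tcp --dport 443 -s 192.0.2.11 -j ACCEPT
    iptables -A INPUT -p tcp --dport 443 -j DROP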

As for XSS, it's the sort of problem that can't really be tackled at the infrastructure level; it's more of a code-level thing. As best practice we filter everything that gets displayed back on the screen anyway, so as long as we're diligent, and there's code auditing and intrusion testing at the end, the resulting site should be pretty damn secure.

Additional

rjbrown99

At the Drupal level, I'd also ensure you are using SSL/TLS with SecurePages and SecurePages Prevent Hijack to encrypt, at the very least, user authentication credentials.

Taking it down a level below Drupal, I'm also using -

PHP Suhosin
Apache mod_security

... and a server infrastructure aligned with the baseline standards set by The Center for Internet Security. They publish hardening guides for the operating system as well as major server components such as MySQL and Apache.

Taking it one step further, I also implement some additional controls from the PCI Data Security Standard. They look for a few things that are not directly covered by the CIS standards.

I also do some fun things like automatically feeding certain mod_security alert triggers into ipchains rules. Here's an interesting link about this. For example, I get hit with a lot of automated dfind and phpMyAdmin scans, and the offending hosts are auto-blacklisted when the rule triggers, to keep the volume of noise down. You have to be careful to block only on unambiguously bad requests that normal users won't trip.
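The glue is essentially a log watcher along these lines (a sketch: the rule ID is hypothetical, and it's shown with iptables rather than ipchains):

    # Watch the Apache error log for a specific ModSecurity rule firing
    # and drop the offending source address on the spot.
    tail -F /var/log/httpd/error_log |
      grep --line-buffered 'ModSecurity: Access denied.*id "900001"' |
      sed -un 's/.*\[client \([0-9.]*\)\].*/\1/p' |
      while read ip; do
          iptables -I INPUT -s "$ip" -j DROP
      done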

I follow all of that up with a Snort sensor to watch for other malicious attacks that may not be blocked at the other levels. The mod_security+ipchains stuff was really to cut down on the noise that Snort sees.

At the non-geek process level, you may also want to consider a threat modeling exercise. Microsoft has been a strong thought leader in this space. Here's a link to their free tool and information.

Google also has a nice free tool called Skipfish to perform some web-specific application security checks. It's here. They also released Ratproxy, which is another way to get some good intelligence on how your app is performing from a security perspective.
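A typical Skipfish run looks something like this (a sketch; the wordlist path and target are placeholders):

    # Crawl and probe a staging copy, writing an HTML report.
    ./skipfish -W dictionaries/minimal.wl -o /tmp/skipfish-report \
        http://staging.example.com/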

usefulness of automated tools

greggles

Aside from the theory, have you really gained value from the automated tools?

I'm somewhat disenchanted with skipfish since it seems to take a long time to configure, run, and analyze. But maybe I'm just doin it rong.


Not really

rjbrown99

Not really. I also have commercial licenses of IBM AppScan and Qualys, and neither one ends up finding anything of value in an automated fashion. Not because they aren't good tools, but because Drupal is smart enough to work around most of that at the core level. For example, an IBM AppScan of a complex Drupal site with multiple roles and close to 100 modules came back with nothing. Some of that is Drupal, and some of it is good OS, database, and web security practices. (I would still always recommend automated scanning for a fully homegrown app, though.)

Fuzzing can be useful and we also use that in the dev cycle. For 'real' results I mainly rely on grey-box testing using an intermediate proxy, with some amount of code review to help zero in on the places where we might have missed something.
