Awesome performance on a cheap VPS

Garrett Albright's picture

Aside from being a developer, I'm also in charge of managing the web server for our company. The "server" in this case is actually a relatively cheap VPS hosting account - US$30 a month. Currently, it hosts over fifty Drupal sites in a multisite configuration, and it's pretty dang performant, if I do say so myself: it responds faster than most shared hosting services, and most sites on it easily score 90 or higher on YSlow (with points mostly docked for not hosting images and other assets on a CDN). I'll share some of the things I've discovered working on this server, and hopefully others can offer their tips as well.

I think the most obvious limitation you'll hit when using a cheap VPS is that of memory. The plan we're using for work has a mere 512MB - and the plan I use for my personal projects is a step lower, US$20 a month for 256MB.

Living with limited memory obviously means limiting the stuff running on that server to the bare minimum. That might take some people out of their comfort zone. Consider cPanel and its ilk. Pretty? Easy to use? Yes, but it's all just a pretty - and RAM-consuming - front for standard command-line Unix tools and configuration file editing. Learn to use those tools and edit those configuration files, and cPanel never needs to be installed. If this paragraph scares you, you might as well stop reading now…

When I look at many Linux-based VPS offerings, I'm surprised by how much crap is installed by default - Apache, FTP and mail servers, spam-blocking daemons, etc., etc., all just ready and waiting to chomp up your RAM (and disk space as well, which may or may not be a concern). Fooey on that. The VPS company I'm using (which I won't mention now, to avoid looking like an advertisement) has its accounts configured with just the operating system installed and SSH configured so I can connect to it - nothing else is installed or configured, the expectation being that I can pick and choose what to install myself. I strongly recommend you find a VPS provider which can provision a bare-bones server for you like this.

So what do I install? An FTP server? No; I really only use SSH to connect to the server, so I don't need crappy, insecure FTP running on this thing. Mail servers and spam daemons? No, this is a web server; I'm not going to host mail on it. Etc., etc. All you really need installed on a server to run Drupal is a web server, PHP, and MySQL. Other things are indulgences which should be considered and reconsidered before being installed.
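On a Debian- or Ubuntu-flavored VPS, that bare minimum amounts to a one-liner. A sketch, assuming era-typical package names (swap in nginx if you prefer it over Lighty):

# Debian/Ubuntu; package names vary by distribution and release
apt-get update
apt-get install lighttpd php5-cgi php5-mysql mysql-server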

…And soon, even MySQL won't be necessary. Drupal 7 makes SQLite a first-class database layer for Drupal. Since SQLite runs as part of the PHP process, there's no need to run a separate memory-munching daemon all the time to use it. Besides using less memory, SQLite is also faster for most read operations than MySQL. (chx gave an interesting presentation about the unfortunately-named MongoDB during DCSF. One of the points against using a dedicated SQL server was that modern operating systems already take care of things such as user permissions and disk swap, so why does this sort of thing need to happen in the application layer? Though SQLite is obviously not a NoSQL database, it follows some of the same principles in that regard.) Unfortunately, I suspect many people will still use MySQL out of force of habit even after D7 becomes established, but I can't wait until the day when I can uninstall it for good from my servers.

Let's talk about the web server daemon. Note I didn't say "Apache," because you shouldn't use it! Study nginx and Lighttpd, and choose one to use instead. And I'm not saying to run one of these "light" daemons to serve static files in front of Apache; it's ridiculous to run two web daemons at the same time when one - the lighter and faster one (which won't be Apache) - is fully capable of doing everything, including the PHP stuff. I prefer Lighty, as its configuration file syntax is a thing of beauty, but nginx has a lot of momentum and will serve you well, at the cost of being more difficult to configure, in my opinion. Check out the Lighty and nginx groups here on g.d.o for more information on how Drupal runs on these daemons. (Even on heavy iron, I think it's ridiculous for any server to still be running Apache nowadays, when clearly superior alternatives exist and are mature.)
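For the curious, a minimal Lighty vhost for Drupal looks something like this - a sketch only, with a hypothetical hostname and document root; see the g.d.o groups above for battle-tested recipes:

# Requires mod_rewrite; clean URLs go through index.php
$HTTP["host"] == "example.com" {
    server.document-root = "/var/www/example.com"
    index-file.names     = ( "index.php" )
    # Anything that isn't a real file on disk gets handed to Drupal
    url.rewrite-if-not-file = ( "^/(.*)$" => "/index.php?q=$1" )
}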

Let's talk caching. When you're limited on RAM, you're limited on options… However, on this server, I'm still able to set aside 96MB for opcode caching with XCache. A few days after a server restart, we'll start to see OOMs (out-of-memory errors, which occur when the opcode cache wants to cache something new but doesn't have enough empty memory available to do so), and we can't do any variable caching. But this is enough. A multisite configuration is great for opcode caching: since Drupal files aren't repeated in several places on disk, they aren't repeated in several places in the opcode cache either. The most commonly-hit file in the opcode cache is content-field.tpl.php, a theming file for CCK, and we only have to cache one copy of it for the entire server instead of one copy for each of the 50+ sites. This is all to say that using a multisite configuration to host multiple sites on a cheap VPS is a very good idea, particularly if you're using an opcode cache.
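For anyone unfamiliar with multisite, it's one shared codebase with a per-site directory under sites/ - roughly this layout, with hypothetical domains:

drupal/                        # one codebase, one copy in the opcode cache
  sites/
    all/modules/               # contrib modules shared by every site
    example-client-a.com/
      settings.php             # per-site database credentials
      files/
    example-client-b.com/
      settings.php
      files/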

On the other hand, there's Boost, which caches pages as static HTML files that the web server can serve directly, providing a measurable speed-up without needing to run another memory-eating daemon. Use it if you can!

I mentioned that services beyond PHP, MySQL and Lighttpd/nginx are indulgences, but I've let us have a couple. Denyhosts is a script which tracks SSH login attempts and will block IP addresses which repeatedly attempt and fail to log in to the server via SSH - a classic symptom of a dictionary attack. It's a Python script which necessitates running the Python runtime 24/7, which takes up a measurable chunk of RAM, but I think it's worth it for the touch more security it provides. We also run ntpd, which keeps the server time correctly in sync with time servers on the internet - time seems to wander off otherwise, which is apparently a common problem on virtualized operating systems. But the amount of memory this uses is negligible. And… (sigh…) despite what I said above, we actually do run an FTP daemon on this server for the wisenheimer clients who demand FTP access to their hosting accounts… We don't offer it by default and lock it down as much as we can, though, and I certainly don't have it installed on my personal VPS. Basically, the bare minimum should be installed to do what you need to do. Besides improving performance, this also helps keep your server secure by keeping the attack surface small.

The rest is stuff which applies to heavy iron servers as well - enable CSS and JS aggregation, configure your web server to serve gzip-encoded content, and so on.
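In Lighty, for example, the gzip part is just mod_compress - a sketch; nginx's gzip module is the equivalent:

server.modules    += ( "mod_compress" )
compress.cache-dir = "/var/cache/lighttpd/compress/"
compress.filetype  = ( "text/html", "text/css", "application/x-javascript" )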

Is anyone else hosting lots of stuff on cheap hosting plans like this with some tips to offer? Or does anyone planning to do so have any questions?

Comments

Good stuff. I stopped using

brianmercer's picture

Good stuff.

I stopped using Denyhosts/Fail2ban. Moving sshd away from port 22 to some random high-numbered port stops 98% of bot traffic. Disabling sshd logins for the root user stops another 1%. Key files can be used for regular, easy SSH login. Filezilla can also use key files with no password. (For key files with passphrases, you have to use pageant/gpg-agent.) A 12-character backup password with numbers, upper- and lowercase letters, and symbols is sufficient to prevent a successful dictionary attack.
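The relevant sshd_config lines are short - a sketch, with an arbitrary port number:

# /etc/ssh/sshd_config (excerpt)
# Move off port 22 to dodge the bots
Port 22123
# Stop root dictionary attacks cold
PermitRootLogin no
PubkeyAuthentication yes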

Exclusively public key pair authentication

Rainy Day's picture

Not sure what you mean by a backup password, but I suspect you mean a combination of public key pair authentication and username/password challenge authentication. On all my servers, I lock out username/password challenge altogether, exclusively using public key pair authentication (which is orders of magnitude more secure than username/password authentication, even with the strongest of passwords). That also eliminates the need for Denyhosts/Fail2ban, because a dictionary attack will never be successful.

There's really no point to having a "backup" username/password authentication on the server. The average person doesn't stand a chance of remembering a truly secure password. So it's best to simply remove that weak link, and use public key pairs exclusively for authentication.

One can add an extra layer of security by adding a password to the private key, but that's not a backup authentication method. It simply means that a password challenge occurs on the client side before the private key can be used to authenticate with the server. It's essentially an authentication layer added to the client, in case the private key falls into the wrong hands, and in no way adds extra security to the server, nor is it an alternate way to authenticate with the server.
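Concretely, that's a passphrase-protected key on the client plus a couple of lines on the server - a sketch, with a hypothetical host:

# Client: generate a key (you'll be prompted for the passphrase) and install it
ssh-keygen -t rsa -b 4096
ssh-copy-id user@server.example.com

# Server: in /etc/ssh/sshd_config, lock out password challenges entirely
PasswordAuthentication no
ChallengeResponseAuthentication no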

I find using an iptables rule

threading_signals's picture

I find that using an iptables rule to blacklist, blocking root access, using a ten-plus-character name, and having a ten-plus-character strong password is enough to give me the time to stop brute-force attacks. This means learning two ten-plus-character passwords to get root access, which takes at least several days to memorize. Public key pair authentication means having to jump through hoops to secure a laptop in case of theft. Blocking keyloggers, an encrypted drive, and a sandboxed virtualized image are about the only other steps worth considering - it's unwarranted unless I write a lot of valuable code or come across client concerns. I wonder if the VPS logon is the weakest link, though.

But what is the likely threat?

Rainy Day's picture

Public key pair authentication means having to jump through hoops to secure a laptop in case of theft. Blocking keyloggers, an encrypted drive, and a sandboxed virtualized image are about the only other steps worth considering - it's unwarranted unless I write a lot of valuable code or come across client concerns.

You're rationalizing using a far weaker online security system for fear of losing control of your computer? Either you have a lot of computers stolen, or you're obsessing over an unlikely scenario, while trivializing a real, likely threat.

Simply passphrase protect your private key, then change the public key on the target system in the event of a stolen laptop. Or for greater security, put your passphrase protected private key on a USB flash drive.

As for the Windows key-logger problem, simply use a Mac or Linux box (where key-loggers aren't much of a threat).

I wonder if the VPS logon is the weakest link, though.

It can be. My VPS provider requires public key authentication for this, so not a problem for me.

+1 for keys with

dalin's picture

+1 for keys with passphrases.

The best access systems require at least one from each section:
1) something that you know (password / passphrase, challenge - response, etc.)
2) something that you have (pub/priv key, dongle, number generating fob, whitelisted IP address, etc.)
3) something that you are (retinal scan, fingerprint reader, etc.)

A pub/private key takes care of #1 and #2. For even better protection disallow root login and use sudo/su instead. Rarely is section #3 necessary.

Passwords are rarely a good idea. Most of us need to log in to dozens of different servers and services. It's unrealistic to memorize that many different strong passwords, which means they have to either be written down somewhere or stored in centralized secure storage - both of which defeat the point.

--


Dave Hansen-Lange
Director of Technical Strategy, Advomatic.com
Pronouns: he/him/his

"Most of us need to log-in to

threading_signals's picture

"Most of us need to log-in to dozens of different servers and services."

This is true, but I'm not dealing with too many sites at the moment. Logging on from a service that gives you a dedicated IP address takes care of #2; #1 can be solved with a laptop/corporate PC; and #3 by instant message over an encrypted channel to a third device, such as a phone, or some kind of authorized phone - but there are spoofing issues.

It sounds to me like you

dalin's picture

It sounds to me like you either work for a Swiss bank, or you might need to look at this more pragmatically.

Sure any combination of security hoops can be bypassed with enough effort. But does your chosen solution make it easier to make mistakes? Or if there's an emergency and you don't have your phone with you, will you be able to get into the system to fix the problem? Or if you get hit by a bus, will anyone else be able to get in?

--


Dave Hansen-Lange
Director of Technical Strategy, Advomatic.com
Pronouns: he/him/his

If keys with passphrases

threading_signals's picture

If keys with passphrases work for you and others, that's fine. I'm familiar with #1-3, and as for #3 (not having your phone with you), it's not strictly necessary for getting the risk factor down.

As for #2, a dedicated IP makes sense for me, since I don't have to keep too many logon credentials, but I'm not really sure how serious a threat spoofing is.

I've outlined what I think is the most robust alternative to using keys with passphrases, because stuff breaks down, gets lost, gets stolen, etc.

As for "will you be able to get into the system to fix the problem? Or if you get hit by a bus, will anyone else be able to get in?" ..those arguments are the same for passphrases with keys as well.

"trivializing a real, likely

threading_signals's picture

"trivializing a real, likely threat" - this depends on where you live as well. Both approaches can be looked at using math, and the added problem is if the laptop fails while traveling. rsyslog can provide emails using regex for auth.log files, to help deal w/ the problem, but that is an ids compared to an ips approach. RSA fobs and USB drives get lost too. How much security is enough security?

Once I thought about it, denying logon attempts by IP helps with the VPS problem as well, lowering the seriousness of brute-force attempts, I think. IPv6 is supposed to have more security abilities, but I'm not up to speed on it right now.

One added layer of security is to make the system send a text message which expires after 5 minutes and is needed for logon, but if the phone doesn't work, it's sort of back to the VPS logon, and having to be at the right IP address, unless you're using a third-party service which has a dedicated IP. I haven't explored it in detail, but there's open source stuff for sending text messages. Maybe have it send an IM as a fallback.

Anyway, malware can occasionally bypass a single anti-spyware program, so the keylogger problem is still a concern.

Using a Mac or Linux box isn't really a solution. There are sandbox kits out there, like Sandboxie (there are arguments against them too), but again, my environment isn't too valuable at the moment.

On the Internet, there are no "good" neighborhoods

Rainy Day's picture

I agree with you that where you live makes a difference in the probability of someone breaking into your house and stealing your computer, and that should influence the measures you take to secure it and the content thereon. But my comment about trivializing a real threat was in reference to allowing username/password authentication on SSH servers. On the Internet, there are no "good" neighborhoods, so it doesn't matter where you live. All SSH servers connected to the Internet, whether answering on port 22 or not, will eventually face attacks. So it's a good idea to put the best lock you can on the door.

Allowing username/password authentication on SSH servers is akin to securing a bank vault with a screen door. It might keep out the mosquitos, but not large vermin.

My math skills are rusty, but

threading_signals's picture

My math skills are rusty, but I've been meaning to figure out the IPv4 part. 33! is slightly better than 122-bit encryption, if I did my math right (10+10 name/pass with IP), but there are reserved address spaces and I forget how CIDR addresses work. So either a longer username and password and/or IPv6 (VPN over IPsec?) would make it equivalent to 128-bit encryption for mitigating brute-force attacks. Using a cellphone/IM scheme would add more "bits" as well. Stuff like opendns.com can be used for employer and employee, and to mitigate social engineering there could be a "honeypot" logon user for such a purpose. Fun to think about, if you like puzzles or math.

Wiki for Config?

kcoop's picture

This is really useful information for anyone wanting to create a budget server - thanks a lot! Any chance you'd be willing to share the details of your config? I'd be willing to flesh out a wiki for the install (à la Mercury).

Are there any particular

Garrett Albright's picture

Are there any particular details about the install you'd like me to share? I could probably go on all day about it…

The config settings for

kcoop's picture

The config settings for nginx, MySQL, and the opcode cache, primarily, for the different memory configurations you've used. The idea being that one could create an instruction sheet for the sysadmin-challenged: go get Ubuntu Lucid, apt-get blah blah blah, edit config files to change X to Y, etc. With the rise of the VPS, it seems like many people using shared hosting could benefit from a config at Linode or one of its ilk. I recently brought up a Mercury server on Amazon and loved the leverage. Amazon unfortunately starts at 1.7GB and $50/month; it'd be nice to have such a thing for smaller sites.

BTW, you haven't mentioned monitoring. Do you use anything like munin?

Well, I could just dump my

Garrett Albright's picture

Well, I could just dump my config files, but I don't think that's very useful since it's just a blob of info with little context, and the settings I've used won't be useful for everyone. (And it may expose security issues.) Copy-and-pasting is the worst way to learn something. But I'll be glad to answer more directed questions like "What settings did you use for X, and why?"

Our "monitoring" is currently very ghetto. We have a shell script on another server set to run every five minutes. When it runs, it uses curl to request a page on one of our sites. If it doesn't get a 200 response, it blasts emails and text messages. It's very basic, but it's worked so far.

Mercury is light?

luiginica's picture

Hi

What are the resource requirements for running Mercury? That is: MySQL (100MB), Varnish (100MB), Apache (60MB), nginx (30MB), Apache Solr (200MB), memcache (??)?
Those are my guesses - I don't have real data, which is why I'm asking you.
So... what do you think?

I'm not sure, but since

Garrett Albright's picture

I'm not sure, but since (AFAICT) Mercury is designed to run only on particular "cloud" hosts, I don't think it matters.

You'd probably get better

brianmercer's picture

You'd probably get better feedback from someone actively using Mercury in the Mercury forums; I've only played around with it.

Mercury doesn't run nginx. Nginx takes 4-5MB.

Tomcat (for apache solr) is at least 70MB, and I'm not sure how high it goes, probably depends on your data.

Mysql is about 40MB minimum, but if you stick a 256MB query cache on there, and you have that much data, it'll slowly fill up to about 300MB. Other buffers and caches can make it a bit bigger depending on how many tables you have and whether you're using InnoDB or MyISAM. How large you make the query cache depends on how much data you have and how much available memory you want to dedicate. I'm not sure of the default for Mercury with 1.7GB of RAM, but I think it's about 250MB. On my Linode 360 I have a 48MB query cache that doesn't fit all my data, and my mysql process maxes out at about 82MB, which is how much I've decided to give it. The query cache is nice if you're doing mostly reads. If you're writing to the database a lot, then the overhead of constantly updating the cache is a pain. This is another good reason to use memcache to offload the cache tables that do a lot of constant writing. Perhaps when we're further into D7 we'll all be using MongoDB or CouchDB or MariaDB, which lie between MySQL and memcache, and many of these MySQL/memcache issues will go away.
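For illustration, a low-memory my.cnf along these lines might look like this - a sketch only; the numbers are assumptions to tune against your own data:

# /etc/mysql/my.cnf (excerpt) - e.g. for a small VPS
[mysqld]
query_cache_size        = 48M   # read-heavy sites benefit most
key_buffer_size         = 16M   # MyISAM index cache
innodb_buffer_pool_size = 32M   # drop if you only use MyISAM
max_connections         = 30    # keeps per-connection buffers in check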

Varnish should also depend on your data. I think the minimum is about 70MB. Varnish should be pretty intelligent about using your available memory for caching, since that's its main purpose.

How about sending email?

jcisio's picture

How about sending email? Exim?

About SSH, I prefer to disable password login and use only public key auth ;)

I don't use XCache, but is it right that an opcode cache uses as much as 90MB? My eAccelerator setup (D6 with about 90 modules) uses only 20MB. I've heard of someone using as much as 48MB, but not 90MB (for a 512MB VPS, that's a lot!)

The key to VPS performance is super hardware, like SAS 15k RPM RAID 10... If they don't oversell, the performance is great (if you can live with low memory).

How about sending email?

Garrett Albright's picture

How about sending email? Exim?

Good ol' sendmail. But I don't think there's that much of a difference - whatever comes with the OS you're using should work fine.

About SSH, I prefer to disable password login and use only public key auth ;)

A good idea in some circumstances, but I want to be able to quickly jump on the nearest computer and SSH into the server as soon as I get the text message that web sites are down, regardless of where I am and whether I've set up my key on that computer or not.

I don't use XCache, but is it right that an opcode cache uses as much as 90MB?

Keep in mind this server is hosting over fifty Drupal sites, each with their own themes, plus a smattering of other sites. I think we're doing pretty good in that regard, actually.

You set the OpCode cache to

dalin's picture

You set the opcode cache to use however much you need. One of the machines that I work with runs the staging and production versions of a site that has a very large codebase. The site requires about 50MB in the opcode cache x 2 = 100MB.

I would argue that the key for Drupal performance (on anything) is usually caching, not hardware, and if it is hardware that you need, more memory can buy you a lot more than faster disks. If you are doing things right the disk should see very little activity.

--


Dave Hansen-Lange
Director of Technical Strategy, Advomatic.com
Pronouns: he/him/his

Hi Dave, Usually I agree

jdidelet's picture

Hi Dave,

Usually I agree, but how do you increase performance when you can't use the cache (when all information has to be displayed in real time)? To cache or not to cache - that is the question! :)


Julien Didelet
Founder
Weblaa.com

Drupal 7 makes SQLite a

haojiang's picture

Drupal 7 makes SQLite a first-class database layer for Drupal. Since SQLite runs as part of the PHP process, there's no need to run a separate memory-munching daemon all the time to use it. Besides using less memory, SQLite is also faster for most read operations than MySQL.

Sounds like SQLite is better than MySQL?
How about a large database, like 100,000 nodes or even 1,000,000 nodes? One of my websites has finally reached the 1,000,000-node level.

Is it just the memory reason that leads you to use SQLite, or something else?

Could you please give more details of the config?
If you're using Drupal 6, how do you switch from MySQL to SQLite? I want to test Drupal on SQLite.

Sometimes performance means pain for something else, like the data.
READ/WRITE/LOCK: could SQLite handle huge concurrent reads while a few writes happen? Did you come across any problems like this?
And what happens if you kill the FastCGI/php5 process - is there any danger of losing data?

Sorry, no SQLite for Drupal 6.

brianmercer's picture

Sorry, no SQLite for Drupal 6. We'll see how it does when Drupal 7 goes live.

How about a large database?

Garrett Albright's picture

How about a large database, like 100,000 nodes or even 1,000,000 nodes? One of my websites has finally reached the 1,000,000-node level.

I don't believe anyone knows at this point. Once D7 becomes more widely used, hopefully someone like you who has a site with that many nodes will do some real-world experiments and report back to the community.

Is it just the memory reason that leads you to use SQLite, or something else?

Not just that. SQLite stores its database in a single flat file, which means that backing up a database is as easy as copying a single file - bam. It also means it's easier to back up incrementally with a tool like rsync, or even manage with a VCS if it supports binary files. Also, the lack of a client/server architecture greatly simplifies "connecting" to a database to edit it; you can just type sqlite3 /path/to/database.db from the command line, and, assuming you have read and write access to the file, you're in - no need for usernames and passwords and server URLs. So besides being "lighter" resource-wise, SQLite databases are a lot easier to manage as well.

If you're using Drupal 6, how do you switch from MySQL to SQLite? I want to test Drupal on SQLite.

As brianmercer mentioned, Drupal 6 cannot use SQLite without some severe hacking. But if you install D7, you can use that to test Drupal with SQLite.
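For reference, pointing a Drupal 7 install at SQLite is just a settings.php entry - a sketch, with a hypothetical file path:

// sites/default/settings.php (Drupal 7)
$databases['default']['default'] = array(
  'driver' => 'sqlite',
  'database' => '/var/db/example.sqlite', // keep this outside the web root
);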

READ/WRITE/LOCK: could SQLite handle huge concurrent reads while a few writes happen? Did you come across any problems like this?
And what happens if you kill the FastCGI/php5 process - is there any danger of losing data?

SQLite only allows one process to write to the database at a time. Also, while multiple reads can happen concurrently, all are halted while a write happens (I think). Thus, it will be inappropriate for sites which often need to write to the database - if you have a busy site with the Statistics module enabled, for example (though that's not exactly a good situation to be in with MySQL, either). However, since most web sites do far more reading from the database than writing to it, I think most Drupal sites stand to benefit from SQLite, generally speaking.

As for losing data when you kill the process, SQLite is ACID compliant, so that shouldn't happen any more than it happens in MySQL.

Thanks for your detailed reply

haojiang's picture

Thanks for your detailed reply; I will test it on D7 soon.

How about PHP?

Rockland Steel's picture

How are you running PHP?

...and what do you think of using a lightweight web server (nginx/Lighttpd) coupled with Apache mod_fcgid for better resource usage?

Thanks.

I'm running PHP as a FastCGI

Garrett Albright's picture

I'm running PHP as a FastCGI process. I'd like to try switching to SCGI sometime, but though Lighty apparently supports it, documentation was non-existent last time I checked.

As for running nginx/Lighttpd alongside Apache, I addressed this in the OP. Summary: it's not "better resource usage" at all to be running two web servers at the same time… especially on a RAM-limited VPS.

If you must run SQLite with

jcisio's picture

If you must run SQLite with Drupal 6, there is http://coolsoft.altervista.org/en/drupal-sqlite (without any core hacking).

SQLite was never designed to be used on a large website. A whole DB in one file? No! With Drupal, even the hundred or so files MySQL uses aren't enough (people want to split the node table into different tables depending on node type); splitting data into small files (if done correctly) helps read/write performance.

The web site says: This page

Garrett Albright's picture

The web site says:

This page describes Drupal-SQLite, a patch to make Drupal 6.x work with a SQLite database.

That says "hacks core, kills kittens" to me.

Well, there's a patch, but no

jcisio's picture

Well, there's a patch, but it doesn't hack core (in a sense). The patch provides a SQLite driver in /includes and changes the install.inc file (if you need it), and no more! That means it doesn't touch any core files except install.inc.

In that case, Pressflow

christefano's picture

In that case, Pressflow (which is used in Mercury) is a maniacal manslaughterer. In a good way, though.

lighty fastcgi settings

Milan0's picture

Hi,

Your post really convinced me to switch to FastCGI + XCache.

What are your recommended settings for FastCGI? I have a little more RAM available than you (765MB total, one site, but not running a lightweight config).

What I'm really wondering is what to do with PHP_FCGI_CHILDREN and max-procs.

                   "min-procs" => ?,
                    "max-procs" => ?,
                    "max-load-per-proc" => ?,
                    "bin-environment" => (
                        "PHP_FCGI_CHILDREN" => "?",
                        "PHP_FCGI_MAX_REQUESTS" => "?"),
                    "idle-timeout" => "?",

I'm also interested in your XCache settings!

thanks

Why not php-fpm and apc?

brianmercer's picture

Why not php-fpm and apc?

Re: Why not php-fpm and apc?

Milan0's picture

I thought php-fpm would be for nginx, and Lighty + FastCGI the way to go, but correct me otherwise... :) Are there performance charts somewhere to use as a reference?

Those settings can vary a

Garrett Albright's picture

Those settings can vary a lot, depending on your hardware and how busy you anticipate your site(s) to be. I suggest you check out this page in Lighty's documentation wiki, particularly the "How many PHP processes do I need?" and "Can I have too many PHP processes?" parts. (There's a lot of good info in that wiki, though it's a bit scattered about.)

For what it's worth, here's our fastcgi.server settings:

fastcgi.server             = ( ".php" =>
                               ((
                                 "bin-path" => "/usr/local/bin/php-cgi",
                                 "socket" => "/var/run/lighttpd/php-fastcgi.socket",
                                 "max-procs" => 2,
                                 "bin-environment" => (
                                   "PHP_FCGI_CHILDREN" => "2",
                                   "PHP_FCGI_MAX_REQUESTS" => "10000"
                                 ),
                                 "bin-copy-environment" => (
                                   "PATH", "SHELL", "USER"
                                 )
                              ))
                            )

The numbers are lower on my personal server, both because of the lower RAM limit, but also because the server doesn't get nearly as much traffic.

I think the comments in the xcache.ini file are helpful for configuring it. We have xcache.count = 2, as we seem to have two virtual CPUs to play with, and xcache.size = 96M, giving us two 48MB bins. As we're not using the variable cache, we have xcache.var_size = 0M.
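In xcache.ini terms, that works out to the following (matching the values above; everything else is left at its default):

; xcache.ini (excerpt)
xcache.count    = 2     ; one cache bin per virtual CPU
xcache.size     = 96M   ; total opcode cache: two 48MB bins
xcache.var_size = 0M    ; variable cache disabled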

Thanks again for the extended

Milan0's picture

Thanks again for the extended and clear information!

Things are looking great so far. I can hammer it nice and consistently with the siege benchmark - 10 concurrent session users (all admin) - and get awesome response times, and the site stays up! No more swap and no more defunct Apache processes. Awesome!

I even seem to get more performance out of Varnish now: 4500-5700 requests per second for anonymous users.

from the lighty manual

Milan0's picture

[code]
How many php CGI processes will lighttpd spawn?

lighttpd has three configuration options that control how many php-cgi processes will run:

* PHP_FCGI_CHILDREN (defaults to 1)
* max-procs (default 4)
* min-procs is completely ignored in recent lighttpd releases (there is no adaptive spawning anymore)

When lighttpd starts, it will launch max-procs parent php processes. Each parent process then pre-forks PHP_FCGI_CHILDREN child processes, so lighttpd starts max-procs x (PHP_FCGI_CHILDREN + 1) processes in total. For example, if max-procs is 4 and PHP_FCGI_CHILDREN is 16: 4 * (16 + 1) = 68 (4 watcher processes which do not handle requests, 64 real php backends which serve requests).

If you are using an opcode cache such as eAccelerator, XCache or similar, it's advisable to keep max-procs at a very low number (1 is perfectly fine) and raise PHP_FCGI_CHILDREN instead. Otherwise, those opcode caches will create a separate memory space for each parent process, which is not what one would call "efficient memory usage". If you leave max-procs at 4, you'll end up with four separate opcode memory cache segments.

Note that setting PHP_FCGI_MAX_REQUESTS is recommended to avoid possible memory-leak side effects.
[/code]

http://redmine.lighttpd.net/wiki/1/FrequentlyAskedQuestions

I totally LOVE the

Milan0's picture

I totally LOVE the predictable memory footprint and knowing how many users you'll be able to handle.

It makes scaling up a breeze, and cost-effective!

High CPU

Milan0's picture

max-procs = 1
PHP_FCGI_CHILDREN = 8

Running a benchmark with siege: 10 concurrent authenticated users hammering the front page (siege benchmark mode).
Getting nice response times of < 2 sec for each page request; the average is about 1.2, I guess.

BUT

[code]
top - 21:24:35 up 13 min,  2 users,  load average: 7.05, 4.01, 1.65
Tasks: 115 total,   7 running, 108 sleeping,   0 stopped,   0 zombie
Cpu0 : 92.1%us,  7.9%sy,  0.0%ni,  0.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu1 : 90.0%us,  9.6%sy,  0.0%ni,  0.0%id,  0.0%wa,  0.0%hi,  0.3%si,  0.0%st
Cpu2 : 89.8%us,  8.9%sy,  0.0%ni,  1.0%id,  0.0%wa,  0.0%hi,  0.3%si,  0.0%st
Cpu3 : 92.0%us,  8.0%sy,  0.0%ni,  0.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Mem:   812284k total,  460680k used,  351604k free,    7728k buffers
Swap: 1023992k total,       0k used, 1023992k free,  244632k cached

 PID  USER     PR NI  VIRT  RES  SHR S %CPU %MEM  TIME+   COMMAND
2921  www-data 20  0  209m  42m  31m S   60  5.3  2:02.12 php5-cgi
2917  www-data 20  0  254m 103m  47m R   54 13.1  1:59.29 php5-cgi
2920  www-data 20  0  216m  44m  27m R   52  5.7  1:59.40 php5-cgi
2923  www-data 20  0  212m  42m  28m R   47  5.3  1:52.88 php5-cgi
2922  www-data 20  0  213m  42m  27m R   46  5.3  1:57.38 php5-cgi
2918  www-data 20  0  207m  36m  28m S   46  4.6  1:54.61 php5-cgi
2924  www-data 20  0  213m  42m  27m R   45  5.3  1:57.06 php5-cgi
2919  www-data 20  0  212m  41m  27m R   41  5.3  1:58.81 php5-cgi
2127  mysql    20  0 83124  19m 5368 S    6  2.4  0:16.34 mysqld
2862  www-data 20  0  5280 2044  972 S    1  0.3  0:01.30 lighttpd
2734  nobody   20  0  438m 7448 5164 S    0  0.9  0:01.49 varnishd
2984  root     20  0  8164 2716 2160 S    0  0.3  0:00.43 sshd
3041  root     20  0  2460 1164  884 R    0  0.1  0:00.52 top
3043  root     20  0 96016 2404 1268 S    0  0.3  0:00.97 siege
   1  root     20  0  2524 1344 1060 S    0  0.2  0:00.39 init
   2  root     20  0     0    0    0 S    0  0.0  0:00.00 kthreadd
[/code]

This seems pretty high - is it normal?

That's during a test siege of

brianmercer's picture

That's during a test siege of 10 concurrent non-anonymous users, or that's with normal traffic?

"That's during a test siege

Milan0's picture

"That's during a test siege of 10 concurrent non-anonymous users"

Yes.

I'm running Authcache for logged-in users only.
The funny thing is, when I siege with a session cookie... then at the same time run ab with the same cookie, suddenly the document size for most connections drops by a sheer factor of 10-15, and of course throughput goes up.

So ab seems to be priming the cache, but siege does not... weird conclusion?

The funny thing is, when I siege

dalin's picture

The funny thing is, when I siege with a session cookie... then at the same time run ab with the same cookie, suddenly the document size for most connections drops by a sheer factor of 10-15, and of course throughput goes up. So ab seems to be priming the cache, but siege does not... weird conclusion?

That, or the server dies with something like "MySQL error: Too many connections".

--


Dave Hansen-Lange
Director of Technical Strategy, Advomatic.com
Pronouns: he/him/his

Will a SQL proxy be possible?

haojiang's picture

Will a SQL proxy be possible? Since D7 supports SQLite, would it be possible to write a module that lets Drupal switch between MySQL and SQLite to speed up performance?
When a SELECT happens, Drupal connects to SQLite - that would mean faster reads, right?
When an INSERT happens, Drupal connects to MySQL.
Then synchronize SQLite and MySQL on every cron run or so; while that happens, all SQL connections go to MySQL.

Is this an idea like mysql-proxy?

When a SELECT happens, Drupal

Garrett Albright's picture

When a SELECT happens, Drupal connects to SQLite - that would mean faster reads, right?
When an INSERT happens, Drupal connects to MySQL.
Then synchronize SQLite and MySQL on every cron run or so; while that happens, all SQL connections go to MySQL.

I'm not sure, but I doubt that would be practical.

no mail server

drupcha's picture

@Garrett Albright, what did you mean by not having to set up a mail server on the VPS?
How do you achieve this, please?

You will need a mail transfer

brianmercer's picture

You will need a mail transfer agent like Postfix for sending mail, but you can use DNS MX entries to have incoming mail directed to another service.
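As a sketch, the zone entries are just (hypothetical domain and mail provider):

; DNS zone file excerpt: deliver example.com's mail to an outside service
example.com.    IN  MX  10  mx1.mailprovider.example.
example.com.    IN  MX  20  mx2.mailprovider.example.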

Sadly, last week Google discontinued their free Google Apps for Domains standard edition, which was very good for this purpose. I have read that Microsoft's Outlook.com can still be used for this. Check out http://domains.live.com.

good to know

drupcha's picture

thank you.