My Nginx config complete or sort of with an appendix on TLS/SSL

perusio's picture

I've finally completed my Nginx config for Drupal.

Feel free to comment on it and suggest improvements.

In terms of SSL/TLS I opted for the defaults in Nginx (after 0.8.21), which limit the available protocol versions to SSLv3 or later and also drop support for MD5 as a message digest algorithm and for anonymous Diffie-Hellman key exchange.

I've seen some configs, notably by omega8cc, where there's explicit support for insecure cipher suites and SSLv2. What's the reason for that? Are you trying to support non-modern browsers? Even IE6 speaks SSLv3. So why the choice of insecure protocols and cipher suites?

EDIT: Now I've also set up a Debian repository with the latest version of Nginx packaged for Debian testing or unstable.

Comments

Nice! Thanks for sharing your

Jonah Ellison's picture

Nice! Thanks for sharing your config. I learned quite a bit.

Very nice, thank you. Just

omega8cc's picture

Very nice, thank you. Just added your config to the list in the group header.

As for SSL, my config has to be compatible with some old versions of Nginx still available by default on some older distros to be used with Aegir, and that's why some details may look a bit different. However, I will try to improve it if possible.

Thank you for sharing your config.

Hmm

perusio's picture

Hello Grace,
Thank you for adding my config to the list on the group page. Regarding SSL: I apologize for not being up to par on all that happened in previous versions of Nginx, but AFAIK it has supported SSLv3 from the start. So I don't think enabling SSLv2 is required.

You can always specify:

ssl_prefer_server_ciphers on;
ssl_ciphers HIGH:!ADH:!MD5;
ssl_protocols SSLv3 TLSv1;

According to the changelog these directives were introduced in 0.2.2.

Just an idea.
António

Nicely done, thanks for

Slovak's picture

Nicely done, thanks for sharing!

Looks good, thanks. I don't

brianmercer's picture

Looks good, thanks.

I don't turn off logging for robots.txt. I'm pretty sure that awstats uses that info to help decide who is a spider.

I think that

rewrite ^ $scheme://example.com$request_uri permanent;

needs to be
rewrite ^ $scheme://example.com$request_uri? permanent;

or you end up with double query strings.

Right. Good catch.

perusio's picture

In fact the $request_uri variable already has the query string. So if I tried to access:

www.example.com/index.php?q=node/10

I would get the query string doubled. Fixed now.
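To spell out the fix Brian suggested (example.com and the URI are illustrative): nginx re-appends the query string ($args) after a rewrite unless the replacement ends with a question mark, so:

```nginx
# $request_uri already contains the query string, and nginx appends $args
# again after a rewrite. Without the trailing "?", a request for
# /index.php?q=node/10 would redirect to /index.php?q=node/10?q=node/10.
rewrite ^ $scheme://example.com$request_uri? permanent;
```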

Thanks Brian.

Really nice! Some neat tricks

ximo's picture

Really nice! Some neat tricks I hadn't seen before.

Only one small correction:
limit_conn arbeit 10; on lines 17 and 57 in sites-available/example.com... "arbeit" isn't a common zone name as far as I know ;)

Common?

perusio's picture

Well, I was sort of inspired by Grace's config, where she names the zone gulag. So, being "versed" in pop-culture phrases, I used arbeit; it comes from WWII and also stands for something gruesome and brutal, like a gulag. You'll have to guess where it comes from exactly ;)

I'm close - any help?

ericrdb's picture

I appreciate the great compilation and well documented config. It's helped me learn quite a bit.

I'm very new to nginx and stuck. On a fresh Ubuntu 10.04 install, I used Brian Mercer's PPAs for nginx and php-fpm. I cloned your config and changed example.com to my domain (both the filename and the occurrences in the file). I also verified the correct socket path in your config as well as in Brian's php5-fpm.config.

All services are running, however nothing is being served and there are no entries in nginx's /var/log/nginx/access.log. I'm assuming I'm hitting one of the 444 responses?

My server isn't nameserved yet, only accessible by IP - would that complicate matters? Could anyone point me in right direction?

Thanks!

444 means that no Host header

perusio's picture

is defined or is illegal.

You can see the documentation about server_name and a very good explanation of how it works.

Nginx uses the HTTP Host header to determine which server_name directive it will match. If none matches, the default_server configured through the listen directive handles the request.

This config uses default_server to provide a catch-all "illegal" name '_' that matches anything not legal according to the specified server_name(s).

Forged Host headers are usually an indication of something or someone trying to tamper with the site. Hence, as a security measure, it's wise to block them.

In your case you have two options:

  1. Comment out the server block that specifies the default_server and returns a 444.

  2. Specify a name for your server. Depending on whether the machine is deployed live in the wild or is a development machine accessible only from a private network or the loopback (localhost), you'll need a DNS record for that server_name in the first case or an entry in /etc/hosts in the latter.

If it's a local machine and properly filtered (block outside access to 80 and 443) then you can comment out the server block that returns the 444. That's how I do it. I only use that server block for live machines.
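Roughly, the two options sketched as config (names and values are illustrative, not the exact file):

```nginx
# Option 1: comment out the catch-all block on a private/dev machine.
# (This is the block that returns 444 for forged or missing Host headers.)
#server {
#    listen 80 default_server;
#    server_name _;    # the catch-all "illegal" name
#    return 444;       # close the connection, no response
#}

# Option 2: give the server a real name and make sure DNS (or /etc/hosts)
# resolves it to this machine.
server {
    listen 80;
    server_name dev.example.com;    # hypothetical name
    # ... rest of the vhost config ...
}
```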

HTH,
António

excellent

ericrdb's picture

Thanks so much for your reply, commenting out the server block helped but I was still stuck, even with the IP nameserved.

I finally got it working:

  1. Turns out one cannot simply remove Ubuntu's stock /etc/nginx directory and replace it with a git clone of your code. Your nginx.conf file includes /etc/nginx/sites-enabled/*, but that directory doesn't exist in your repo, so nginx wasn't loading any site configurations.
  2. Once I added the sites-enabled folder, I also needed to copy over example.com to the sites-enabled folder and modify it there.

Once I changed those two items, I got up and running. You might consider updating the repository to include a sites-enabled folder and/or documentation about dropping in your repo and copying over the example.com config to the sites-enabled folder.

Thanks again for your great work on making these available for the community.

--
Eric DB
www.mydbsolution.com

how about this

ericrdb's picture

Re-reading my comment, I'm afraid it may sound aloof or disengaged. Here's one approach to clarifying your readme, under "Introduction" or a new category, "Installation":

Installation
1. Remove Ubuntu's default /etc/nginx configuration directory (or move to your user's /home directory as a backup).
2. Checkout a copy of the git repository to /etc/nginx.
3. Create /etc/nginx/sites-enabled directory.
4. Copy /etc/nginx/sites-available/example.com to /etc/nginx/sites-enabled/<your domain name>.
5. Change the "example.com" references inside the sites-enabled/<your domain name> file to point to <your domain name>.
6. Configure your php communication method (tcp/ip or unix socket path) in drupal_boost_drush.conf and drupal_boost.conf. Depending on how your php is set-up, this might be found in /etc/php5/fpm/*.conf.
7. Reload nginx configuration "/etc/init.d/nginx reload".

Maybe there's a better way? HTH,

Yes

perusio's picture

It can be improved, thanks. It's just that I assumed (incorrectly) that the part about having a script for enabling/disabling a site was clear. I've created a small script that creates/removes the symlink. I'll add the part about php-fpm; I don't use it, so I couldn't have written about it.

Thanks,
António

Done

perusio's picture

I followed your suggestion and created an Installation section in the README. I hope things are clear now. I cannot create the sites-enabled dir since it's empty and git doesn't track empty dirs, AFAIK.

Thanks

Excellent

ericrdb's picture

Looks great - thanks for updating. And, you're right about no empty directories with git, good call.

--
Eric DB
www.mydbsolution.com

This is an interesting site

brianmercer's picture

This is an interesting site for comparing the quality of SSL implementations:

https://www.ssllabs.com/ssldb/index.html

Cool

perusio's picture

I already knew about the SSL Labs rating guide, but I had never tried to run a test. I did now. I used a very old site (my first Drupal site, almost 7 years old and still running) with my config. I got an 88 (A). Really cool considering I'm using a Level 1 free certificate. It's PCI compliant. Really nice. Alas, it's not FIPS compliant.

I also need to investigate further the issue of session resumption. Supposedly Nginx supports it, and ssl_session_cache is enabled in the config. So I don't understand the:

Session resumption    No (IDs assigned but not accepted)

I'm getting. But then I'm not versed in the intricacies of SSL/TLS.

Anyway, looking at the leader board, getting 88 and seeing 91 as the best rating is heartwarming. Also, PCI compliance signifies that Nginx groks ecommerce sites.

Fixed

perusio's picture

Updated the config to make SSL/TLS session resumption work.

At least the SSL socket must be set as default_server.

listen [::]:443 ssl default_server;

It also works with the regular HTTP server as the default.

This is so because session resumption takes place before any TLS extension is enabled, namely Server Name Indication. The ClientHello message requests a session ID from a given IP address (server). Therefore the default server setting is required.

EDIT: After a debate in the nginx mailing list the best option is to move the ssl_session_cache directive to the http context. Now that's how it's set up in my config.
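As a sketch of the final setup described in the EDIT (cache name and size are illustrative), the session cache sits at the http level so every virtual server shares it:

```nginx
http {
    # Shared SSL session cache for all virtual hosts, per the
    # mailing-list discussion referenced above.
    ssl_session_cache shared:SSL:10m;

    server {
        listen [::]:443 ssl default_server;
        # ... certificates and site config ...
    }
}
```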

What are your thoughts

brianmercer's picture

What are your thoughts concerning

keepalive_timeout            75 20;

A value of 75 seconds seems fine for SSL but very long for non-ssl. And why send the browser 20 seconds?

Thanks & fixed

perusio's picture

You're right.

It doesn't make sense.

  1. Most clients ignore the Keep-Alive header. The values should be equal.

  2. Yes 75 (Nginx's default) is too big for a decent client. Although it makes sense with SSL, since there's a lot more going on below the HTTP layer.

  3. I've fixed it. Now the http server gets 10 seconds and the https server gets 75. In each case both values are equal. Even if most clients don't care about the Keep-Alive header, it's always good to send it; some clients will pay attention to it.
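Sketched out with the values mentioned above (the second argument of keepalive_timeout is what gets sent in the Keep-Alive header):

```nginx
server {
    listen 80;
    keepalive_timeout 10 10;    # plain HTTP: short, equal values
    # ...
}

server {
    listen 443 ssl;
    keepalive_timeout 75 75;    # HTTPS: longer, the TLS handshake is costly
    # ...
}
```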

I'm wondering since

brianmercer's picture

I'm wondering, since nginx.conf includes everything in sites-enabled, isn't it loading all the non-server files too, like blacklist.conf, drupal.conf, etc.? It would load them at the http level and then overwrite them at the server level? Do they need to be in another directory?

Not really

perusio's picture

The sites-enabled directory has just the sites that are enabled, meaning there's a symlink to sites-available for each. The symlink is created manually or by a script, in my case nginx_ensite, which I've talked about before in this group. Only the files that are symlinked are loaded.

You're right. I misread it.

brianmercer's picture

You're right. I misread it.

Hmm. Drupal 7 and https

perusio's picture

Did anyone try to run D7 with https?

I tried using this config and:

  1. If https is forced I get a 404.

  2. Without https forced the theme isn't rendered.

I tried the

<?php
$conf['https'] = TRUE;
?>

and also with FALSE and the result is the same. What's your experience?
Thanks

EDIT: It works if I set

<?php
$base_url = 'https://example.com:489';
?>

No dice without that. I'm running it on a non standard port.

My setup is a little

brianmercer's picture

My setup is a little different.

It was working fine with Drupal7-beta1, but I tried installing with Drupal7-beta3 today and it's giving errors while installing the modules during the initial install. Did you install beta3?

With beta1 I made no alterations to the settings.php. It's using the default without base_url defined.

Yes

perusio's picture

It's beta3. Without the $base_url it doesn't work at all :(

On a different note, I'm working on the Portuguese Drupal community site, and when trying to enable SSL I get a very strange error, which I suspect is from PHP CGI. On the browser I get "No input file specified"; using curl I get a 404. Has anyone ever seen such a thing? If I revert the site back to plain HTTP everything works OK.

It seems that D7 beta still needs work.

Try     fastcgi_param HTTPS

brianmercer's picture

Try

    fastcgi_param HTTPS on;

in your php locations.

You'll probably need a separate include file for SSL e.g.

server {
  listen 69.164.210.108:80;
  server_name e.brianmercer.com;

  root /var/www/e.brianmercer.com/public;
  access_log /var/log/nginx/e.brianmercer.com.access.log;

  include /etc/nginx/config/drupal7-default;
}

server {
  listen 69.164.210.108:443 ssl;
  server_name e.brianmercer.com;

  root /var/www/e.brianmercer.com/public;
  access_log /var/log/nginx/e.brianmercer.com.access.log;

  ssl_certificate /etc/ssl/mycerts/www.brianmercer.com_combined.crt;
  ssl_certificate_key /etc/ssl/private/www.brianmercer.com.key;

  include /etc/nginx/config/drupal7-secure;
}

Thanks

perusio's picture

I tried that, but no dice. It also happens that on my machine I had beta2, not beta3. I upgraded and yes, it gives a lot of errors. So I went back to beta2. drupal-pt.org runs beta3, so I think reverting back to beta2 might do the trick.

EDIT: Well, it doesn't work on the drupal-pt site, but it works here. I downloaded beta2 using drush, did a cache clear all, and it's working with the $base_url setting.

Drupal 7-beta3 works without

omega8cc's picture

Drupal 7-beta3 works without issues with Nginx and SSL enabled/forced in Aegir. I'm using standard Octopus Aegir install and the result is: https://skitch.com/omega8cc/rnnq2/fullscreen#lightbox

There is no need for base_url set at all. But of course you should have fastcgi_param  HTTPS  on;. Compare how it is done in the Nginx config for Aegir.

Yes I'm able

perusio's picture

To get it working, but I have to include the fastcgi.conf file at the server context. Strange, since it works flawlessly with D6 at the http context.

As for the drupal-pt.org machine, I think it is something related to OpenVZ and the special kernel it needs. It always gives the "No input file specified" error, even when all the fastcgi parameters are set at the specific location.

This site runs on Nginx and

omega8cc's picture

This site runs on Nginx and has a rating of 93: https://www.ssllabs.com/ssldb/analyze.html?d=www.governmentexchange.com so it is possible. However, they probably enabled ONLY TLSv1 (SSLv3 has to be disabled) to get the FIPS-ready rating, and they are using a patched (or older) Nginx with a newer openssl, since it shows: "Secure Renegotiation Supported". The 88 rating is the standard result with default SSL Nginx settings.

I think I'm using the

brianmercer's picture

I think I'm using the defaults and get 91: https://www.ssllabs.com/ssldb/analyze.html?d=www.brianmercer.com

To get to 93 I'd have to exclude SSLv3 like you said and that seems to eliminate support for MSIE6, at least in my virtual WinXP IE6 machine. Although, these guys get to 93 with sslv3 and 128-bit ciphers: https://www.ssllabs.com/ssldb/analyze.html?d=login.paylocity.com so I'm not sure how that works.

It looks like using stronger

omega8cc's picture

It looks like using a stronger key, RSA / 4096 bits instead of RSA / 2048 bits, helps to get a better rank. Interesting that for me "Secure Renegotiation" is "Not supported". Maybe openssl is not upgraded (to the latest) on my Debian test VPS? Otherwise I'm using the latest Nginx without any patches, so renegotiation should (still) be disabled by default?

Of course it is not a good idea to eliminate SSLv3, I tried it only for testing there.

[EDIT] Eliminating SSLv3 is required to get the FIPS-ready rating, while at https://www.ssllabs.com/ssldb/analyze.html?d=login.paylocity.com they got a higher rank thanks to also enabling TLS 1.1 and 1.2.

By the way, Safari supports TLS but Apple doesn't reveal which version - see also: http://en.wikipedia.org/wiki/Transport_Layer_Security#Browser_implementa...

Ahh yes. They bumped up

brianmercer's picture

Ahh yes. They bumped up their score with that TLS 1.2.

I don't know what affects Secure Renegotiation. I'm not doing anything special. My OpenSSL package is from the Ubuntu 10.04 repo: version 0.9.8k 25 Mar 2009.

Well I get secure renegotiation working

perusio's picture

I'm running Debian stable like you, but mixed and matched with stuff from testing, unstable and even experimental (gnutls for example) using apt pinning.

I think to get FIPS compliance you need 4096 bit keys.

The test is a bit dumb, since it doesn't support SNI. I have several https servers on a single IP and the test is unable to get the server identification. It fails for all but one virtual host.

Hmm

perusio's picture

I think the support for TLSv1.1 is not very robust in OpenSSL; you need version 1.0.0 to get it, AFAIK. I'm running 0.9.8o-3 (which fixed the latest vuln). I've suggested on the Nginx mailing list a competing crypto module for Nginx that uses GnuTLS instead of OpenSSL. That way we would get rid of the insecure SSLv2 and gain support for TLSv1.1 and TLSv1.2, without forgetting SSLv3 for those people running utterly crappy browsers like IE6.

After all there's a mod_gnutls module now for Apache.

if in php.ini

Xaber's picture

if in php.ini cgi.fix_pathinfo != 0

rewrite in location ~ .php$

location ~ .php$ {
  if ( -f $request_filename ) {
    fastcgi_pass unix:/tmp/php-cgi/php-cgi.socket;
  }
  fastcgi_index index.php;
  fastcgi_param SCRIPT_FILENAME /scripts$fastcgi_script_name;
  include fastcgi_params;
}

Insecure config?

perusio's picture

First, I think the advisable thing to do in security terms is to have cgi.fix_pathinfo = 0 in the php.ini file. Also, your regex should be: \.php$. I think it's better to enumerate all PHP files that are allowed to be run and end the config with something like:

location ~* ^.+\.php$ {
       return 404;
}

I think it's a safer approach.
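Spelled out, a minimal sketch of that whitelist approach (the socket path and script name are illustrative):

```nginx
# Enumerate the PHP scripts that are allowed to run...
location = /index.php {
    include fastcgi_params;
    fastcgi_pass unix:/tmp/php-cgi/php-cgi.socket;
}

# ...and end with a catch-all that 404s any other PHP file.
location ~* ^.+\.php$ {
    return 404;
}
```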

Can't start Nginx

kone23's picture

Hi,

I'm really new to nginx - so I might be doing something wrong here.
I've followed your instructions closely but when I try to start Nginx, I get this error:

Starting nginx: [emerg]: bind() to [::]:80 failed (98: Address already in use)
[emerg]: bind() to [::]:80 failed (98: Address already in use)
[emerg]: bind() to [::]:80 failed (98: Address already in use)
[emerg]: bind() to [::]:80 failed (98: Address already in use)
[emerg]: bind() to [::]:80 failed (98: Address already in use)
[emerg]: still could not bind()
nginx.

I understand that the port Nginx usually listens to is already in use, but I have no idea by what. Apache does not run on my server...

Any advice appreciated :)

Thank you

Restart and not reload

perusio's picture

The config assumes hybrid sockets, so each socket supports both IPv6 and IPv4.

Try doing a restart (/etc/init.d/nginx restart) instead of a simple reload.

Nothing wrong with using an IPv6 address as long as your OS supports IPv6.

You can see which TCP sockets are being used with lsof -i tcp or using netstat like Brian suggests.
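For reference, a hedged sketch of the hybrid-socket idea (whether a single [::] listener also accepts IPv4 depends on the OS and its ipv6only setting):

```nginx
# One dual-stack socket: IPv6 with IPv4 connections mapped in,
# which is what this config assumes.
listen [::]:80;

# If that bind fails on your system, fall back to a plain IPv4 socket:
# listen 80;
```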

in config file use listen

Xaber's picture

in config file use

listen 80;

not

listen [::]:80;

Is there anything listed on

brianmercer's picture

Is there anything listed on port 80 in

netstat -plnt

Thank you all - still a remaining problem

kone23's picture

Thank you all for helping me with that.
I replaced my listen directive with:
listen 80;
and nginx was able to start!

Yet now I get a 502 Bad Gateway. Do I also have a problem with PHP-CGI not listening to the right port?
Here is the result of netstat -plnt:

Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 127.0.0.1:9000          0.0.0.0:*               LISTEN      11825/php5-cgi 
tcp        0      0 127.0.0.1:3306          0.0.0.0:*               LISTEN      11526/mysqld   
tcp        0      0 0.0.0.0:80              0.0.0.0:*               LISTEN      21863/nginx    
tcp        0      0 0.0.0.0:30000           0.0.0.0:*               LISTEN      11601/sshd     
tcp        0      0 0.0.0.0:25              0.0.0.0:*               LISTEN      11959/master   
tcp6       0      0 :::30000                :::*                    LISTEN      11601/sshd 

Sorry for being such a newbie - and thanks again for any help.

Your PHP CGI

perusio's picture

Is listening on a TCP socket. My config assumes a UNIX socket. Do the following:

Replace fastcgi_pass unix:/tmp/php-cgi/php-cgi.socket; with fastcgi_pass 127.0.0.1:9000;.
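In other words (the location pattern is illustrative; the key change is the fastcgi_pass line):

```nginx
location = /index.php {
    include fastcgi_params;
    # Was: fastcgi_pass unix:/tmp/php-cgi/php-cgi.socket;
    # php5-cgi is listening on TCP, per the netstat output above:
    fastcgi_pass 127.0.0.1:9000;
}
```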

Thanks Perusio, It worked !!!

kone23's picture

Thanks Perusio,

It worked !!!

Dropped Connections

kone23's picture

So I was able to get Nginx to work and serve normal Drupal PHP pages - or pages cached by Boost - thanks to you guys' advice.

The only problem that I have now is that Nginx serves white pages sometimes, or can't load all js or css files. In Safari, I get the message:

Safari can’t open the page “http://whatever.net/” because the server unexpectedly dropped the connection. This sometimes occurs when the server is busy. Wait for a few minutes, and then try again.

I turned off all caching - compression - and Boost - I still have the same problem.
Any idea why ?

I run a Pressflow install on a Media Temple (ve) Server with 512 MB - the smallest one - maybe that's the problem?
I was formerly using Apache2 and it was working fine - it could handle a few users on the site - and I suppose Nginx should perform even better than Apache?

Thank you all

Well

perusio's picture

What do the logs say? Have you tried it with another browser or using curl or wget? That's a very anomalous situation. Nginx can handle much more load than Apache and consumes much less resources.

I haven't ever used Safari, so I can't relate to that. But there are people on the list complaining about Safari and keep-alive connections. By default Nginx disables keep-alive for Safari; there's now a patch, included in 0.9.0 and later, that allows you to control that setting.

That symptom seems to be nothing new: E.g., http://forum.nginx.org/read.php?2,4743,4744

Logs say nothing

kone23's picture

Both Firefox and Safari get white screens - Safari throws that error, which I thought might be useful.

My Logs from Nginx say nothing! Weird.

It seems like Nginx drops the connection while loading either the entire page or, sometimes, just the images or js/css files.
It's really annoying.

As a reminder, I use pressflow, not drupal. I don't know if that's part of the problem.

Thanks

Can you check the headers?

perusio's picture

In Firebug, can you check the headers being sent? I have no experience with Media Temple; my understanding is that it is a shared hosting platform, so it could be that you're CPU challenged. Just a random guess. Nginx makes heavy usage of OS features like epoll and sendfile, meaning it takes full advantage of modern OSes.

My suggestion would be to install the site on your own machine and see if you can reproduce that behaviour. It could even be something related to PHP and the kernel. It's strange that there's nothing in the logs; that possibly points to an OS/machine setup issue. I'm just guessing; it's impossible to account for the combinatorial explosion of possibilities.

Share your site's URL if you feel comfortable and perhaps we can help you more.

Headers checked

kone23's picture

Hey Perusio,

Thanks for following up :)

So I've checked the headers, but it did not really help me. I noticed a few issues, though, with the Ad module calling a PHP script directly. It returns a 404 error, which is normal behavior considering your config, right?

I will use your contact form to send you my site URL.

Yes

perusio's picture

That's correct. You have to add the script like this (I'm assuming that serve.php is the relevant script; I have no experience with that module):

location ~* ^/(?:index|serve|boost_stats)\.php$ {
    fastcgi_pass unix:/tmp/php-cgi/php-cgi.socket;
    # Filefield Upload progress
    # http://drupal.org/project/filefield_nginx_progress support
    # through the NginxUploadProgress module.
    track_uploads uploads 60s;
}

As for the other issues, try increasing the keepalive_timeout: start with 65 65 and, if that works, lower the value until the issue surfaces again.

No chance yet

kone23's picture

Thank you, I have tried changing the keepalive_timeout but did not get any success. I still get white screens and some js/css/pictures won't load - randomly.

I even tried to downgrade my php version to 5.2.10. No better result.

I'm stuck again. Any help will be highly appreciated :)

Controlling the CGI process

perusio's picture

How are you controlling the PHP CGI? Are you using an init script? Have you stated that the process must (re)spawn when serving requests?

AFAIK, there are two ways of controlling the PHP CGI:

  1. Using php-cgi and an init script.

  2. Using php-fpm that has its own controlling layer.

I use 1. because I'm too lazy to rebuild the gazillion Debian packages with the php-fpm patch for 5.2.14. I've had no problems with php-cgi so far. I use monit for monitoring the php-cgi process and restarting it if needed.

Brian and Grace can chime in on their experience with php-fpm.

CGI process controlled

kone23's picture

Hi Perusio,

Sorry to get back to you that late.

So I was using php-cgi and an init script at first - then heard php-fpm was better - so I recompiled PHP to use that. I implemented monit as well.

Yet, I was having the same problem as before:

  • Certain files : js, css, images not loaded
  • Some pages not loaded at all
  • Nothing in the logs

I ended up using another, simpler configuration - with the same config for PHP and CGI. It works now.

I am sorry I was not able to use your config, which integrates everything that I needed, especially Boost.
It's also a little frustrating to not understand what was going wrong.

Let me know if you would like to troubleshoot that - I am ready to help.

Thanks again,
Nicolas

Yes definitely

perusio's picture

I want to know what may be causing problems.

Can you post the config you're using here or in any pastebin kind of service?

Thanks,
António

Access to Drupal guts is allowed

akamaus's picture

Hi,
recently I tried the config from git://github.com/perusio/drupal-with-nginx.git

I replaced my /etc/nginx and tweaked example.com to point to my test site.
I was surprised to see that all files inside the 'modules' directory are freely accessible.

Is it a bug or a feature?

I fail to see how that can be done

perusio's picture

Can you provide an example of a URL that exposes the modules directory?

some example urls

akamaus's picture

Of course.

For example, this url gives me the code of the module I'm developing right now:
http://porolon.maus/sites/all/modules/webform_sms/webform_sms.module

This one from drupal core works too:
http://porolon.maus/modules/taxonomy/taxonomy.info

Brian said, in reply to my original question, that .module, .inc and similar files are blocked. But it seems they are not. I tested this on nginx-0.7.64. porolon.maus is an alias in /etc/hosts; it points to an address inside my VPN.

I understand css and js must be served from within module directories, and that security by obscurity is not a good thing. But still…

Forgot to mention, http://porolon.maus/files/test.php is executed by PHP. I'm new to Drupal and not sure how hard it is to get PHP code into the files directory, but that scares me.

I'm afraid

perusio's picture

I can't reproduce the issues you reported. Suffice to say that, for example,

Forgot to mention, http://porolon.maus/files/test.php is executed by PHP. I'm new to Drupal and not sure how hard it is to get PHP code into the files directory, but that scares me.

can never happen since there's a last regex in the drupal_boost.conf and/or drupal_boost_drush.conf that matches any PHP file. You should get a 404.

# Any other attempt to access PHP files returns a 404.
location ~* ^.+\.php$ {
    return 404;
}

You can verify that the URIs you mentioned are not accessible by trying the site drupal-pt.org. That's a site that runs D7 with Nginx dev release and uses my config.

my blunder

akamaus's picture

Well, I guess I forgot to disable chef-client and it silently reverted to my old config. All is fine now. Sorry for the noise; I should have double-checked all that stuff before complaining. Thanks for the great work, Perusio.

Thanks

perusio's picture

But I'm just curious: is your chef template a recipe in the wild or just something you tried? If it's a recipe in the wild then someone needs to mark it as poisonous.

Yeah. Those things shouldn't

brianmercer's picture

Yeah. Those things shouldn't happen if you're using perusio's config.

# Replicate the Apache <FilesMatch> directive of Drupal standard
# .htaccess. Disable access to any code files. Return a 404 to curtail
# information disclosure. Hide also the text files.
location ~* ^.+(\.(?:htaccess|txt|engine|inc|info|install|module|profile|po|sh|.*sql|theme|tpl(?:\.php)?|xtmpl)|code-style\.pl|/Entries.*|/Repository|/Root|/Tag|/Template)$ {
    return 404;
}

should prevent the first issue.

Please check your nginx configuration and post it here if it's different than https://github.com/perusio/drupal-with-nginx

Unfortunately you can't

brianmercer's picture

Unfortunately you can't restrict the module directories because modules have their own .css and .js in there, and those must be publicly accessible.

Perusio's config and mine do block access to .inc, .txt, .module, .info files in there (and anything else that's not css/js/jpg), but it's really not a big deal since they don't contain any sensitive stuff, just the stock modules from drupal.org. Most configs don't restrict that stuff.

If you've put sensitive stuff into module directories you're doing something wrong.

I was under the impression

perusio's picture

that the OP was talking about PHP files. Static asset files (images, CSS, JS) are of course accessible. Hence my reply.

If you block access to them the client won't be able to render the page as you intended.

Hello perusio, Thanks for

threading_signals's picture

Hello perusio,

Thanks for this nginx conf; I haven't digested everything but I already picked up some good practices. I saw that Brian Mercer is contributing to Nginx documentation over at nginx.org and there's an nginx conf in the boost issue queue modified by him, another one in groups by Grace, and this one. I've started with a D6 conf from the nginx.org site by another author, and am in the process of merging the various confs for my environment.

Grace's conf is probably the most comprehensive, Brian's modification seems easier to start with, and I'm looking through the lines of this one.

Some questions/comments/nitpicking/appreciation:

I know you use Debian, but I'm thinking perhaps you built from source. I've installed from squeeze packages, but have yet to determine whether that build has modules such as upload progress support. It comes with a conf.d directory, and git doesn't allow empty directories? Anyway, I put everything aside from nginx.conf into that directory.

The auth basic support, ssl conf lines, and nginx_ensite are a nice touch. Here's my suggestions.

Script support doesn't carry over to all OSes, I think; perhaps limit support to Debian/Ubuntu and wait for a module.

There are two server blocks. There are some wildcard options: .example.com can be used with Domain Access/DNS wildcards to redirect. Not the best way to redirect if performance is a top concern and you rely on Domain Access, but I'm using memory-based APC. You can also do: server_name example.com www.example.com; I'm not sure why rewrites to the www subdomain are a concern, unless robots.txt and SEO come into play. If people type www.example.com more than example.com, you can switch the order. I'm in the middle of figuring out how to get rid of the trailing slash in the front-page URL.

Take the directive that disables all methods except HEAD, GET and POST, put it inside blacklist.conf, and rename the file. Also include a directive to stop image hijacking in the renamed file.

What is:

location ^~ /progress {
  report_uploads uploads;
}

? I'll also take a look at your documentation.

Since nginx loads stuff sequentially into memory, I try to do the same, and also order things alphabetically for regex matching. For the regexes in nginx, I don't see the value of multiple locations to avoid regex, and would rather spend time on comments than on scrolling.

Nginx probably does its own form of regex at the location level anyway, or doesn't have specialized indexes available, like in a relational database. I would think that combining locations would be fine, since it should all be loaded in memory, unless it can be proved otherwise. An understanding of how nginx handles regex would help. Some regex implementations still try to match after a match hasn't been found, or cannot be found; e.g. with the regex /pat and the URI bat, some implementations would stop immediately after the first character, since it's a literal. If Igor put the same care into location-based matching as he did into his regex implementation, I think using character literals in order, restraining matches (any type of checks), and using wildcards would be fine. It probably just depends on the order the regex matching is done in. So if locations are in order, the regex matches should also be done in order. That also goes for the nginx.conf directives, where I'm blindly putting stuff in the order I think it loads, and grouping lines so I can manage them.

I'm going to take a look at Grace's config after this, since I noticed it has og directives. Afterwards, the fastcgi_params directives. I didn't see a need for additional files either, like fastcgi.conf, since I put that stuff at the bottom of the fastcgi_params file.

I've made changes to my files, and a lot of it looks good so far, thanks. =)

Well

perusio's picture

Let me address each issue separately.

  1. I build my own Nginx packages. Yes, they have upload progress support. They're built for sid since, as described at http://debian.perusio.net, I roll a mixed squeeze/testing/unstable setup using apt pinning. Please check the README for an explanation of the upload progress settings. You need a module for that to work with filefield.

  2. All exact locations and the corresponding configs are implemented as a hash table. Meaning that given a key (an exact location) we get the config and the processing related to that location. No order involved. A purely declarative thing.

  3. Regex based locations are procedural in nature, meaning that they're matched sequentially. The first that matches a given pattern will be the one that gets processed. Hence working with regex based locations is prone to errors due to the side effects of their procedural nature.

  4. Script support will work in any OS that has a filesystem layout similar to what Debian does. I.e., nginx in /usr/sbin and the sites-available / sites-enabled scheme for managing virtual hosts.

  5. Yes I can do www.example.com, but I want example.com to be the canonical domain. If you're happy with both hostnames then by all means omit the first server block and add the www.example.com hostname to server_name.

  6. If you disable all methods then you'll get a 405 (I haven't tested it) and you'll send server information along the way. You can use error_page to "rewrite" the returned status to a 444. With 444 you get an empty reply.
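One common way to implement point 6 (a sketch, not necessarily the exact config): test the method and return 444 directly, so no 405 carrying server headers is ever emitted.

```nginx
# Allow only GET, HEAD and POST; anything else gets the connection
# closed (444) instead of a 405 response carrying server information.
if ($request_method !~ ^(?:GET|HEAD|POST)$) {
    return 444;
}
```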
