Hacker News
Battle-ready Nginx – an optimization guide (zachorr.com)
221 points by funkenstein on Nov 17, 2013 | 40 comments



What's the purpose of the article if you can find the same information in the documentation at nginx.org/en/docs/?

And, btw, you are giving bad advice. You are wrong here: "By default, nginx sets our keep-alive timeout to 75s (in this config, we drop it down to 10s), which means, without changing the default, we can handle ~14 connections per second. Our config will allow us to handle ~102 users per second."

No, keepalive connections don't limit nginx in any way. Nginx closes keepalive connections when it reaches its connection limit.
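For reference, the directives involved look like this (the 10s value is the one from the article, not a recommendation):

    events {
        worker_connections 1024;   # the actual cap on concurrent connections per worker
    }

    http {
        keepalive_timeout 10;      # idle keepalive connections are closed after 10s;
                                   # this does not cap requests per second
    }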

"gzip_comp_level sets the compression level on our data. These levels can be anywhere from 1-9, 9 being the slowest but most compressed. We’ll set it to 6, which is a good middle ground."

No, it's not a "middle ground". It kills your server's performance. With 6 you will get 5-10% better compression, but at half the speed.
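If CPU is the concern, a lower level can be set explicitly; a sketch (the value 2 is illustrative — benchmark on your own content):

    gzip on;
    gzip_comp_level 2;   # 1-9; higher levels burn much more CPU for only slightly smaller output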

"use epoll;"

What's the purpose of this? The docs say: "There is normally no need to specify it explicitly, because nginx will by default use the most efficient method."

"multi_accept tells nginx to accept as many connections as possible after getting a notification about a new connection. If worker_connections is set too low, you may end up flooding your worker connections. "

No, you have completely misunderstood this directive. It isn't related to worker_connections at all.


And even more:

"send_timeout 2;" Mobile clients from another continent will "thank you" for this setting when they cannot open your site.

"error_log /var/log/nginx/error.log crit;" A way to stay unaware when something is wrong with your server. Nginx produces not only "crit" errors but a bunch of very useful warnings that need attention.

"limit_conn addr 10;" Chrome and Firefox usually open more than 10 connections. And btw, have you ever heard of NAT?

"Most browsers will open up 2 connections" 15 years ago this was true.
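A sketch of less aggressive alternatives to the criticized settings (values are illustrative, not recommendations):

    send_timeout 30;                           # more forgiving to slow mobile/intercontinental clients
    error_log /var/log/nginx/error.log warn;   # keep useful warnings, not just crit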


"Chrome and Firefox usually open more than 10 connections"

"and our value is 10,"

Both comments are similar in that there's no explanation why.

The correct value for limit_conn needs to strike a balance between whatever your page designer or testing add-ons measured under normal operation and DoS/DDoS harm reduction (not prevention, just... reduction). Setting it to 100000 is probably a bad idea unless you're intentionally doing something really bizarre.
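As a concrete sketch, assuming a per-IP zone like the article's (the limit of 40 is purely illustrative — measure your own pages first):

    limit_conn_zone $binary_remote_addr zone=addr:10m;
    limit_conn addr 40;   # headroom for modern browsers plus a few clients behind one NAT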

I liked the article for what it is: "explain which settings in nginx can be fine-tuned in order to optimize performance for handling a large number of clients."

It does a really poor job of explaining how to close the loop with benchmarking and monitoring, followed by methodically determining which setting to fine-tune, and it doesn't say much about config-file version management either. But that's OK: it self-described as a shopping list of performance-oriented config options, and at that specific sub-task it delivered successfully.

One minor area of improvement would have been to bracket the story with what comes before and after in the process... "your monitoring systems and operations procedures indicate xyz, which implies you should ..."


  > Chrome and Firefox usually open more than 10 connections.
According to browserscope.org, both browsers open only 6 connections per hostname.


For http connections that's true. Websockets have a separate pool though, and a much higher cap (200 in Firefox). Nginx recently added websocket support.


And gzip_min_length should probably be set to the MTU size
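In config terms (1400 is a rough approximation of the payload that fits in one 1500-byte MTU packet):

    gzip_min_length 1400;   # responses that already fit in a single packet gain nothing from gzip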


Looks like the author of the linked blog post is reading HN. They have modified the article in an attempt to address your criticisms.


.. and that's a good thing, the circle of HN life.


For anyone who is interested in nginx tuning, please follow the H5BP nginx repo: https://github.com/h5bp/server-configs-nginx, which is very well documented already and still being maintained.



Thanks, this is useful—the comments especially so.

It's a shame there isn't one for reverse (HTTP) proxies: fine-tuning proxy_buffers, buffering, and other related settings when dealing with a back-end application.


There is a great guide on the perfplanet blog about using Varnish with nginx to make WordPress fly. I recently blogged about this. http://serenecode.org/2013/11/nginx-php-fpm-5-5-zend-opcache...


This post was worth it just for me to discover that this exists! Thank you!


I have to agree. If only I had known about this repo back when I had a VPS. The comments for each option explain it well. The nginx docs are helpful, but sometimes it's nice to see a more detailed approach, even though not every option will be right for "your" circumstances. It is nice to see.


Not sure about the H5BP config setting the keep-alive timeout to 20s (I'd need to test what resource consumption looks like compared to, say, 5s).

The OP recommends turning off gzip in MSIE6 when it was only the very first versions of IE6 that had a problem with gzip and it was fixed in later versions.


>> The OP recommends turning off gzip in MSIE6 when it was only the very first versions of IE6 that had a problem with gzip and it was fixed in later versions.

And IE6 users are known far and wide for how fervently they upgrade. ;-)


No configuration is perfect in all scenarios. The good thing about using GitHub is that you can submit a pull request or report an issue when needed :)

[1] Why `gzip_disable` is added: https://github.com/h5bp/server-configs/pull/92

[2] Why `gzip_disable` is removed: https://github.com/h5bp/server-configs/issues/145


additionally, better for security: https://gist.github.com/plentz/6737338


Thank you! Do you know of any equivalent for Apache?


Interesting stuff. Thanks for posting it


Could line 21 of nginx.conf actually be any more arrogant or presumptuous?


Good introduction to nginx. However, the guide states: "Keep in mind that the maximum number of clients is also limited by the number of socket connections available on your system (~64k)".

This is incorrect. The system can open ~64k connections per [src ip, dst ip] pair. In the case of a webserver listening on just 1 port, it means you can open 64k connections per remote IP, which is why some people can write about how they handle a million connections on a single server.
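The ~64k figure is just the 16-bit ephemeral port space per [src ip, dst ip] pair; the usable range is a kernel setting (shown here widened — check your distribution's default):

    # /etc/sysctl.conf
    net.ipv4.ip_local_port_range = 1024 65535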


That's true for incoming connections, but if you're proxying back to something else then the limit does apply to the outgoing ones.


Only if you don't use HTTP/1.1 on the proxy side. In the proxy definition:

    proxy_http_version 1.1;
    proxy_set_header Connection "";
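To actually get upstream connection reuse, nginx also needs an upstream block with the keepalive directive; a minimal sketch (the name `backend` and the address are hypothetical):

    upstream backend {
        server 127.0.0.1:8080;
        keepalive 32;                         # idle upstream connections cached per worker
    }

    server {
        location / {
            proxy_pass http://backend;
            proxy_http_version 1.1;
            proxy_set_header Connection "";   # clear it so the upstream connection isn't closed
        }
    }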


Also useful for nginx: adding the pagespeed module https://github.com/pagespeed/ngx_pagespeed

"ngx_pagespeed speeds up your site and reduces page load time by automatically applying web performance best practices to pages and associated assets (CSS, JavaScript, images) without requiring you to modify your existing content or workflow."


pagespeed is impressively helpful; it was great news when nginx support was announced (a tipping point for me). I just wish I'd known going in that I had to compile from source for SPDY instead of using apt.


I'd like to add that using [gzip_static][1] might also be a good idea since nginx doesn't have to gzip your files over and over again and you can gzip the files yourself with the highest compression possible (reducing file size).

[1]: http://nginx.org/en/docs/http/ngx_http_gzip_static_module.ht...
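A minimal sketch, assuming you pre-compress at deploy time (the path is hypothetical; build the .gz files with e.g. `gzip -k -9`):

    location /static/ {
        gzip_static on;   # if /static/app.css.gz exists and the client accepts gzip, serve it directly
    }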


"Chances are your OS and nginx can handle more than “ulimit -a” will report, so we’ll set this high so nginx will never have an issue with “too many open files”"

If the limit is a hard limit it doesn't really matter what nginx decides to do, does it? I had to increase the limit by hand, outside of nginx.


I would love to see some before and after in the wild stats using this configuration. Whilst it would be an apples versus oranges comparison, it would at least show that this config works compared to the default. Maybe a Blitz.io rush test?


If you set an application to use more file descriptors than ulimit -n returns, then either the application will be smart and fix its configuration by using MIN(configured limit, ulimit -n), or it'll start dropping requests because it assumes it's allowed to open more file descriptors than it actually is.

Increasing an application's maximum file descriptors past ulimit -n is bad advice. The proper way is to increase the limit in /etc/security/limits.conf (note that assigning a limit to * applies it to every user but root, so if you really want to assign a limit to every user, you must assign it to both * and root) and then increase the application's max file descriptors. Restarting the application is usually required, although on newer versions of Linux, changing limits for running processes is possible.
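Concretely (65536 is an illustrative value; pick one that fits your workload):

    # /etc/security/limits.conf -- note that * does not cover root
    *       soft    nofile  65536
    *       hard    nofile  65536
    root    soft    nofile  65536
    root    hard    nofile  65536

    # nginx.conf -- stay within the system limit
    worker_rlimit_nofile 65536;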


You can also use "sudo service nginx reload" instead of restarting. Helps if it's in use and you don't want to drop any active users.


My favorite comment from this whole blog: "(warning, a neckbeard and an operating systems course might be needed to understand everything)"

That's actually true of a fair amount of what people fiddle around with. I see a lot of tuning advice based on what I can only assume is guessing. I guess this is as good a "caveat emptor" as anything.


I would love to see optimization guides with actual benchmarking.

It's like saying `for (var i = ...` is faster than `.forEach` without giving any numbers.

Always test for performance; do not blindly follow guides or copy-paste configuration files into your web server.


I wish I could find a guide like this for Apache as well. Computing max clients and other options seems like pure guess work and constant failure =/



This isn't really 1-to-1 with the article. I meant something like max_clients, max_requests_per_child, etc.

The best I know of is my co-workers' efforts here: https://github.com/genesis/wordpress/pull/64


Thank you for this write up. Out of sheer curiosity since I love benchmark numbers, how many concurrent users do you think this config can handle?


I wish I had read this post before my devnull-as-a-Service was on HN.


You explain the "what" but not the "why".


breif --> brief



