Hacker News
Optimizing Nginx for High Traffic Loads (martinfjordvald.com)
95 points by ichilton on April 30, 2011 | hide | past | favorite | 22 comments



I would also recommend that you gzip any static assets and use http://wiki.nginx.org/HttpGzipStaticModule to serve these pre-gzipped files.

No point in re-compressing the same files over and over again. I normally gzip all static assets as part of the deployment process.
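A minimal deployment-time sketch of that idea (the directory name and file extensions are illustrative, not anything the commenter specified):

```shell
#!/bin/sh
# Pre-compress static assets during deployment so nginx's gzip_static
# module can serve the .gz files directly instead of recompressing
# on every request.
precompress_assets() {
    find "$1" -type f \( -name '*.css' -o -name '*.js' -o -name '*.html' \) |
    while IFS= read -r f; do
        # gzip -c writes to stdout, so the original file is kept;
        # gzip_static needs it as a fallback for clients that don't
        # send Accept-Encoding: gzip.
        gzip -9 -c "$f" > "$f.gz"
    done
}

precompress_assets "${ASSETS_DIR:-public}"
```

On the nginx side, `gzip_static on;` in the relevant location block tells nginx to look for a precompressed `.gz` file next to the requested one.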


Perhaps even more important than avoiding recompression: if Nginx is reading gzipped files rather than full-size files, it reduces the pressure on the filesystem cache. That increases the odds that the system won't have to go to disk to find the requested resource.


Using this module and WP-supercache's gzipping feature has made a wordpress blog network I run astonishingly fast.


Note the absence of any particular setting which will cause it to crash if you get a link to your blog retweeted.

glares at Apache


Some useful Nginx optimizations, but I've always found that the bottlenecks are far worse elsewhere. Nginx is a champ at serving static files, and I have it proxy all requests upstream to Apache (with mod_wsgi, since the majority of what I work with is Django apps). Optimizing the DB schema and queries has had the biggest performance impact by far, followed by Apache tweaks, and finally Nginx config changes. But I really love this setup for how easy it makes spinning up a Django app.


If your app uses a database, you've got bigger problems than optimizing Nginx for high traffic loads.

That said, I absolutely love Nginx because it's so damn lightweight, and easy to setup and maintain.


Even if your app does not use a database, you've still got bigger problems. Optimizing Nginx is often a bit of a joke, as you don't really get a lot for your effort. I think reducing the I/O impact of Nginx is probably the best thing you can do; CPU-wise you're kinda stuck with the awesome stock performance.


At scale, all components cause "bigger" problems: http server, app server, database/datastore, file system, load balancer, etc. Saying the database is a bigger problem is only true in specific situations.


> The biggest optimization happened when you decided to use Nginx and ran that apt-get install...

I recommend against doing this on Ubuntu Lucid without first adding the unofficial nginx PPA, unless you want to get stuck with v0.7.65.
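For reference, adding the PPA on Lucid looks roughly like this (assuming the commonly used ppa:nginx/stable archive; check the PPA page for which version it currently ships):

```shell
sudo add-apt-repository ppa:nginx/stable
sudo apt-get update
sudo apt-get install nginx
```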


I actually use apt-get to install it first, which gives me boot scripts. At least on Debian, you can apt-get remove it and the init scripts will still be there, meaning I don't have to go find one on the internet. Might not work outside of Debian :)


I should do that. I assume there are no issues with uninstalling nginx, adding the PPA, and then reinstalling nginx, right?


Thank you for pointing this out, I always tell people to do this so I should really take my own advice.


Well, it doesn't negate your statement. apt-get is still a great way to install nginx.


I don't know; I compile from source. The issue is that modules have to be compiled in at build time, and everyone needs a different set, so there is rarely a PPA with what you want unless you build your own.

It's a build model that doesn't work so well with binary package managers; it works better with, say, Gentoo's USE-flag model.

Oh, and if you are not using any nginx modules, you are missing out on a lot of its power: you should be able to move big chunks of your app, like auth, right into nginx, use redis and memcached directly, and so on.
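As one example of pushing work into nginx, the stock memcached module can answer cache hits without touching the app at all. A sketch, with the upstream address, port, and key scheme all illustrative:

```nginx
location / {
    # Look the URI up in memcached first (ngx_http_memcached_module).
    set $memcached_key "$uri";
    memcached_pass 127.0.0.1:11211;
    default_type  text/html;
    # On a miss (or memcached error), fall through to the app server.
    error_page 404 502 504 = @app;
}

location @app {
    proxy_pass http://127.0.0.1:8000;
}
```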


I think he's got a little math problem. Many browsers will open up to 4 connections (possibly more, if the config is tweaked) to overlap requests for content, so you might want to consider this when configuring your worker_connections.


This is not a math problem; it's a perception problem. In fact, your assumption about the number of connections a browser will open is also wrong: modern browsers open 6 to 8 connections by default.

http://stevesouders.com/ua/report.php


Each worker can handle many thousands of concurrent connections so this isn't really an issue. This is where nginx and Apache differ. Generally you want one worker per CPU core.


I know how nginx works, I use it all the time. He is saying that if you have 1024 worker_connections, at 2 connections per user, you'll get 512 users per worker process. But with most modern browsers it is more like 4 connections per user, so with worker_connections = 1024 that is more like 256 users. If you really want to serve 512 users, you need to set worker_connections to 2048. That is the math problem I was highlighting, a simple division problem.

I believe you are talking about worker_processes
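The arithmetic can be sketched directly in nginx.conf (the figures here are illustrative):

```nginx
worker_processes  2;        # roughly one per CPU core

events {
    # Per worker. Capacity ~= worker_processes * worker_connections
    # / connections-per-user; at 4 connections per browser that is
    # about (2 * 2048) / 4 = 1024 concurrent users.
    worker_connections  2048;
}
```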


Oops -- you're right I misread your comment. Sorry about that.


Too bad there's not much said about optimizing latency. Serving a lot of traffic is not that difficult, but shaving 50ms off the time it takes to serve your files can make a big difference.


This is kind of outside the scope of Nginx.


Optimizing == Customizing as per your business needs



