Perhaps even more important than not recompressing: if Nginx is reading the small gzipped files rather than the full-size originals, it reduces pressure on the filesystem cache. That increases the odds that the system won't have to go to disk to find the requested resource.
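For anyone who hasn't used it, this is the gzip_static module (your build needs to include it); roughly:

    location /static/ {
        # serve foo.css.gz directly if it exists next to foo.css
        gzip_static on;

        # fall back to on-the-fly compression for anything without a .gz sibling
        gzip on;
        gzip_types text/css application/javascript;
    }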
Some useful Nginx optimizations, but I've always found that the bottlenecks are far worse elsewhere. Nginx is a champ at serving static files, and I have it proxy all other requests upstream to Apache (with mod_wsgi, because the majority of what I work with is Django apps). Optimizing the DB schema and queries has given the biggest performance gains by far, followed by Apache tweaks, and finally Nginx config changes. But I really love the setup I use for how easy it makes getting a Django app spun up.
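The nginx half of that setup is pretty small, for what it's worth; roughly something like this (the port and paths are just placeholders):

    upstream django_backend {
        server 127.0.0.1:8080;   # Apache/mod_wsgi listening locally (example port)
    }

    server {
        listen 80;

        # nginx serves static files itself
        location /static/ {
            root /srv/myproject;   # placeholder path
        }

        # everything else is proxied through to Apache
        location / {
            proxy_pass http://django_backend;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
        }
    }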
Even if your app doesn't use a database, you've still got bigger problems elsewhere. Optimizing Nginx is often a bit of a joke because you don't really get a lot for your effort. I think reducing the IO impact of Nginx is probably the best thing you can do; CPU-wise you're kinda stuck with the already-awesome stock performance.
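On the IO side, most of the win is sendfile plus the open_file_cache directives; something like this (numbers are only illustrative):

    sendfile on;                                       # let the kernel copy files straight to the socket

    open_file_cache          max=10000 inactive=30s;   # cache open fds, sizes and mtimes
    open_file_cache_valid    60s;                      # re-check cached entries after 60s
    open_file_cache_min_uses 2;                        # only cache files hit more than once
    open_file_cache_errors   on;                       # cache "file not found" lookups too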
At scale, all components cause "bigger" problems: http server, app server, database/datastore, file system, load balancer, etc. Saying the database is a bigger problem is only true in specific situations.
I actually use apt-get to install it at first, which gives me boot scripts. At least on Debian, you can apt-get remove it and the init scripts will still be there, meaning I don't have to go find one on the internet. Might not work outside of Debian :)
I don't know, I compile from source. The issue is that modules have to be compiled into the binary at build time and everyone needs a different set, so there's rarely a PPA with what you want unless you build your own.
It's a build model that doesn't work so well with binary package managers; it fits better with something like Gentoo's USE-flag model.
Oh, and if you're not using any nginx modules you're missing out on a lot of its power. You should be able to move big chunks of your app, like auth, right into nginx, talk to Redis and memcached directly, and so on.
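To give a flavour, with the auth_request and memcached modules compiled in (Redis needs a third-party module) you can do things along these lines; the /auth endpoint and "backend" upstream here are hypothetical:

    location /private/ {
        auth_request /auth;            # subrequest: 2xx lets the request through, 401/403 rejects it
        proxy_pass http://backend;     # "backend" upstream assumed defined elsewhere
    }

    location = /auth {
        internal;
        proxy_pass http://backend/check_auth;   # hypothetical auth endpoint in the app
        proxy_pass_request_body off;
        proxy_set_header Content-Length "";
    }

    location /cached/ {
        set $memcached_key $uri;        # look the page up in memcached by URI
        memcached_pass 127.0.0.1:11211;
        error_page 404 502 = @app;      # cache miss or error: fall through to the app
    }

    location @app {
        proxy_pass http://backend;
    }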
I think he's got a little math problem. Many browsers will open up to 4 connections (possibly more if the config is tweaked) to overlap requests for content, so you might want to take that into account when configuring your worker_connections.
This is not a math problem, it's a perception problem. In fact, even your assumption about the number of connections a browser will open is wrong: modern browsers open 6 to 8 connections per host by default.
Each worker can handle many thousands of concurrent connections, so this isn't really an issue. This is where nginx and Apache differ. Generally you want one worker per CPU core.
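In config terms that's just the following (the worker_connections value is only an example):

    worker_processes  auto;          # one worker per core; "auto" needs a reasonably recent nginx, otherwise set your core count

    events {
        worker_connections  4096;    # per-worker connection limit
    }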
I know how nginx works, I use it all the time. He is saying that if you have 1024 worker_connections, at 2 connections per user, you'll get 512 users per worker process. But the truth with most modern browsers is that it's 4 connections per user, so with worker_connections = 1024 that's more like 256 users. If you really want to service 512 users, you need to set worker_connections to 2048. This is the math problem I was highlighting, a simple division problem.
Too bad there's not much said about optimizing latency. Serving a lot of traffic is not that difficult, but shaving 50ms off the time it takes to serve your files can make a big difference.
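The usual latency-oriented knobs, as a rough starting point (values are just examples):

    tcp_nopush  on;          # with sendfile, send the response headers and the start of the file in one packet
    tcp_nodelay on;          # don't let Nagle's algorithm delay small writes on keep-alive connections
    keepalive_timeout  30;   # reuse connections so repeat requests skip the TCP handshake
    keepalive_requests 500;  # how many requests a single keep-alive connection may serve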
No point in re-compressing the same files over and over again. I normally gzip all static assets as part of the deployment process.