Overhead is not negligible because you are only printing hello world; the only I/O here is writing to the response socket. When you do other I/O, where the requests depend on other sources, the overhead will be negligible because your HTTP requests will actually be waiting for some other I/O to happen, which is the case for almost every program. Plus, you get nice features with Nginx that you don't have in plain Go.
I'm wondering how this works for real world situations in which the HTTP server must slowly trickle responses back to the client. In the past I've seen major speed ups by using a store and forward proxy.
Since this server always prints hello world, it would stand to reason that Nginx should be caching the result.
The initial purpose of this test is to compare the different ways of connecting Nginx to Go.
It doesn't make sense to test against a heavy task in Go instead of this single static string. Nginx in front of Go will not perform better under these circumstances. It will only perform better if static content is being served directly by Nginx or if caching is enabled, which is not the purpose of this test.
Following what some folks suggested, I also made some recent changes, like swapping ab for wrk and tuning Nginx to disable gzip and enable keep-alive connections.
I will take the advice provided the day I deploy a Go app that prints a single static string to the stream. In other words, absolutely never.
This is the core problem with such benchmarks: that 'overhead' quickly becomes proportionally irrelevant when you're actually doing something worth doing. But with Nginx in front, suddenly you have so much flexibility without reinventing the wheel, including load balancing, mixing server technologies with ease, not dealing with static junk in your Go code, proxy caching (recently used this to really good effect with a Go service, putting zero caching in the Go code and instead using standard HTTP expiration headers to let Nginx do the magic), anti-DoS, streaming compression, security, SPDY, and on and on.
I'm not sure how big the issue is, but I would add the ability to run your Go app as a restricted user.
Using Nginx or another web server in front of your app means that you won't have to deal with privilege separation yourself. Just run the Go binary in a chroot as a restricted user and let Nginx deal with the binding on port 80/443.
In the upstream block add:
In the server (or location) block add:
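The config snippets those two lines refer to appear to have been lost; given the keep-alive tuning mentioned earlier in the thread, they are presumably the standard Nginx upstream keep-alive directives, something like the following (the upstream name, port, and pool size are illustrative assumptions):

```nginx
upstream gobackend {
    server 127.0.0.1:8080;
    keepalive 32;          # pool of idle keep-alive connections to the Go app
}

server {
    listen 80;

    location / {
        proxy_http_version 1.1;          # keep-alive upstream requires HTTP/1.1
        proxy_set_header Connection "";  # strip the client's Connection header
        proxy_pass http://gobackend;
    }
}
```

Without `proxy_http_version 1.1` and the cleared `Connection` header, Nginx opens a new upstream connection per request and the `keepalive` pool has no effect.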