This is what immediately comes to mind when I see the charts.
It is also interesting to see that only one Node process can barely serve 10k connections.
----
After checking the source at [0]: the Node dummy uses the plain `http` module to serve requests, while the Go dummy uses `http.ListenAndServe`, which serves each connection in its own goroutine, hence the higher CPU utilization.
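A minimal sketch of the goroutine-per-connection behavior mentioned above. It uses `net/http/httptest` so no fixed port is assumed; a real server would call `http.ListenAndServe` directly, as the benchmark dummy does:

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"net/http/httptest"
)

// fetchFromServer starts a throwaway HTTP server and issues one request to it.
// net/http accepts each connection and serves it in a fresh goroutine, which
// is why a Go server spreads load across all CPUs without extra configuration.
func fetchFromServer() string {
	handler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprint(w, "hello")
	})

	// httptest picks a free port; a real server would instead call
	// http.ListenAndServe(":8080", handler).
	srv := httptest.NewServer(handler)
	defer srv.Close()

	resp, err := http.Get(srv.URL)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, err := io.ReadAll(resp.Body)
	if err != nil {
		panic(err)
	}
	return string(body)
}

func main() {
	fmt.Println(fetchFromServer())
}
```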
I'd avoid the cluster module; it's not the recommended way to scale Node.js. It exists mostly as a way to make naive Node.js benchmarks perform better in multi-CPU comparison testing... like yours! :-) Many cloud providers charge per CPU, so Node's mostly-not-threaded approach is reasonable, IMO and in the opinion of many other users. Node is scaled by spinning up more instances, each instance being one of the cheapest "one CPU" variety. I'd be more interested in seeing a version of your benchmark that limited each of the language instances to a single CPU, and/or that spun up multiple Node instances equivalent to one multi-CPU instance of Go/Elixir --- this latter may sound weird, but it's a "cost equivalent" comparison, which is ultimately what's important: transactions served per $$.
One of the benchmark goals was to test the "scheduling" efficiency of each runtime. In other words, to show how well it scales given a many-core instance, which is often more economical in transactions/$$ terms.
Question: how is using the cluster module different from spinning up multiple Node instances?
Well, I believe few people use the built-in http module in Go. Not sure if your testing would allow for third-party frameworks, but the Iris framework loves to claim it's the fastest Go web server. Gorilla and Gin are also popular.
As for the structure of it, you would likely have everything split out into goroutines, with a worker pool of goroutines ready to ferry the data from request to backend and back to client.
I don't have any problem with their Go implementation. On the contrary, it looks very impressive to me, though I have little knowledge of Go.
I just want to point out that Node runs on a single thread, so it does not max out CPU utilization. It would be nice if the author ran the benchmark again with multiple Node.js processes.
[0] https://gitlab.com/stressgrid/dummies