Hacker News

This immediately comes to mind when I see the charts.

It is also interesting to see that a single Node process can barely serve 10k connections.

Edit: after checking the source at [0]: the Node dummy uses the plain `http` module to serve requests, while the Go dummy uses `http.ListenAndServe`, which spawns a goroutine per connection, hence the higher CPU utilization.

[0] https://gitlab.com/stressgrid/dummies




Then it's an unfair comparison if they don't even use Node.js to its full potential. Apples to oranges.


Author here. Planning to run the same test using the cluster module with one worker per CPU.

What would be the most performant way to serve HTTP in Go?


I'd avoid the cluster module; it's not the recommended way to scale Node.js. It exists mostly as a way to make naive Node.js benchmarks perform better in multi-CPU comparison testing... like yours! :-) Many cloud providers charge per CPU, so Node's mostly-not-threaded approach is reasonable, in my opinion and that of many other users. Node is scaled by spinning up more instances, each instance being one of the cheapest one-CPU variety. I'd be more interested in seeing a version of your benchmark that limited each of the language instances to a single CPU, and/or that spun up enough Node instances to be equivalent to one multi-CPU instance of Go/Elixir. This latter may sound weird, but it's a cost-equivalent comparison, which is ultimately what's important: transactions served per dollar.


One of the benchmark goals was to test the "scheduling" efficiency of each runtime. In other words, to show how well it scales given a many-core instance, which is often more economical in transactions-per-dollar terms.

Question: how is using the cluster module different from spinning up multiple Node instances?


fasthttp is the fastest server for Go; it's going to crush Elixir by something like 10x. It's really fast, on par with the fastest C++/Rust libraries.



Well, I believe few people use the built-in http package in Go. Not sure if your testing would allow for third-party frameworks, but the Iris framework loves to claim it's the fastest Go web server. Gorilla and Gin are also popular.

As for the structure of it, you would likely have everything split out into goroutines, with a worker pool of goroutines ready to ferry the data from the request to the backend and back to the client.
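That worker-pool shape can be sketched roughly like this (the names, pool size, and "backend" stand-in are purely illustrative, not from the benchmark):

```go
package main

import (
	"fmt"
	"sync"
)

// job represents one request's work; result carries the reply back.
type job struct {
	id     int
	result chan string
}

// startPool launches n worker goroutines that ferry jobs to the
// "backend" (here just a formatted string) and send replies back.
func startPool(n int, jobs <-chan job) *sync.WaitGroup {
	var wg sync.WaitGroup
	for i := 0; i < n; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for j := range jobs {
				j.result <- fmt.Sprintf("handled %d", j.id)
			}
		}()
	}
	return &wg
}

func main() {
	jobs := make(chan job)
	wg := startPool(4, jobs)

	// Each "request" submits a job and waits for its reply.
	for i := 0; i < 8; i++ {
		r := make(chan string, 1)
		jobs <- job{id: i, result: r}
		fmt.Println(<-r)
	}
	close(jobs)
	wg.Wait()
}
```

In practice net/http already gives you a goroutine per connection, so an explicit pool like this mainly helps bound concurrency toward a shared backend.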


There are so many options to choose from. But don't use Iris.

See https://www.reddit.com/r/golang/comments/57w79c/why_you_real...


Why would you not use the standard library's http package? I would!


That’s the idiomatic way to handle requests in Go. What’s the problem?


I don't have any problem with their Go implementation. On the contrary, it is very impressive to me, as someone with little knowledge of Go.

I just wanted to point out that Node runs on a single thread, so it does not max out CPU utilization. It would be nice if the author ran the benchmark again with multiple Node.js processes.



