
>> "While single-process performance is quite good, eventually one CPU is not going to be enough;"

Every article on this sort of thing seems to just gloss over this part. Why isn't one CPU enough? What's using it all? Serving static files certainly won't, and neither will doing simple things...

Does anyone have any use cases / experience for when this was the case? :/

edit: Fine, downmodding fanboys. I get it. Use whatever you like. Meh




Since Node.js is still a fairly new technology, people are starting out with 'hello world' examples such as static file servers. Obviously specialised servers like Traffic Server or nginx handle these cases faster.

That said, Node is a programming environment, so the question is: on a multi-core machine (which all data-centre machines are), how can we scale across all the cores so we can do much harder stuff?

What about a Node system handling 100k concurrent long-poll connections? When some of those connections are active they could be really active, requiring all the cores, etc. There are lots of scenarios in which more compute power is useful.
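
For concreteness, here's a minimal sketch of the usual direction, assuming Node's built-in cluster module; the port number is arbitrary and not anything from the article:

    var cluster = require('cluster');
    var http = require('http');
    var numCPUs = require('os').cpus().length;

    if (cluster.isMaster) {
      // Master: fork one worker per core; the workers share the listening socket.
      for (var i = 0; i < numCPUs; i++) {
        cluster.fork();
      }
    } else {
      // Worker: each one runs an ordinary HTTP server on the same port.
      http.createServer(function (req, res) {
        res.writeHead(200, {'Content-Type': 'text/plain'});
        res.end('handled by worker ' + process.pid + '\n');
      }).listen(8000);
    }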


I agree there are cases where more CPU power is useful, but I'm just not sure it's a good idea to, first, assume you need it before it's actually an issue, and second, to split the whole thing (network I/O) across multiple cores rather than just shelling the CPU-heavy stuff out to other cores.

Network I/O isn't CPU-heavy. There's no reason to increase complexity and slow down throughput in the hope that more CPUs will help...
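
As a rough sketch of what "shelling out the heavy stuff" can look like (worker.js and the message format are made up here, purely for illustration): keep the network-facing process single-threaded and hand the expensive computation to a forked child over the IPC channel:

    var fork = require('child_process').fork;
    var http = require('http');

    // Hypothetical child that does the expensive computation and replies with
    // { id: ..., result: ... } for each { id: ..., input: ... } it receives.
    var worker = fork(__dirname + '/worker.js');

    var pending = {};
    var nextId = 0;

    worker.on('message', function (msg) {
      // Match the child's reply back to the waiting HTTP response.
      var res = pending[msg.id];
      delete pending[msg.id];
      res.writeHead(200, {'Content-Type': 'text/plain'});
      res.end(String(msg.result));
    });

    http.createServer(function (req, res) {
      var id = nextId++;
      pending[id] = res;
      worker.send({ id: id, input: req.url });  // heavy lifting happens in the child
    }).listen(8000);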


Part of node.js's appeal comes from writing all the server code in Javascript, even when it'd be more efficient to break pieces out into separate programs. In that case, worrying about CPU usage in the server itself makes some sense.

Not saying I agree with the design choices (I'm more of a multiple language / "hard and soft layers" person, and I don't care for Javascript), but I think that's the reason.


If you're just serving static files why would you be using Node?


So what is the common use case for Node, and what in that use case eats CPU?


Application logic is not free.

The article mentions that using NodeJS as a simple HTTP proxy with no application logic can sustain only 2100 reqs/s before a 2.5GHz Xeon is maxed out. NodeJS uses CPU more efficiently than other HTTP stacks, but its I/O engine is not infinitely scalable.
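
For reference, the kind of bare pass-through proxy being benchmarked is roughly this (a sketch only; the upstream host and ports are placeholders, not the article's setup):

    var http = require('http');

    http.createServer(function (clientReq, clientRes) {
      // Forward each incoming request to the upstream server unchanged.
      var upstreamReq = http.request({
        host: '127.0.0.1',          // placeholder upstream host
        port: 8080,                 // placeholder upstream port
        path: clientReq.url,
        method: clientReq.method,
        headers: clientReq.headers
      }, function (upstreamRes) {
        // Relay the upstream status, headers and body back to the client.
        clientRes.writeHead(upstreamRes.statusCode, upstreamRes.headers);
        upstreamRes.pipe(clientRes);
      });

      clientReq.pipe(upstreamReq);  // stream the request body through
    }).listen(8000);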


>> "The article mentions that using NodeJS as a simple HTTP proxy with no application logic can sustain only 2100 reqs/s before a 2.5GHz Xeon is maxed out."

That sounds fairly lame to me. Proxying network traffic isn't a CPU-heavy operation; worst case, you have to move a few bits of memory around.


That's specious: you really have to know what you're proxying, and Squid and Varnish supposedly get much less throughput (Google around: http://deserialized.com/reverse-proxy-performance-varnish-vs... ). "Moving bits" in memory is not a measure of anything.


Operating-system overhead, and more likely a massive number of packets per second, will easily peg a single core. I did some tests with nginx (comparable to node.js in this respect) and it easily pegged a dual-CPU quad-core Xeon with 8GB RAM (all 8 CPUs were at 90+%) at a paltry 8055.77 rps over 2 x 10Gbit Ethernet, but that is more likely an OS / fine-tuning limitation.



