I'm sure there is plenty of data/benchmarks out there and I'll let that speak for itself, but I'll just point out that there are two built-in core modules in Node.js, worker_threads (threads) and cluster (processes), which are very easy to bolt onto an existing plain http app.
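For example, a minimal sketch of the cluster route (the port and worker count here are placeholders):

    const cluster = require('cluster');
    const http = require('http');
    const os = require('os');

    if (cluster.isPrimary) { // cluster.isMaster on Node < 16
      // the primary only forks; incoming connections are distributed to workers
      for (let i = 0; i < os.cpus().length; i++) cluster.fork();
    } else {
      // each worker runs the same plain http server on a shared port
      http.createServer((req, res) => {
        res.end('handled by worker ' + process.pid + '\n');
      }).listen(8000);
    }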
So think of it this way: you want to avoid calling malloc() to increase performance. JavaScript does not have the semantics to avoid this. You also want to avoid looping. JavaScript does not have the semantics to avoid it.
If you haven’t had experience with actual performant code, JS can seem fast. But it is a Huffy bike compared to a Kawasaki H2. Sure, it is better than a kid’s trike, but it is not a performance system by any stretch of the imagination. You use JS for convenience, not performance.
IIRC V8 actually does some tricks under the hood to avoid mallocs, which is why Node.js can be unexpectedly fast (I saw some benchmarks where it was only 4x slower than equivalent C code) - for example it recycles objects of the same shape (which is why it is beneficial not to modify object structure in hot code paths).
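A sketch of the usual shape advice (my example, not V8's actual internals):

    // stable shape: every point gets x, y, z up front,
    // so all points share one hidden class
    function makePoint(x, y) {
      return { x, y, z: 0 };
    }

    const p = makePoint(1, 2);
    p.z = 3;   // fine: z already exists in the shape

    // what to avoid in hot paths: bolting properties on later,
    // which forces a hidden-class transition and can deopt callers
    const q = makePoint(4, 5);
    q.w = 6;   // new property -> new hidden class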
JITs are a great magic trick, but it's nowhere near guaranteed you'll get good, steady performance out of one, especially when the workload is wide rather than narrow.
Hidden classes are a whoooole thing. I’ve switched several projects to Maps for lookup tables so as not to poke that bear. Storage is the unhappy path for this.
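Roughly what I mean (the key and value here are made up):

    const key = 'user:123';

    // object-as-dictionary: dynamic keys and deletes churn hidden classes
    const tableObj = {};
    tableObj[key] = 42;
    delete tableObj[key];   // deletes are especially unkind to shapes

    // Map is built for this: no hidden classes involved
    const table = new Map();
    table.set(key, 42);
    const hit = table.get(key);
    table.delete(key);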
(to be fair the memory manager reuses memory, so it's not calling out to malloc all the time, but yes a manually-managed impl. will be much more efficient)
Whichever way you manage memory, it is overhead. But the main problem is that the language does not have zero-copy semantics, so lots of things trigger a memcpy(). And if you also need to call malloc(), or even worse have to make syscalls, you are hosed. Syscalls aren’t just functions; they require a whole lot of orchestration to make happen.
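For instance (illustrative, using typed arrays and Buffers in Node):

    const buf = new Uint8Array(1024);

    // slice() allocates a new backing store and memcpys into it
    const copy = buf.slice(0, 512);

    // subarray() is the rare zero-copy view; most of the standard
    // library defaults to copying
    const view = buf.subarray(0, 512);

    // and crossing the string/bytes boundary always copies (UTF-16 -> UTF-8)
    const bytes = Buffer.from('x'.repeat(4096), 'utf8');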
JavaScript engines are also JITted, which is better than a straight interpreter but, outside of microbenchmarks, worse than compiled code.
I use it for nearly all my projects. It is fine for most UI stuff and OK for some server stuff (though Python is superior in every way). But I would never want to replace something like nginx with a JavaScript-based web server.
V8 does a lot of things to prevent copies. If you create two strings, concat them, and assign the result to a third var, no copy happens (c = a + b; c is a rope). Objects are by reference... Strings are interned... The main gotcha with copies is when you need to convert from the internal representation (UTF-16) to UTF-8 for output; it will copy then.
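Something like this sketch (the rope is not observable from JS, but the copy at the encoding boundary is):

    const a = 'x'.repeat(1 << 20);
    const b = 'y'.repeat(1 << 20);

    // c = a + b: V8 can represent c as a rope (ConsString), no copy yet
    const c = a + b;

    // writing it out forces a flatten plus a UTF-16 -> UTF-8 conversion copy
    const bytes = Buffer.from(c, 'utf8');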
I was a Java programmer out of school. My first specialization became performance. It’s the same problem with JavaScript. There’s at least an order of magnitude speed difference between using your head and just vomiting code, and probably more like a factor of forty. Then use some of the time you saved to do deep work, hit an even two orders of magnitude, and it can be fast. If you’re not an idiot, that is.
bcantrill has an anecdote about porting some code to Rust and finding it way faster than he expected. After digging he found it was using B-trees under the hood, which was a big part of the boost. B-trees being a giant pain in the ass to write in the donor language but a solved problem in Rust. It’s a bit like that: you move the effort around and it evens out.
> With JS, we have a problem that (mainstream) software hadn't faced before, which is client server architecture, that the client might be a cruddy cheap phone somewhere and/or on a dodgy link.
2000 was the peak of the dot-com boom. In the US, half of Silicon Valley was developing for client-server and nearly every person with a computer was using client-server applications.
To give a clearer example: Napster, where I worked, had 60 million users, mostly in the US, and we were far from the largest.
The cruddy cheap phone one might complain about today is several times more powerful than the servers we used then, and the dodgy connections of today are downright sublime compared to dial-up modems over noisy phone lines.
The internet was beyond a frontier; it was clearly happening. But it's insane to me to pretend that most software was internet software.
Yes, that was the excitement, the enticing future & an amazing wave breaking upon us. Many were tuned in. But Napster is far more the exception that proves the rule. Very, very, very few consumers had any experience with networked client-server software, and I think you’d have to be an absolute fucking moron to pretend that most of the world, when they were thinking of computers, were thinking of networked client-server systems. Get fucking real. I appreciate that we the alpha geeks saw what was happening. But that doesn't embody what was actually happening, doesn't capture what people saw or felt.
Napster was the fucking envy. It pioneered something else. I admit it was mainstream, but as an exception that proved the rule, as something totally totally different that we all loved.
Architecture is far more important than runtime speed. (People are so easily swayed by "JS SUCKS LOL" because of experiences with terrible & careless client-server architectures, far more than js itself being "slow".)
The people ripping into js suck up the interesting energies, and bring nothing of value.
If we are discussing C10K we are by definition discussing performance. JavaScript does not enter this conversation any more than BASIC. Yes of course architecture matters. Nobody has been arguing otherwise. But the point is that if you take the best architecture and implement it in the best available JS environment you are still nowhere close to the same architecture implemented in a systems language in terms of performance. You are welcome to your wishful thinking that you do not need to learn anything besides JavaScript when it comes to this conversation. But no matter how hard you argue it will continue being wishful thinking.
We are discussing tech where having a custom userland TCP stack is not just a viable option but nearly a requirement and you are talking about using a lighter JS framework. We are not having the same conversation. I highly recommend you get off Dunning-Kruger mountain by writing even a basic network server using a systems language to learn something new and interesting. It is more effort than flaming on a forum but much more instructive.
Techempower isn't perfect, but running at about a third of the speed of the best of the best web stacks is really not bad. Your attempt to belittle is the same tired heaping of scorn, when the data shows otherwise. But it's so cool to hate!
Libuv isn't the most perfect networking stack on the planet, no. It could be improved. We could figure out new ways to get io_uring, or something eBPF-based, or even a DPDK userland bypass stack running. Being JS does not limit what you do for networking, in any way. At all. It adds some overhead to the techniques you choose, requires some glue layer. So yes, some CPU and memory efficiency losses. And we can debate whether that's 20% or 50% or more. Techempower shows it's half an order of magnitude worse (10^0.5 ≈ 3.2x, i.e. running at roughly a third of the throughput). Techempower is a decent proxy for what is available, what you get, without being extremely finicky.
Maybe that is the goal; maybe this really is a shoot-for-the-moon effort where you abandon all existing work to get extreme performance, as C10K was. But that makes the entire lesson not actually useful or practical for almost everyone. And if you want to argue against Techempower being a proxy for others, for doing extreme work & what is possible at the limit, then you have to consider a more extreme JS networking stack than what comes out of the box in Node.js or Deno too, a custom engine on that side as well.
It's seriously sad cope to pretend these are totally different domains, that JS is inherently never going to have good networking, that it can't leverage the same networking techniques you are trying to vaunt over JS. The language just isn't that important, and the orders of magnitude (<<1 according to the only data/evidence in this thread) are not actually that significant.
Look you seem to have made up your mind and are unwilling to listen to knowledge or experience. The tech industry does not do well with willful ignorance so I wish you luck with all that.
No, I'm pointing out that you have no evidence at all & aren't trying to speak reasonably or informatively, that you are bullying a belief you have without evidence.
I'm presenting evidence. I'm clarifying as I go. I'm adding points of discussion. I'm building a case. I just refuse to be met and torn down by a bunch of useless hot air saying nothing.
Nothing about the current JS ecosystem screams good architecture; it’s hacks on hacks and a culture of totally ignoring everything outside of your own little bubble. Reminds me of the early-2000s JavaBeans scene.
Reminds me of early 2000s JS scene, in fact. Anything described as "DHTML" was a horrid hack. Glad to see Lemmings still works, though, at least in Firefox dev edition. https://www.elizium.nu/scripts/lemmings/
Unqualified broadscale hate making no assertions anyone can look at and check? This post is again such wicked nasty Dark Side energy, fueled by such empty vacuous nothing.
It was an incredible era that brought amazing aliveness to the internet. Most of it was pretty simple & direct, and would be more intelligible to folks today than today would be intelligible & legible to folks from then.
This behavior is bad & deserves censure. The hacker spirit is free to be critical, but in ways that expand comprehension & understanding. To dismiss a whole era as a "horrid hack" is injurious to the hacker spirit.
Worker threads don't handle the connection I/O, so a single-process Node.js app still has a much lower connection limit than languages where you can handle I/O on multiple threads. Obviously the second thing you mention, i.e. multiple processes, "solves" this problem, but at the cost of running more than one process. For web apps it probably doesn't matter too much (although it can hurt performance, especially if you cache stuff in memory), but there are things where it just isn't a good trade-off.
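A sketch of the split (the fib workload and port are made up, and spawning a worker per request is naive; you'd pool them in practice):

    const http = require('http');
    const { Worker } = require('worker_threads');

    // the CPU-bound part runs in a worker thread...
    const workerSrc = `
      const { parentPort, workerData } = require('worker_threads');
      const fib = n => (n < 2 ? n : fib(n - 1) + fib(n - 2));
      parentPort.postMessage(fib(workerData));
    `;

    function fibInWorker(n) {
      return new Promise((resolve, reject) => {
        const w = new Worker(workerSrc, { eval: true, workerData: n });
        w.once('message', resolve);
        w.once('error', reject);
      });
    }

    // ...but accepting and serving every connection stays on this
    // one event loop, which is the limit described above
    http.createServer(async (req, res) => {
      res.end(String(await fibInWorker(30)));
    }).listen(8000);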
And I have confirmed to my own satisfaction that both PM2 and worker threads have their own libuv threads. Yes, it’s very common in Node to run about one instance per core, give or take.