From the author's philosophical standpoint, I think this is indeed very interesting. I hate having to open multiple instances of the dev tools, since they are tied to the tab instance.
The DevTools team did the work a while back to pull everything they needed out of process and establish the devtools protocol to enable remote debugging.
What the author has done is to take the Chrome DevTools (which is already basically a standalone web app), and package it up with Node-webkit as a standalone app.
I think this is cool, not necessarily from a technical perspective (since the engineering work was mostly already done by the DevTools team for remote debugging), but from a usability perspective. A slight tweak to the UX changes the mental framing people have of what the Chrome DevTools actually is.
I'm also really curious about the future of Dev Tools and hope to see it applied to other uses. I think there is a range of tools we could be building on top of or around Chromium/DevTools/etc to provide real-time workflows for animations, visual programming, shader authoring, WebAudio editing, and plenty more.
Any post/tool that mentions "cross-browser debugging protocols" has my immediate +1.
Doing so from outside the browser (which gets me that much closer to staying in my IDE) has my immediate +100.
I understand the fancier side of debugging tools is still actively evolving (interacting with/highlighting DOM elements), but surely we're at the point where a basic JS debugger protocol (breakpoints, step over, step in, step out, etc., ...with source maps...) is doable.
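Concretely, the debugger half of the protocol is just JSON messages over a WebSocket to the browser's debug port. Here's a minimal sketch (in Node) of the two messages a client would send to set a breakpoint; the method names come from the protocol's Debugger domain, but the script URL is a made-up example and the WebSocket transport itself is omitted:

```javascript
// Build the JSON messages a DevTools-protocol client would send.
// The transport (a WebSocket to the browser's debug port) is omitted.
let nextId = 0;

function cdpMessage(method, params = {}) {
  // Each message carries a unique id so replies can be matched up.
  return JSON.stringify({ id: ++nextId, method, params });
}

// Turn the debugger on, then set a breakpoint by script URL + line.
const enable = cdpMessage('Debugger.enable');
const setBp = cdpMessage('Debugger.setBreakpointByUrl', {
  lineNumber: 42,
  url: 'http://example.com/app.js', // hypothetical script
});

console.log(enable);
console.log(setBp);
```

An IDE that speaks these few messages (plus `Debugger.paused` events coming back) already has the core of a breakpoint debugger, no browser UI required.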
How scalable is Node these days, considering that JavaScript has no real multi-threading support, so most of the things you do for one user in Node end up blocking other users?
I know it is possible to run things in different processes, but it seems to me that this is kind of a hassle (like a workaround).
Most of the things you do for a user are not blocking; that's the whole point of Node. Instead of blocking, almost everything is done with asynchronous callbacks.
You are not blocking other users: the CPU is busy and couldn't do other work even if it tried. By running one process per core (plus a spare core for "other stuff") you get full CPU utilization. You are right that Node is not the right tool for large CPU-bound tasks as part of HTTP request handling. If that was your point: yes, that's true, and it won't change. But if most of what you're doing is farming work out to other services plus some light assembly (JS is a scripting language, after all), then Node is a very nice fit.
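A tiny sketch of what "not blocking" means in practice, with setTimeout standing in for real async I/O (a DB driver, fs, an HTTP call):

```javascript
// While the (simulated) I/O is pending, the event loop stays free
// to run other work; nothing sits idle waiting on the call.
const events = [];

function simulatedIo(callback) {
  // Stand-in for a real async call (fs, http, a DB driver).
  setTimeout(() => {
    events.push('io finished');
    callback();
  }, 10);
}

simulatedIo(() => events.push('callback ran'));

// This runs immediately: the pending I/O did not block it.
events.push('other work');

setTimeout(() => console.log(events), 30);
// → [ 'other work', 'io finished', 'callback ran' ]
```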
But what if your tasks need to talk to other systems that use mutexes to control concurrent access? In that case, your HTTP request handling logic will just block on those mutexes, and all is lost, since the event loop will be stuck.
Now, you could of course rewrite those third-party systems to use your event loop instead of plain mutexes, but the point is that threads are a more natural fit for concurrency from a software-engineering point of view, because they let you compose systems more easily.
In other words, threads have been invented for a reason. Otherwise, we might as well just go back to the Windows 3.11 era, with its cooperative multitasking model.
Your calls to third-party systems should go through a callback interface; they should not block inline. The event loop will continue.
There are plenty of ways to process out-of-band requests... you could use a generic-pool with a size of one if you really need to. However, if you are really limiting access to a single client at a time, it's likely Node won't be your bottleneck.
A CPU core can only do one thing at a time, even with threads; threading just hides that fact from you better.
In an app that grabs a page from a database and returns it, the node.js app will receive a request from one user, make a request to the database, receive a request from another user, make a second database request, then wait for either database request to return, return that result to its user, and when the other returns, return that to its user.
Your program doesn't deal with one whole request at a time, it chops it up into lots of little "wait for this event" parts and node.js has an event loop that listens for events, which can happen in an arbitrary order. So you can be waiting for hundreds of events to happen for hundreds of requests, and as node.js doesn't make you wait for the first one to process the second, nobody's holding up anyone else.
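The interleaving described above can be demonstrated directly; here fakeDbQuery is a stand-in for a real async database driver:

```javascript
// Two "requests" arrive back to back; neither blocks the other,
// and completions come back in whatever order the I/O finishes.
const order = [];

function fakeDbQuery(name, delayMs, callback) {
  // Simulate a database round-trip taking delayMs milliseconds.
  setTimeout(() => callback(null, `result for ${name}`), delayMs);
}

fakeDbQuery('user A', 20, (err, res) => order.push(res));
fakeDbQuery('user B', 10, (err, res) => order.push(res));

// The faster query finishes first, regardless of arrival order.
setTimeout(() => console.log(order), 50);
// → [ 'result for user B', 'result for user A' ]
```

User A's slow query never holds up user B, even though everything runs on a single thread.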
Threading does nearly exactly this automatically, except that you need memory per thread to maintain the illusion that each thread has complete control over the CPU (its own stack, etc.), and threading can also split up CPU-bound workloads automatically. node.js needs much less memory per event it's waiting for.
In thread-per-connection server applications, the concurrency is exactly the number of threads spun up for the pool of connections. Node achieves the same concurrency without a thread pool for the connections themselves, so you get less resource usage (mostly memory) for the same number of connections as the daemon scales up.
Node's real draw, aside from pretty easy concurrency, is the package ecosystem.
Not to mention the overhead of switching threads: saving and restoring context through the memory hierarchy (caches, RAM, even disk if you're swapping) is pretty expensive, and when you get to thousands of threads per core, the context switching becomes very costly.
Some modern systems use thread pools and manage state switching internally to avoid this overhead at the CPU layer; it is costly all the same.
I worked on a simulator a few years ago that was written with many threads (each virtual character in play had its own worker thread). This led to a much lower number of users per system than should have been possible; switching to an event loop and message bus reduced the overhead significantly.
FYI, to run Chrome in remote debugging mode on a Mac:
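The usual way (assuming the standard `--remote-debugging-port` flag) is to launch the binary directly:

```shell
# Start Chrome with the DevTools protocol listening on localhost:9222.
/Applications/Google\ Chrome.app/Contents/MacOS/Google\ Chrome \
    --remote-debugging-port=9222
```

You can then point a browser (or any protocol client) at http://localhost:9222 to see the debuggable targets.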