Personally, I view Node.js as a step backwards. It's great to provide the idea of a single thread holding multiple connections, with high-performance non-blocking I/O within each thread; the problem is that, in their case, there's also a single thread per UNIX process, with no primitives for communication between processes (unlike, e.g., Erlang/OTP). If you want good performance, you want an event loop per logical core. Completely independent processes may be acceptable for simple, stateless web apps, but for most anything else you risk losing a great deal of efficiency without the right primitives. For example, if you have on-disk state, then unless the threads can share memory (either by running within the same process or via UNIX shm), you risk a situation where threads compete for resources like the OS page cache (assuming you're not doing your own in-process caching with direct I/O; but even then, you've lost the ability to share a resource between any two connections without copying).

Some, e.g. Asana, have added fibers to JavaScript (in their case, to V8): http://asana.com/blog/?p=49

The previous post from RethinkDB is quite interesting in terms of motivation for coroutines: to me, this superficially resembles SEDA. Here, instead of stages in a pipeline (a thread pool per stage, the first stage processing events from epoll/kqueue, a thread per core in each pool, each thread holding state machines, communication between threads via queues), you use coroutines for clearer code.




I don't think you want an event loop per core; that's back into the realm of concurrent madness, and I don't think that's how Erlang works (Erlang folk: correct me if I'm wrong on that).

It's not a bad thing that data has to be copied to be shared between Erlang-style processes. That is part of the reason why things "just work" when you move that process to another machine, or data centre. If the implementation can be smart and "cheat" by sharing data in the same process that's fine, but it is an implementation detail and not a property of the system.

The way Erlang works is that you have a single event loop, and threads are spawned to handle I/O and such (the N:M threading model: N green threads mapped onto M OS threads). When a process blocks, its execution is suspended.

Node employs an N:M threading model as well, except "processes" in Node are just functions. One big difference is that if you make a blocking call in Node, the whole process is blocked. There's only an event loop; no scheduler or anything that OS-like is involved. The Erlang model is clearly superior, IMO, but Node is far more accessible (for better and for worse).


> I don't think you want an event loop per core, that's back into the realm of concurrent madness, and I don't think that's how Erlang works (Erlang folk: correct me if I'm wrong on that).

If you don't care about performance, sure (there's nothing wrong with not caring about performance — e.g., low-volume web applications, where server-side JavaScript can shine, IMO). However, I can give you a command line to run that will plot a very nice graph of throughput vs. number of selector threads in a specific system I've been working on.

Yes, it will bring you back to "concurrent madness". What you _want_ is primitives that make it possible to deal with this madness, not to handwave it away. In Erlang, those are the actors themselves, optimized for efficient delivery of messages within a single node and remotely, supervision of processes that fail, ETS, Mnesia; in Java, they're Doug Lea's beautiful concurrent collections (java.util.concurrent). You're assuming multithreading means pthreads or synchronized/notify/wait (i.e., Java before java.util.concurrent).

Note that there are two models for this: one is Erlang's, as well as traditional UNIX IPC — message passing; the other is shared-memory multithreading with concurrent and lock-free collections (Java), which also goes nicely with the idea of minimizing mutable state (Haskell, Clojure, Scala). One is good for applications that optimize for worst-case latency (Erlang shines there); the other for those that optimize for average throughput (the JVM shines there).


I'm getting an "Error establishing a database connection" on that post. Kind of ironic.

I agree about your point regarding IPC, but that's something they can always add later. It's unfortunate that they've gone the callback route instead of innovating like Asana has done with coroutines.


> I agree about your point regarding IPC, but that's something they can always add later.

Add some form of IPC as an afterthought? Sure. Create powerful primitives for IPC as done in Erlang/OTP? No.


Agreed again.


Thought I'd chime in and mention em-synchrony: EventMachine (reactor loop) + Fibers (coroutines).



