Interview with Ryan Dahl, Creator of Node.js (mappingthejourney.com)
393 points by WhiteSource1 on Aug 31, 2017 | 228 comments



A serious problem with Node was the callback pyramid of doom.

This means deeply nested callbacks, such that the indentation wanders steeply to the right. When you do loops or have exceptions, programming gets very confusing. In effect you have to do a continuation passing transform by hand. It's not difficult, but the results are utterly unreadable. For example, a simple for loop gets transformed into a recursive function, because that way it is easier to keep state between callbacks. It's a nightmare.
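
To make the transform concrete, here is a minimal sketch; asyncStep stands in for any hypothetical callback-style operation:

  // Synchronous version: for (var i = 0; i < n; i++) doStep(i);
  // Callback version: the loop becomes a recursive function so that
  // the counter `i` survives between callbacks.
  function asyncLoop(i, n, done) {
    if (i >= n) return done(null);
    asyncStep(i, function (err) {   // hypothetical async operation
      if (err) return done(err);    // manual error propagation at every level
      asyncLoop(i + 1, n, done);    // "next iteration"
    });
  }
  asyncLoop(0, 10, function (err) { /* loop finished, or failed */ });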

What I found surprising in the early years of Node was that some people on the mailing list had a «Real programmers don't use Pascal» attitude.

Bruno Jouhier developed Streamline to simplify and automate the continuation passing transform. One could almost program in a «blocking» style. I found his work really amazing. However, Bruno was attacked on the mailing list for it.

Then Marcel Laverdet developed node-fibers. Fibers, coroutines and generators are a different but equivalent way to solve the callback pyramid; with them we don't need the continuation passing transform. But the reception on the mailing list was lukewarm at best. Anyway, the developers of Meteor saw the potential and decided to base the server side of their platform on fibers.

And now JavaScript has generators and async/await. And lo and behold: with these official solutions the hard-core programmers were swayed and accepted «Pascal».

In my opinion, Ryan Dahl missed an opportunity. Node was by and large not ready or finished when he left. He should have tried to convince the community to find a solution for callback hell.

I think I understand people like him. They are always on the lookout for fresh ideas.


This was easily foreseeable when he started on Node.js. The Twisted framework for Python had existed for years (since 2002) before Node.js was created. Anyone familiar with programming in that style knew of the issues with "callback hell".

I was the primary instigator of generators for Python. Part of my motivation was to do something like CSP (e.g. call-with-current-continuation, call/cc) without having to redo the whole language runtime and without killing off alternative Python implementations.

Generators in Python are limited to a single stack level. They have been extended to better support async IO coding. They work and are probably nicer than using callbacks but are still pretty ugly IMHO. The message passing that Erlang does is the best solution to this kind of concurrency. You can't really bolt that onto the side of an existing language though.

Anyhow, I think Node is successful largely because Javascript is the language understood by web browsers. The non-blocking coding style is not some kind of magic bullet as Ryan originally suggested it was.


Somehow I never experienced callback hell.

It always felt like every other code-nesting problem to me.

I mean, nobody talks about conditional hell, everyone acknowledges that you should flatten out your conditionals.


I'm inclined to agree. To me callback hell is a symptom of lazy, or perhaps just ignorant, programming. It was never a necessity: if things are getting out of control, create a named function and pass in the name as the callback to make things more readable.


When starting with Node.JS I did experience callback hell for about half a year, before I found a good tutorial on how to manage callbacks and it finally clicked. Since then, managing callbacks has become second nature. The magic trick is to use named child functions instead of anonymous and self-calling functions, so you can take advantage of closures.

I find programming this way even easier than writing in a serial, non-async language, because of the heavy use of function state instead of global state. And I love that in Node.JS modules are not just an "include file" but actual objects that work just like any other object and can be used in function scope and closures. So instead of importing a bunch of variables into the top scope, you can require them locally, and just by looking at the function you can see where all the variables come from; you don't have to know about the outside world to understand what the function does!
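
For example, a minimal sketch of that style (getUser and getOrders are hypothetical callback-style APIs):

  function loadProfile(userId, callback) {
    getUser(userId, onUser);   // named child function instead of an anonymous one

    function onUser(err, user) {
      if (err) return callback(err);
      getOrders(user.id, onOrders);

      function onOrders(err, orders) {
        if (err) return callback(err);
        // `user` is still in scope here thanks to the closure
        callback(null, { user: user, orders: orders });
      }
    }
  }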


In case you are just referring to the browser, callback hell is way more invasive in nodejs than in the browser.


Callback hell was real, especially if you worked on projects that decided to join the node.js bandwagon in its early days. Then promises happened.

https://twitter.com/winterbe_/status/541868081308790784


Why post a contrived meme as an example? Especially one with a sane solution[1] just below.

[1]: https://pbs.twimg.com/media/B4W2AmaCAAA-heo.png


It's an exaggeration, yes, but it does correctly summarize how I felt when I had to work with node.js code bases before stuff like promises became mainstream.


An exaggeration could be written for any language. A decent programmer would not write code like the one you showcased as an example for callback hell.

This https://docs.spring.io/spring/docs/2.5.x/javadoc-api/org/spr... is not an exaggeration and I still would not use it to showcase how I felt when I had to work with Java developers.


When green threads, fibers etc. were a hot topic in the node.js community I was pretty involved in core development.

As far as I remember, we were never so much against the idea of it, but we didn't believe an actually good implementation was possible.

The primary concern was scalability. At the time supporting high concurrency networking (10K+ connected clients) was a top priority for us.

The solutions proposed by Laverdet and Jouhier used libcoro or fibers to achieve the desired "cooperative threading"; this may not involve actual OS threads, but it does require the creation of a thread stack for every fiber. To handle 10K connections you would need 10K stacks; at a typical 4 MB per stack, that's 40 GB of memory right there. Too much.

We were aware that there were more efficient ways to do it, but this would have required big changes to the V8 JavaScript engine. We didn't have the resources (nor the skill) to get it done and take on the maintenance burden.

There were also concerns about the surprising language semantics it created: suddenly callbacks might run "inside" a function, i.e. after it's been called and before it has returned, which is not something a JavaScript user would normally expect to be possible.

Nowadays with async/await the callback-hell problem is actually solvable (to the extent that node-fibers solved it, anyway). It'll take time though before that becomes noticeable, because a lot of packages need to change their APIs to take advantage of it.
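
For illustration, a minimal sketch of the async/await style, using util.promisify (Node 8+) to adapt a callback API; the file name is just an example:

  const { promisify } = require('util');
  const fs = require('fs');
  const readFile = promisify(fs.readFile);  // callback API -> promise API

  async function printConfig() {
    // suspends here without blocking the event loop, no nested callback
    const text = await readFile('config.json', 'utf8');
    console.log(JSON.parse(text));
  }

  printConfig().catch(console.error);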


Just a small heads up: Bruno Jouhier's solution is a transpiler just doing the continuation passing style transform. In other words, one could program in a synchronous style and the code got transformed to callback style. Later he added features to be compatible with fibers.


When your code wanders steeply to the right, then it is time to create named functions, variables or otherwise flatten it. Just like when the code nests too much for any other reason.

But I agree that js attracted "Real programmers don't use Pascal" kind of programmers at first. Pascal-style programmers perceived js as pure hell and treated it as an abomination. Though, I am not sure whether they were swayed or rather moved elsewhere in disgust.


But that's the thing. With callbacks, code that is logically the same (except that it's async) and wasn't nesting too much before was suddenly nesting too much after.

And conversely, you can reduce nesting to, basically, zero in any imperative language, if you replace all your conditionals and loops with a bunch of labels and gotos.

Which is to say, how much code nests is very much a language design issue. Good PL design should not cause unnecessary nesting where it's not warranted by the structure of the solution.


Your view is that the Node community finally "saw the light", but the other interpretation is that they buckled under pressure.

I love callbacks. The best thing about them is that you always know precisely what will happen when a line of code runs. Not so with promises.

Another nice thing is that you always know which line of code will run next. Not so with async.

I see async in the language spec (and class too) as a concession made to people who couldn't be bothered to learn how to use JavaScript the way it was intended. Like burgers on the menu at a Mexican restaurant.

And no judgement against them: I'm happy for them that they got the ruby-like JavaScript they always dreamed of.

But I wish those people would be a tad less boastful and understand that they turned people like me into exiles in our own language community. I used to have a language that promoted small single purpose modules with functional interfaces. That's gone. It's mostly synchronish OO interfaces now. And bigger and bigger frameworks. My folk are going to have to basically move out. I don't see an alternative at this point but to fork NPM.


Finally :) I've been arguing for fibers, coroutines and generators with non-blocking IO as an alternative to bending everything backwards to fit in event loops for a long time. And begging for salvation from ever dumber and clumsier programming languages and more ignorant hipster programmers. I ended up writing my own language to investigate further since no one else seemed to be interested in stepping far enough out of the box.

https://github.com/andreas-gone-wild/snackis/blob/master/sna...


Many are forgetting the initial reason node.js became popular. Consider the popular server-side landscape before node.js. It was dominated by Java, Python, etc. The primary way these ecosystems handle concurrency is OS-level threading. There was nothing else in the popular languages. Each language had some niche library that did non-blocking I/O (Twisted for Python, Netty for Java), but these all had a critical flaw: the rest of the ecosystem didn't support them. Basically every library, almost all existing code, used the threading model. And when you mix threaded libraries with a non-blocking I/O server, it completely falls apart, because the threaded code blocks.

Then came node.js. Look at the ecosystem it came into. JS itself has very little standard library, and nothing for I/O. It has a highly optimized VM supported by the resources of Google. It's a language known by many developers. node.js took this well-developed clean slate and built a non-blocking I/O server on top of it. It offered a concurrency model on the server side that was completely new to many developers, an alternative to the traditional threading model everyone knew. Since server-side JS didn't exist yet, it forced all libraries to be written in this non-blocking way. Every package in NPM is written to support the core design of node.js, which is non-blocking I/O. And it was exciting to many developers; it was a reason to rewrite everything in this new way.


IMO, the biggest reason Node.js got popular was JavaScript: you just need to learn one language to do full-stack development. No need to deal with Python or Java or whatever. Being single-threaded is a side effect of using V8, which is mostly single-threaded. Since you only have one thread, you need IO to be non-blocking.


I disagree because Rhino and Narwhal were already around. Libuv and the event/callback approach to IO (which is the part that's familiar to front-end devs) are the secret sauce to Node.js.


Yeah, I think this is the primary reason it really had traction. There was definitely some initial hype around async and performance, but there are a lot of ways (especially now) to achieve that without Node. The real win for me is a familiar language with a huge community and package ecosystem on the server that you likely have to use anyways in the browser.


Agreed. And if it had just stayed at JS and Node, I wouldn't have minded. But then they had to go and inflict NPM on the world, and what should have been a tidy stack got idiotically messy.


On that note, coming back from node.js has been a really frustrating experience for me. I want to scream every time somebody suggests threading as a solution to non-blocking database calls and HTTP requests. It's made me painfully aware of how often parallelism is suggested in place of concurrency.

C# in particular does a great job at conflating concurrency and parallelism. Both fall under the same Task namespace and it drives me mad to no end. I've also seen a couple instances of Execute and ExecuteAsync both blocking because the underlying driver didn't support async, with no indication other than my async calls blocking.

It's quite different from the async-by-default sort of thing that's going on in the node.js world.


> I want to scream every time somebody suggests threading as a solution to non-blocking database calls and HTTP requests.

Rather than screaming, perhaps read this (from the developer of the Python world's most popular ORM) to see why they might be suggesting it:

http://techspot.zzzeek.org/2015/02/15/asynchronous-python-an...

tl;dr: If your database is properly set up and provisioned, it shouldn't be a source of blocking I/O (and if it is, throwing thousands of concurrent async connections at it isn't going to help you). For standard CRUD applications, your program will spend much more time in CPU processing the database response than it will waiting on the database to deliver it.

Furthermore, when querying a traditional RDBMS like postgres or mysql, safe concurrency is ultimately enforced by the ACID capabilities built into the database itself, i.e. transactions. Whether you're using async, threads, multiple processes or something else entirely doesn't matter. Given that the database itself controls how much concurrency it can support, real world tests (at least for Python) show that threads give you better performance than event driven callbacks.

Of course, threads are completely wrong for the HTTP side of things and this is where async (e.g. nginx or node) offers all of its advantages. That doesn't mean it's wrong to put your DBMS behind a thread pool.


A little tip from someone coming back into Java after using modern javascript / node.js lately: Java 8's CompletableFutures are basically Java's Promises, and they have good support for doing steps in specific thread pools. (Regular Futures are way too limited: if you want to do an action as soon as a Future resolves, then you have to dedicate a thread just to blocking on that Future, which is not exactly what you want if you have a lot of simultaneous asynchronous operations of multiple steps going on.)


But why? Go's model is perfect. Goroutines aren't real threads; it's basically very similar to Node, except that you can write code without a mess of callbacks and it's easier to reason about...


Go's model doesn't even statically prevent data races (as Rust's does). You might think Go's model is good, but it's a far cry from perfect.

Although I'll be happy to point out why Go's model isn't even good. Go's lack of ability to perform meaningful abstraction combined with the low level of its concurrency primitives means you end up having to copy/paste and slightly tweak dozens of lines of flow-control logic every time you use them.

This[1] is an excellent critique of Go's concurrency model.

[1]: https://gist.github.com/kachayev/21e7fe149bc5ae0bd878#so-wha...


Nothing's perfect. Go's is Erlang's with a shared memory model. That creates a lot of trade offs and limitations. In some places, it's a perk. In others, it's not.

No such thing as perfect, only trade offs.


Not just that. It can use multiple OS threads and multiplex the goroutines on actual threads so programs can actually leverage multiple cores.


Go's model breaks down as soon as you have to invoke into non-Go code, because the latter has no idea about goroutines.

This is a general problem with all "green thread" abstractions - they are not OS level, or if they are (as e.g. fibers are on Win32), nobody cares to use them.

Promises, on the other hand, can be mapped to a straightforward callback-oriented C API, thus allowing a fully async execution stack end-to-end across different languages. You can even go one step further and define a uniform API for all such callbacks, as WinRT did.


async await though


And no shared-memory CPU parallelism.


No threading bugs though.


Elixir does it very well, I think.


It's hardly the fault of the language if people are lying in their method signatures. I'm not aware of a mainstream PL with a type system that is powerful enough to avoid something like that.


In this case, the .NET-provided base class defaults to using the synchronous version when the async version hasn't been overridden.

https://msdn.microsoft.com/en-us/library/system.data.common....

> Providers should implement this method to provide a non-default implementation for ExecuteReader overloads.

> The default implementation invokes the synchronous ExecuteReader method and returns a completed task, blocking the calling thread. The default implementation will return a cancelled task if passed an already cancelled cancellation token. Exceptions thrown by ExecuteReader will be communicated via the returned Task Exception property.

Found that after several hours of tearing my hair out trying to figure out why my async calls were still blocking...


This is a bug in the library. The correct way to do these "async but not really" wrappers is to call the synchronous implementation on a separate thread pool worker thread, and wrap that into a task (as Task.Run does). This is hardly optimal perf-wise, but it is truly asynchronous wrt the caller. If I remember correctly, it's what Stream does for its default implementations of async methods.


> Many are forgetting the initial reason node.js became popular. Consider the popular server-side landscape before node.js. It was dominated by Java, Python, etc. The primary way these ecosystems handle concurrency is OS-level threading. There was nothing else in the popular languages.

What? I am sorry but this is not true.

Coroutines have been a concept for...well a long time:

https://en.wikipedia.org/wiki/Coroutine

Green threads also have been around for a while.

> but these all had a critical flaw: the rest of the ecosystem didn't support them. Basically every library, almost all existing code, used the threading model. And when you mix threaded libraries with a non-blocking I/O server, it completely falls apart, because the threaded code blocks.

I am not sure how you can say this while using Python as an example. Python has the GIL which, typically viewed as a limitation, actually prevents the situation your assertion suggests. In fact, one of the largest barriers to removing the GIL is the wealth of libraries and C-extensions to Python that implicitly rely on the thread safety guarantees/limitations of the GIL (as covered by Larry Hastings as part of his first Gilectomy talk).

To be clear, there are reasons Node became popular, but let's not pretend that it brought some sort of concurrency revolution to the "server-side landscape" in 2009.

Edit: Heck, NGINX was an asynchronous, event-driven web server released in 2004 and written in C:

https://en.wikipedia.org/wiki/Nginx


>Coroutines have been a concept for...well a long time: ... Green threads also have been around for a while.

What web frameworks were built around using them? Was there a good ecosystem of libraries that were easy to use with the web framework without introducing blocking I/O?

>I am not sure how you can say this while using Python as an example. Python has the GIL which, typically viewed as a limitation, actually prevents the situation your assertion suggests. In fact, one of the largest barriers to removing the GIL is the wealth of libraries and C-extensions to Python that implicitly rely on the thread safety guarantees/limitations of the GIL (as covered by Larry Hastings as part of his first Gilectomy talk).

Aren't all python threads backed by OS threads? And pointing out that threading has been used by too many Python libraries to change supports the previous comment's assertion that non-blocking I/O was hard to use exclusively in languages like Python because too many libraries used blocking I/O.

>Edit: Heck, NGINX was an asynchronous, event driven web engine released in 2004 and written in C:

Nginx is pretty specialized. Not really sure it's useful to compare it to a programming language / web framework with an ecosystem of compatible libraries like PHP, Ruby on Rails, django, nodejs, etc.


I often wonder why Python's gevent didn't become more popular. It would monkey patch the socket libraries to yield to an event loop, allowing you to magically use blocking code and libraries with practically zero code changes, and zero callback hell. I imagine it had to do with marketing - when Node was released, everyone thought the spaghetti horror that was Twisted was the only other event-loop-based game in town.

Ironically, now that we have async/await to replace nested callbacks, Node finally has the code readability that gevent users took for granted all this time.


> I often wonder why Python's gevent didn't become more popular.

Quite honestly, it wasn't very good. It worked magically until the magic ran out. Mysterious breakages caused by stuff deep in gevent interacting badly with other libraries hampered my ability to use it effectively. For instance, you couldn't use it with python-requests directly; you had to use another library that makes requests compatible, unless you rolled your own magic, which was often very complicated. And, at least in my experience, gevent stuff tends not to work with Python 3.

Now that asyncio is a thing, I'm glad I never have to debug gevent-related problems anymore.


It should also be noted that there's another statically typed language that provides multiplexed M:N threading (aka "green threads") and a programming style that looks synchronous while actually using asynchronous file and network IO behind the scenes (so all of the perf benefits of JavaScript without the awkward, nested callbacks and/or promises).

The language I'm referring to is none other than Haskell (which has been around in some form since 1990).

Here's the earliest paper I know of detailing the internals of the Glasgow Haskell Compiler's IO event handling system (from 2010, I believe): https://static.googleusercontent.com/media/research.google.c...

And here's a more recent update: http://haskell.cs.yale.edu/wp-content/uploads/2013/08/hask03...

(Just in case of link-rot, the titles are respectively: "Scalable I/O Event Handling for GHC", "Mio: A High-Performance Multicore IO Manager for GHC")

Here's a short example:

  main = do
      putStrLn "Hello, what's your name?"
      name <- getLine
      putStrLn ("Hey " ++ name ++ ", you rock!")
Unfamiliar syntax aside, there's not a callback in sight -- it looks just like standard, imperative, synchronous code.

It drives me crazy when people tout Node's approach to concurrency as being ideal, as if callbacks are a necessity.

They're not.

Haskell, among other languages, is an existence proof that you can have lightweight threads and asynchronous IO without callback spaghetti. And I haven't even gone into Haskell's threaded parallelism, Software Transactional Memory, etc.


You're still missing the point. It's not that it was possible to do so. It's that you could quickly write web services idiomatically that did so by default. Your example shows a simple hello world and doesn't speak to the point being made. I am sure you can get a more relevant example going in Scotty, but it appears that it didn't show up until 2012.


> You're still missing the point.

No, I'm not missing the point.

> Your example shows a simple hello world and doesn't speak to the point being made.

Hmm... Let's do an analysis:

* My example demonstrates reading from a given file descriptor (FD 0 -- stdin -- in this case), and that reading happens asynchronously (as I had pointed out, and even provided white papers describing how this works). Again, this is despite the syntax looking like a naive, synchronous program. The GHC Run Time System handles asynchronously reading from the FD and scheduling threads as appropriate such that OS threads aren't tied up waiting on what would otherwise be a synchronous read(2).

* As a bonus, it also demonstrates writing to standard out.

Now, one could take exception to the fact that my example doesn't give any evidence that similar asynchrony should be expected in a web stack. One would be wrong: as anyone with, say, as little as a year of experience writing web applications on POSIX systems should know, the Berkeley sockets API exposes sockets as file descriptors, so the same generic async event handling system in GHC's RTS applies equally well to networked IO. GHC and Node (via libuv) use similar primitive kernel APIs at the end of the day, the only difference being that Haskell doesn't require that you reify the callbacks in the source language.

Idiomatic? Check.

By default? Check.

So no, I think I do get the point, but if you still disagree, I'm all ears.

If my tone sounds harsh, it's because I spent the time to provide links to two papers and even gave a small example of what the syntax would look like for a minimal program that demonstrates IO. In turn, you're telling me that I'm missing the point. I would argue that, if you read either of those two papers, and you knew about the implications with respect to networked IO (if you didn't, you could easily research the Berkeley sockets API, kqueue(2), poll(2), etc -- all mentioned in the papers), you would see what I'm getting at with my example code and the papers referenced. I can only posit that you haven't read the papers, or if you did, you must have not done so in earnest. Hopefully you can see how I would be frustrated when I've put in time to research this space, and when I share something, I'm met with what (in appearance) amounts to a dissimilar level of diligence and, ultimately, an empty dismissal.

If something else is going on, I would be more than happy to issue an apology.

> I am sure you can get a more relevant example going in Scotty, but it appears that it didn't show up until 2012.

Open source web frameworks for Haskell go back to (at least as far as) 2009.

Happstack (2009): https://hackage.haskell.org/package/happstack-server-0.1

Yesod (2010): https://hackage.haskell.org/package/yesod-0.0.0

Snap (2010): https://hackage.haskell.org/package/snap-0.3.0

I didn't provide a "hello world"-like example of a web app because my command-line example only requires a trivial inference on the part of the reader to realize that the same asynchronous treatment of FDs there applies equally well to sockets, and thus web app programming in general; I deemed a full fledged web app example as more complicated than necessary to get my point across.

For what it's worth, here's an example in Happstack, straight from the documentation:

  module Main where
  
  import Happstack.Server (nullConf, simpleHTTP, toResponse, ok)  

  main :: IO ()
  main = simpleHTTP nullConf $ ok "Hello, World!"
And ...

  $ curl http://localhost:8000/
  Hello, World!
Also, a note on tense:

> It's that you could quickly write web services idiomatically that did so by default.

I wasn't questioning the rationale for using Node back in 2009. My point is this: eight years have passed since Node was released, and alternative solutions have since been released. Somehow the Node community (and the greater web dev community in general) are oblivious to these alternative approaches.

I suspect this is because most web devs see the callbacks as evidence that IO is asynchronous in Node, and at the same time lack the requisite experience to intuit how another language might implement asynchronous IO while providing a syntax that looks like the usual synchronous model. Now, they could actually read the research papers put out, but they don't, and so everyone keeps going on about the technological revolution that is Node and its approach to IO, oblivious to alternative solutions that (arguably) provide a more convenient programming model (and is, if we throw in support for multiple threads, more computationally powerful, too).

Such laziness and/or lack of curiosity is pervasive in the industry, and that's frustrating.


Your assertion is essentially that an example of reading and writing from/to a file descriptor shows everything needed for all aspects of a networked application. I think that is absurd. And unless you have a multithreaded multiuser shell, there is not even really a way to empirically test your program without going through absurd hoops. And in fact, if you run an old enough version of Haskell, or a non-GHC implementation, it might not be multithreaded/evented.

Your indictment that the reason people aren't using your language of choice is laziness is ironically lazy. There are plenty of practical reasons for using different languages and tools.


  // Sends `message` to every client in parallel and calls `callback`
  // once all sends have completed; errors are logged, not propagated.
  // (Note: callback is never called if toClients is empty.)
  function sendMessage(message, toClients, callback) {
    var messagesSent = 0;
    for(var i=0; i<toClients.length; i++) sendTo(toClients[i]);

    // Named child function: the closure shares `messagesSent`
    // across all the per-client callbacks.
    function sendTo(client) {
      client.send(message, function messageSent(err) {
        messagesSent++;
        if(err) console.warn(err.message);
        if(messagesSent == toClients.length) callback();
      });
    }
  }


> Edit: Heck, NGINX was an asynchronous, event driven web engine released in 2004 and written in C:

In the interview, he mentions himself that Node was inspired by Nginx.


Remember the "C10K challenge"? Show me what else could achieve those ten thousand concurrent connections back then (hint: no python).


Erlang easily achieved the C10k challenge back then.


Node.js brings callback hell to the serverside world of threads and we're supposed to be grateful? It's like you don't know the history of computing. Callback-based code was the original programming model in UNIX dating back decades! Threaded code came about because it's far easier to write it than callback-based code.

The ONLY reason to do callback-based code is for performance reasons: for network-IO-heavy apps there is less OS-level overhead from managing thousands of connections with a few threads using the OS-provided non-blocking I/O operations than managing a connection per thread. Even then, if you actually want to do better than connection-per-thread, you have to avoid the POSIX API for non-blocking I/O and use whatever OS-specific API is available for better performance (epoll, kqueue, or completion ports).

The greatest joke of all though is that Node.js is dirt slow because Javascript is dirt slow. Yes, you may be able to invent some microbenchmark where Node.js looks good, but the only reason it does in that benchmark is because the Node.js app is spending 99.9% of its time executing code in a C library. Any real-world Node.js app is going to be slow because Javascript in general, even on V8, is anywhere from 10-100x slower than C or even well-written Java.

And worst of all, Node.js is single-threaded! If you want to scale onto all your cores you have to go multi-process, which is an incredible pain in the general case. So, you switch to a runtime and framework that forces a more difficult way of programming on you (callback hell) but which is better for general networking performance, but you use a dirt-slow, single-threaded runtime while doing so. You are basically ending up with the worst of both worlds, and any decent development team will run circles around you in performance and productivity with just about any other backend stack.


V8 is not dirt slow. You may dismiss benchmarks where Node looks good as "invented" but there are countless real world examples of people moving from other languages to node and experiencing a huge speed increase.

I know of one startup that had a Fortune 500 company sign up to their service, and their dotnet APIs couldn't hold the load and the cost of scaling them wasn't sustainable. They rewrote the hardest-hit APIs in node over a weekend and were shocked at the improvements. I also know another startup that had a similar experience; they moved off of Ruby to node and were shocked at the speed increase they were getting. They couldn't keep up with their competition on their old codebase.

Despite what I just said above, speed isn't necessarily the most important factor. Infrastructure is cheap and developers are expensive. If I'm asked what the best programming language or platform is, my answer is usually the language and platform that your team is most productive with. Only a few companies get successful enough that speed is the deciding factor in platform choice; if we say that productivity is the most important factor, then the JS/Node combination is right up there as one of the more understood and productive systems.


You are spot on as I have been developing in node for the past 2 months. Coming from doing python and c# in the past, I picked up the asynchronous nature + promises and their "gotchas" within a few days of writing and rewriting an activemq module. All of our other requirements/stories have been developed at such a high speed relative to my past projects. Combined with docker and a QA team that writes automated tests with postman and protractor, our continuous integration is at a high maturity level.


I should definitely make a macro at this point:

The callback paradigm has not been used in node development for 3-4 years at this point.

    // Note: Promise.join and .spread below are Bluebird extensions,
    // not part of native ES6 promises.
    return someHttpCall()
    .then(response => someOtherHttpCall(response))
    // Do 2 async operations in parallel:
    .then(response => Promise.join(
      someCacheThing(response),
      someDBThing(response)
    ))
    .spread((cacheResponse, dbResponse) => sendBackToClient())
    // A single .catch at the end of the chain handles errors from any step:
    .catch(err => handleError(err));  // handleError is a placeholder
This isn't callback hell. Without creating or managing threads, I have established serial logic as well as an example of branched concurrent logic that is entirely nonblocking. Every promise behaves like a try/catch, so error handling can be done at the end of the chain with a single `.catch`.

A clustered express application puts the performance well within what you would expect from Go/Phoenix, all of which operate an order of magnitude above Python/Ruby.


I'm well aware of promises, thank you. I'm also well aware that there's very little difference in reality between a promise-based style of coding and a callback one. You still lose stack continuity, it's still much harder to chain operations that hop stacks, it's still a nightmare to debug compared to stack-based programming, etc.

If you've ever actually implemented a promise it's even easier to see how little promises actually get you. Given a callback-based API, it's absolutely trivial to wrap it in a Promise-based API. Fundamentally this is because promises give you very little in the end on top of callbacks beyond some syntactic sugar.
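
To illustrate just how trivial: a minimal sketch of such a wrapper for Node-style error-first callbacks (roughly what util.promisify does):

  function promisify(fn) {
    return function (...args) {
      return new Promise(function (resolve, reject) {
        fn(...args, function (err, result) {  // error-first callback convention
          if (err) reject(err);
          else resolve(result);
        });
      });
    };
  }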


First of all, I have absolutely implemented promises from scratch, and it IS trivial. However, "callback hell" is a reference to writing code with deeply nested callbacks, particularly with manual error handling at each level, both of which promises completely negate.

If you want to change the topic from "callback hell" to debugging across stacks, then we can do that. There was some work with "domains," since moved to AsyncListener/Continuation Local Storage to keep context across stack calls, but I don't disagree that improvements in this department are the place that NodeJS paradigm would currently benefit from most.


It's still "callback hell" because you're still fundamentally writing code where you invoke an API and pass in a callback to handle the results. The fact that the callback you pass in happens to be a promise that allows you to stage other callbacks and/or promises doesn't really change things enough for the label "callback hell" to be inaccurate in my opinion.

A different programming paradigm, for instance, would be the use of delimited continuations, such that the API you invoke suspends the current call stack, executes the I/O operation, then resumes the call stack with the result of the I/O operation. Very few languages support that. One reason is that natively supported continuations in a programming language have far-reaching consequences in terms of performance.

I'm not an expert on golang, but I believe this is essentially how goroutines are implemented: https://groups.google.com/forum/#!msg/golang-nuts/t3g8NbITYE...
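
For what it's worth, JavaScript generators can approximate this suspend/resume shape in user space. A minimal sketch of a co-style driver (the real co library handles more edge cases; fetchUser below is hypothetical):

  // run() drives a generator that yields promises: each `yield` suspends
  // the generator's call stack, which is resumed with the awaited result.
  function run(genFn) {
    const gen = genFn();
    return new Promise(function (resolve, reject) {
      function step(value) {
        const r = gen.next(value);
        if (r.done) return resolve(r.value);
        Promise.resolve(r.value).then(step, reject);
      }
      step(undefined);
    });
  }

  // Usage: reads top-to-bottom, runs asynchronously.
  // run(function* () {
  //   const user = yield fetchUser(1);
  //   return user.name;
  // });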

Also keep in mind that I'm not 100% against callback-based programming. I have, for instance, implemented a non-blocking client for a custom TCP-based protocol using Netty. The client performed better than a blocking version that it was meant to replace. I played around with using Promises in conjunction with the library but ended up preferring callbacks to Promises for a number of reasons, some of which are specific to the JVM.


> A different programming paradigm, for instance, would be the use of delimited continuations such that the API you invoke suspends the current call stack, executes the I/O operation, then resume the call stack with the result of the I/O operation.

Guile has recently gotten guile-fibers, which works like this. It is really an amazing way to do concurrency.


Callback hell may as well be synonymous with stack incoherence. The hell arises from not knowing from whence you came.


>>> I'm also well aware that there's very little difference in reality between a promise-based style of coding and a callback one.

Then you're not using promises as they are intended to be used.


How do you get stack continuity in anything that is multithreaded unless the debugger adds special support for it?


By programming using synchronous blocking code within a single thread, of course.


How about that await though.


Promises made some very valuable improvements to code patterns relative to callback-based implementations, but as a broader "nodejs vs synchronous-style code" comparison, it's still much harder. You sound pretty versed, but other folks might struggle. Here is an example explanation:

https://pouchdb.com/2015/05/18/we-have-a-problem-with-promis...

If we're trying to show the simplicity of modern nodejs code with non-blocking I/O, I would suggest we lead with examples like:

  const foo = await queryFooObject(id);
  await writeToFile(foo);

True, there's still plenty of value in promises, especially for things like:

  await Promise.all(severalParallelQueries);


As with any tool, there is a right and wrong way/time to use it. Node.js would not be the monster success and influencer it is today if it were the 'worst of both worlds'. For example, using a single-threaded environment can solve so many issues that are present in a multi-threaded environment, such as reducing the possibility of race conditions (though you can still write crappy single-threaded code that introduces race conditions). When used in the right way, it can be and is best in class. In other instances, it is a boat anchor. Just my 2 cents.


So if I don't like Java and C, what's your suggestion I should use instead to get the best of both worlds and not the worst?


Any language that uses the actor model for concurrency, which is basically every language created in the last decade: Elixir, Go, the Akka framework for Java and C# (Akka.NET), Pony, Dart, Erlang, Elm, Scala, ...


F#, with its built-in actor model, robust OCaml-based syntax, great support for functional Akka[.NET] actors, simplified concurrency and async programming support, and its very own Elm implementation, is a great contextual answer for anyone who wants to be on the front lines of lightweight non-blocking concurrent & parallel web development in .NET land.


C# is fairly efficient while still having managed memory. And, of course, golang was designed to hit the sweet spot of automatic memory management, decent performance, and non-blocking I/O.

Beyond that, there are things like Erlang and Elixir, which I am less familiar with. My understanding is that the Erlang VM is significantly slower than JVM or CLR implementations but generally faster than scripting language runtimes.


Go, Erlang, Elixir, Haskell, Clojure, Scala.


[flagged]


I have written servers in C with epoll that were used in production, thank you. And yes, I'm aware of async/await and its limitations. State-machine-style transformations from blocking-looking code to non-blocking code are definitely helpful, but in the end still fall short in terms of usability to a full blocking style of programming.

To even begin to approach the ease-of-use of blocking-style programming you need something akin to delimited continuations in your programming language.


I think the only new thing that node.js brought to the table was a nice way to run Javascript on the server. Node.js doesn't use threads, but as you said, its event-driven non-blocking I/O model wasn't new (Python had Twisted and Java had Netty). In addition, Java has had non-blocking I/O (in the form of the nio packages) for quite some time.

In my opinion, Node.js became popular because people wanted to run Javascript in a server environment.


>Node.js doesn't use threads, but as you said, its event-driven non-blocking I/O model wasn't new (Python had Twisted and Java had Netty).

The new thing that it brought was an entire environment that used non-blocking I/O exclusively without any pre-existing thread-based code and libraries. With node.js, you don't have to wonder if a library you want to adopt supports non-blocking I/O.


That's exactly the point and what people are missing. If you used Twisted you didn't have access to the entire Python ecosystem and there was a lot of work that was required to get standard packages to work in a non blocking way.

Also callbacks for all that they're derided now were an easy paradigm to understand to get into the whole philosophy of async. The first prototypes of node supported different paradigms before callbacks were settled on IIRC.


I remember distinctly that at the time there was excitement around node.js because it was built from the ground up to use evented IO exclusively. All the libraries, the core runtime, etc. People in other languages (C) were for sure doing it, but not in the holistic way node was, as opposed to java and python where you needed to avoid using the library calls that were blocking.


Fair enough, I agree that Node.js uses the evented model (AFAIK) almost exclusively. Still, the evented model is not unique to Node.js nor did they invent it. In my opinion, it was the dedication of Javascript developers and their reluctance to use another language that popularized this model: they had no choice.


Yeah. Ryan Dahl was pretty clear at the time that he was inspired by people using C and achieving really high connection counts on single machines. So he for sure was inspired by others, and obviously built off the same foundations in the kernel.

At some point Isomorphic Javascript became the main reason to use it, as golang has largely stolen the non-blocking IO crown with its implementation of green threads.


There were lots of ways to run JavaScript on the server before Node, however all the ways I am familiar with suffered from having either poor support, poor performance, or being marginally difficult to get to work.

JavaScript on the server before Node was a second-class solution.

On edit: so yeah, a nice way.


You can write JavaScript in classic ASP, and there are many server-side JavaScript options, but they are serial/blocking, while NodeJS is async/non-blocking.


Your claim is that Node was popular because people were excited about writing servers using 100% non-blocking I/O? I guess I agree that's a part of it but we should note that in 99.5% of those cases the servers would've performed better written in Java using blocking threads.

I think the other parts of it are: 1) many people wanted to write server-side Javascript, 2) people did not want to use Java, and the concurrency stories for PHP, Ruby, and Python were (and still are) really crummy.


Errrrr what? No, node.js became popular because it allowed the legions of front end devs to overnight become full stack devs without learning anything but a new framework. To a lesser extent it also provided an alternative to PHP in the "get a simple crud app up and running with minimal knowledge" space, but I think that's far dwarfed by the ability of front end devs to start leveraging their skills on the backend.


There weren't really 'legions' of front end devs when node came out. People were just learning backbone.js and the idea of SPA's being viable was still forming. It was not easy to pick up if you were coming from something like rails, because it had no well documented complete framework solution. The people that were doing node in the early days were legit back end folks. I am now a front end developer, but I learned on rails and I remember having a hard time 'learning' node. It had no debugger (I was used to something like pry) and was generally a collection of 'low level' libraries.


    Each language had some niche library that did
    non-blocking I/O [...] Netty for Java)
Non-blocking IO has been in the Java standard library since 1.4.

More than Netty supported it: Netty, Jetty, Dropwizard, Grizzly, Vert.x. Apache Tomcat supported non-blocking IO 5 years ago.


To recap for the non-Java reader: the problem was that the Servlet specification did not initially support suspending and resuming without suspending and resuming the thread handling the request. The Servlet 3.0 specification supports it, but before you could not use multiplexed IO within containers that required the servlet interface, even though IO multiplexing worked perfectly fine (and, as you said, some containers allowed it via a custom interface, since it is pretty much trivial to add).


This. I'm always surprised by the number of developers that don't understand the difference between threads, epoll/select and forking, or IO-bound vs CPU-bound.


As a Python/Ruby web dev, exactly this. Dealing with uwsgi/passenger was so painful to me. And asyncio in python3 still boggles me.


I like Python a lot, and use it for a number of things even though my position doesn't require it where I work, but uwsgi was the thing that kept steering me away from it. I'll probably make myself master it at some point, but it just felt fumbling at first take and left a bad taste.


Yup.

Node.js isn't quite as old as you make it out to be; there were plenty of mature tech stacks out there with PHP/Perl, Ruby, C#, etc., many of them perfectly fine to work with. Node.js came along with a very different conceptualization of scalability relative to other web servers, and that was what drew people to it. Today we have things like varnish, nginx, fast-cgi, etc. as well as much faster servers, which make it a lot easier to scale out web stuff without having to redesign it from the ground up, but node.js was for its time a hugely advantageous way to approach scaling. And it still is for many use cases, but today it's matured into something that has a lot more to offer.


Not to be a pedant, but all of the projects you mention in "today we have" pre-date Node and were commonly available.


Was it?

I had the feeling before Node.js, everything was PHP.


Most things are still PHP thanks to Wordpress.


They're both appealing to "web developers" of relatively less experience for similar reasons.


> Since server-side JS didn't exist yet,

I'm being petty, but serverside JS did exist before node (e.g. Narwhal, which I believe is where jsgi and a lot of the commonjs stuff originated)


There was also Rhino in the Java land (I remember working with Helma before Node came out), JScript on IIS, some projects embedding SpiderMonkey.

I think node.js finally made server-side JS take off because 1) it was based on V8 which was the new, performance-oriented JS engine powering Chrome, and 2) it was self-contained (i.e. didn't rely on Java on one hand, wasn't just a scripting layer of something larger on the other).


> some projects embedding SpiderMonkey

yea, I remember the woes of building CouchDB from source....about half the issues generally seemed to be SpiderMonkey related.


On the other hand you could use E4X in CouchDB views, which was almost like JSX, but native.


Server side JS is older than that. It existed over 20 years ago. Does anyone remember Netscape Livewire?

It was basically "ASP-style" server-side javascript that ran on Netscape's "Enterprise" web server. It was a little unstable, but pretty advanced for the time (had DB connectivity, etc.)


While we're at it, ASP itself had JavaScript (well, JScript, technically) support alongside VBScript, out of the box. It wasn't particularly popular, probably because anyone doing ASP was likely to be using the rest of MS stack - and that meant VB for desktop LOB apps, usually.


OS-level threading is often pretty good -- you can usually fork and join tens of thousands of threads per second. (I believe that the initial NPTL benchmarks showed something like 20 microseconds for thread spawn.) It's not the tens of millions per second that you get with green threads, but thread creation has an unfair reputation of being slow.

It was once true, but that was 20 years ago.


You're getting a lot of pedantic replies, but your post is spot on. At the time it arrived, asynchronous operations were close to impossible in PHP, very difficult and awkward in .NET, and very uncommon in Java. In Node it not only was possible, it was impossible (without going to great efforts) to do otherwise.

People have a short memory, but at the time getting single to double digits of requests per second on a beefy server was entirely typical. Yes, someone somewhere could demonstrate an alternative, but node completely changed what was normal. It was extremely high performance for the time.


> People have a short memory, but at the time getting single to double digits of requests per second on a beefy server was entirely typical.

Umm, what? It was not a challenge to get to double digits of requests per second on a beefy server in 2008, at least not in any popular language (including PHP and Ruby). It was (and still is) easy to scale requests of "blocking IO" languages by spawning additional threads at the server level.


Along those lines: Has anyone seen benchmarks that show significant benefits for non-blocking IO?


I said entirely typical, not that it was a dick-sized "challenge".

A completely standard developer on .NET or PHP in 2008 was building pages that rendered at less than 10 requests per second. This is experience, not a guess, given that my role was improving the performance of those disasters. A completely average developer on node was building pages with 10x to 100x better performance.

Secondly, simply spawning threads is laughably non scalable.


Not the OP, but I knew what you meant I soon as I read it. But, did the typical .NET app really not make use of the Asynchronous Programming Model that was part of the standard library?

I ask because in 2008 I was only about 3 years into professional web development, and even I knew about stuff like the C10K problem[1] and that I/O Completion ports were apparently one of the few things that Unix people admired about Windows.

[https://en.m.wikipedia.org/wiki/C10k_problem]


Their use was incredibly rare. It was common to find advice columns telling you that you don't need them.

https://blogs.msdn.microsoft.com/rickandy/2009/11/14/should-...


> Secondly, simply spawning threads is laughably non scalable

How else are you going to do it with a non-threaded language? I've run well into the 10s of requests per second running "standard" PHP code on practically vanilla Apache configs. This was far from atypical and is in no way any kind of challenge on a beefy machine.

My job was not to clean up disasters though. Maybe if your job was to clean up disasters, your typical was much different than mine?


The problem with multi-threading like in PHP is that you need locks. It will not be a problem if you have low traffic, but once you get hundreds of requests per second there's a high chance of errors like "double posts", where both threads see "post doesn't exist" and then both threads create the post, and you end up with double posts. Or race conditions, double spending, etc. With NodeJS you get rid of all those problems because it's single threaded.


The server spawns the threads and isolates the entirety of a single request to a single thread. It's not true language multithreading.


JavaScript is single threaded, but all IO operations like network and disk use threads in Node.JS, so there are still race conditions, like when accessing the file system. But all other operations are single threaded, like storing data in memory. In PHP, for example, where the server spawns the threads, everything becomes multi-threaded. In node.js, instead of using locks etc, you use callbacks, so when something is finished, a function is called ... It's just like event listeners in the browser. Many people complain about Node.JS being single threaded, but it's actually a big relief; they probably haven't been dealing with issues like locks and race conditions.

To get CPU-bound parallelism in Node.JS you have to spawn child processes. There's a built-in module in NodeJS called "cluster" that abstracts this a bit. When talking to a child process you could just as well be talking to another machine over the network; the code will look the same, and scaling horizontally across many machines will be easier, compared to languages that solve concurrency by using threads instead of non-blocking IO.
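
For example, a minimal sketch with the built-in cluster module, forking one worker per core (the HTTP handler is just illustrative):

  const cluster = require('cluster');
  const http = require('http');
  const os = require('os');

  if (cluster.isMaster) {
    // fork one worker process per CPU core
    os.cpus().forEach(function () { cluster.fork(); });
  } else {
    // the workers share the same listening socket
    http.createServer(function (req, res) {
      res.end('handled by pid ' + process.pid + '\n');
    }).listen(8000);
  }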

There's a learning curve though. It took me about six months to learn how to do async programming, manage messages between different processes and machines, and callbacks. I do remember the "callback hell", but now it has become second nature; it's like brushing my teeth, but more fun. I don't think half a year is that bad, you would need just as long to get comfortable in any other language. And I consider JavaScript very newbie friendly, you don't need a CS degree to write JavaScript.


> In PHP, for example, where the server spawns the threads, everything becomes multi-threaded.

No it doesn't. The only way to do truly multithreaded code in PHP is using pthreads.


>because the threaded code blocks.

Python and Java threads don't block on I/O, only on CPU. Event loops (i.e. node) also block on CPU.

AFAIK, node's advantage over threads was just that it offered much higher concurrency for I/O-constrained tasks for the same memory (and was easier to program for people coming from the front-end world).


Python and Java threads block the thread on I/O. Which is wasteful, because you have a huge chunk of memory allocated for the stack of that thread, being reserved but not actually in use.


Yep, hence node's advantage of offering much higher concurrency for the same memory usage.


All this, and the developers that were driving new lines of business products were coming from the front end (web and mobile). Of course all front ends need backends, and when these folks went looking they naturally leaned toward JavaScript, the same language they were using on the front end, and its async model.


Forgetting? It's mentioned in the 2nd paragraph of the article:

He showed us that we are doing I/O completely wrong and also taught us how to build software using pure async programming model


> That said, I think Node is not the best system to build a massive server web. I would definitely use Go for that. And honestly, that's basically the reason why I left Node. It was the realization that: oh, actually, this is not the best server side system ever.

Really interesting. I can imagine others would refuse to give up on the thing they'd worked so hard on. But Dahl has the self-awareness to just step back and say "huh, guess this isn't so great after all".

Huge props to the guy.


Agree. It takes a lot to walk away from something and say you were wrong.

But, sadly, part of the saga of nodejs could have been avoided by people looking at the history of computing. If you look at Go's concurrency model, it was entirely mapped out in the 1970s in CSP. There wasn't really any need to go the 'single thread, non-blocking' route that nodejs took and evangelize it as nirvana. There was a ton of distributed systems work done in the 1970s. Just look at Lamport's clocks paper.

I'm not criticizing Dahl directly here, more a sense that people ignore history in a field that has only had history for far less than a century.


The fact that Node survives as a platform for Fullstack JavaScript shows how serendipity works in invention: he set out to build a high performance network server and ended up building one of the largest platforms for building web applications.

The reasons it ended up being very big are clear in hindsight but were hard to predict:

- Sharing code between server and client

- Server side rendering

- Lower barrier to entry than other platforms (learn once write anywhere)

- Well-designed package system

It was also helped by the fact that folks decided to write JS tooling in JS, be it minifiers, transpilers, or test runners; without nodejs we would've built them on top of something else.


What do people mean when they keep saying "server side rendering" in this specific Javascript/Node.js context? I ask because I've seen quite a few people tout that term as a recent innovation.

It was my understanding that "server side rendering" was the way the web worked since the beginning, with the server generating the markup that is provided to the browser to render the page. Nobody called it "server side rendering" until we had "client side rendering", where Javascript executed in the browser provided the markup instead of whatever program was running on the server.

I'm not trying to lecture or be snarky here; I'm genuinely confused as to what people mean when they say "server side rendering" as a new thing. Like, is there something novel that I'm missing because the term is overloaded with a definition I don't know?


It means you take your client side JavaScript code and render it on the server for performance and SEO.

Previously you had to either duplicate the code in different languages or pick one or the other (either use angular or rails).


Running the same language on the client and the server also has some deeper implications. Like, it becomes easy to imagine an app that runs in a hybrid way with computations shared between the client and server, i.e., "offline mode". This is a pretty big deal, and it's one of the major benefits mostly ignored by the vocal critics of "single-page apps," as seen in the latest article on the HN front page.


If you need "offline mode" why not just write a native app? It will be far better in every way - looks, performance, battery drain, you name it.


Nobody likes to download and install stuff. Just imagine the state of the web if you had to install an app for every website you visit? Oh... you need to add iOS and Android to the list of platforms. That makes it 5 platforms. 5x the work.


I don't want to ask everyone to install a native app, I don't want to use several different native app UI libraries, etc. I'm one of those people who thinks that the web browser as an app platform is a pretty good idea.


Make a native app for every independent platform?


Windows, Linux, OSX - there just aren't that many platforms these days, it's perfectly do-able. If your core logic is in ANSI C (or equivalent) and you just need a GUI for each platform, even easier.


It's still 3 more platforms, which means 3 more developers, which leads to more $$$. Not everyone is VC backed with multi-millions.


But one of these "offline" apps has to work in Firefox, IE and Safari, so really, it's no different


As long as you stick to the APIs implemented in each browser, your code can be exactly identical. Realistically, you'll probably need to add some polyfills and edge case handling for IE, but it's much less than developing for different full-on platforms.
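
A hypothetical example of the kind of polyfill involved: a sketch of a String.prototype.includes shim for older IE (details are simplified versus the real spec):

    // Only patch the method if it's actually missing.
    if (!String.prototype.includes) {
      String.prototype.includes = function (search, start) {
        return this.indexOf(search, start || 0) !== -1;
      };
    }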


With the major exception that all evergreen browsers (most browsers in wide use today) quickly officially support new web standards, but Windows, OS X, and Linux are unlikely to ever officially support each others' native toolkits.


Don't forget iOS, Android and Windows Phone.


Because not every news website and blog needs an app. I think everybody is tired of seeing the "Install our app" banners for every website you visit.


News websites and blogs don't need to work offline. Even if they did, "save page as" or an RSS reader was the way to go.


Not just that, but to accomplish it means that you need to provide the correct client-side APIs on the server side, so that your code renders the same in both places. Doing this nicely is a lot of work.


In this context, I think it usually means that the same rendering paths are available on both the client _and_ the server, whereas formerly it was often the case that the server was responsible for the bulk of the rendering and the client just made minor adjustments to the DOM in response to events.

I agree that it's a bit of a misnomer, but I can see why people would call it that given the path we took to get here: first we rendered almost everything on the server, and then it became popular to render everything on the client, but now we're finding that a compromise is best, and that compromise is easiest to achieve when you have similar technology on both sides.

("We" in the above is intended to represent some vague idea of the web development community at large, not any particular individuals or groups.)


They're referring to the ability to execute the same code on the client and server and get the same output, which can provide a great deal of flexibility when optimizing for performance.


Potentially a better term is "isomorphic", a label which is specifically applied to setups where you run the same rendering code client-side and server-side. That's still a comparatively new concept, at least as something that's actively supported by frameworks and reasonably straightforward to implement.


That's an awful term because it has nothing to do with the actual meaning of the word "isomorphic".


I'm inclined to agree, but it seems to have stuck, and is at least more specific than "server side rendering".


Well, unless you're talking about a homomorphism with an inverse, the term seems pretty good to me: the same (iso) shape (morphos) between server-side and client-side.


I was confused with this initially as well.

What I came to understand was that "server-side rendering" refers to whole application lifecycle steps:

- The server generates the initial HTML markup, based on the request.

- The browser can parse & render the already generated HTML (instead of waiting for the client-side JS to load first, then have it create / modify the DOM).

- From that point onward, all changes in the UI happen through client-side rendering.

You could do this with Django template / JSP pages / Jade templates as well. Just write your templates, and some jQuery sprinkled here and there, to dictate how user interaction should change the page.

But that's not all there is to it.

We slowly took away SSR with client side JS frameworks like Backbone, Angular 1, Ember, etc. Applications started to have client side routes, even with ugly hashbangs in the URLs.

It was easier to write the entire application logic of your UI in a client-side framework with two-way data bindings than to write the server side in a template language and put in some JS here & there to handle interactions. Especially if your application was big enough.

But this started having problems with SEO, and initial load times.

Say, your app has a homepage URL, of the form https://company.tld/, and a products URL, of the form https://company.tld/products.

If a user goes to the homepage and clicks the "products" link in the navbar, the content of the /products route would load on the client side, via the framework.

But, if a user directly hits the /products URL from their browser (say, from history or the omnibox prompt), the browser would first load the home page with its assets, then the client side JS would take over and route the user to the /products endpoint.

The new server-side rendering paradigm gives you the best of both worlds: you get to maintain a single codebase to render your page (be it on the server or the client), while getting the fast initial load, so that your page is interactive quite fast.

This is not easy to do. One big challenge for most server side rendering solutions is that, after the initial render, the JS in the page tries to re-render the page, thereby creating the impression of a "flash".

Yes, if you're using a framework like React or Vue, which uses a VDOM, this won't happen in most cases, because of how a virtual DOM based renderer works. But there are still cases where you might need to explicitly do something to prevent the re-render.

There are other challenges too. For instance, getting the page server-rendered and then hydrating the state on the client.

In pursuit of maintaining a single unifying codebase for both server and client (isomorphic rendering), we often do things that cannot work on the server. For instance, chunk splitting your front-end bundle based on routes. Or CSS media queries.

To sum it all up, "server-side rendering" isn't just about initially rendering the page on server. It's so much more than that. It's mostly about how we can maintain a single codebase (now that we have JS on server too), and run on two different platforms.
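
To make the single-codebase idea concrete, a hypothetical sketch using React with Express (App stands for an assumed shared component; the file names, port, and bundle path are invented for the sketch):

    // server.js - render the shared App component to an HTML string.
    const express = require('express');
    const React = require('react');
    const { renderToString } = require('react-dom/server');
    const App = require('./App');

    const app = express();
    app.get('/', (req, res) => {
      const html = renderToString(React.createElement(App));
      res.send('<div id="root">' + html + '</div>' +
               '<script src="/bundle.js"></script>');
    });
    app.listen(3000);

    // client.js (bundled as /bundle.js) - re-run the same component
    // over the server-rendered markup to attach event handlers.
    // (React 16 renames this client-side step to ReactDOM.hydrate.)
    const React = require('react');
    const ReactDOM = require('react-dom');
    const App = require('./App');

    ReactDOM.render(React.createElement(App),
                    document.getElementById('root'));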


> Lower barrier to entry than other platforms

I think this is the main reason Node caught on.

SSR and code sharing are only useful in full stack projects, not in APIs. I don't know the numbers, but my conjecture is that there are more RESTful APIs written in Node than any other type of server project.


I wouldn't call npm "well designed".

Its progress bar was the bottleneck causing slowdowns.

The size of node_modules directories is basically a meme now.


It's the least shitty of all the designs. It picked the best design decisions from existing package managers. By default it is project local (as opposed to global, like rubygems, pip, and others), it's recursive, and it's largely reproducible (fully reproducible using lock files). The resolution algorithm is also deterministic and simple; you could implement it in an hour.
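
For a flavor of that simplicity, a minimal sketch (the function name is mine, and real resolution also consults package.json, file extensions, etc.) of the walk-up lookup that the nested node_modules layout enables:

    const path = require('path');
    const fs = require('fs');

    // Starting from the requiring file's directory, walk upward and
    // take the first node_modules/<name> that exists.
    function resolvePackageDir(fromDir, name) {
      let dir = fromDir;
      for (;;) {
        const candidate = path.join(dir, 'node_modules', name);
        if (fs.existsSync(candidate)) return candidate;
        const parent = path.dirname(dir);
        if (parent === dir) throw new Error('Cannot find module: ' + name);
        dir = parent;
      }
    }

    // e.g. resolvePackageDir(__dirname, 'express')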

Although other languages are catching up they have to do it as add-ons or third party tools -- for example python pipenv https://www.kennethreitz.org/essays/announcing-pipenv


I don't think amasad was referring to npm but the module system.


Both issues are irrelevant as to whether npm was well designed.

One is an implementation detail of a secondary part of its operation (progress indication), and the other is about packages having many dependencies, not about npm itself causing node_modules directories to grow.


Just use pip for a minute, and you'll quickly realize that npm is pretty well designed ;)


I don't think npm became the biggest package repository by accident, so it surely is "well designed".


> The size of node_modules directories is basically a meme now.

I haven't been a node user for many years, so I can only guess, but isn't the cause of this bad community practices (micro-packaging, many different packages that do the same thing, etc.) rather than the package manager itself? You can get hundreds of deps in any language if the practices are similar.


> But, sadly, part of the saga of nodejs could have been avoided by people looking at the history of computing. If you look at Go's concurrency model, it was entirely mapped out in the 1970s in CSP. There wasn't really any need to go the 'single thread, non-blocking' route that nodejs took and evangelize it as nirvana. There was a ton of distributed systems work done in the 1970s. Just look at Lamport's clocks paper.

Non-blocking IO is still the best we have because it's a limitation of syscall interfaces. Node.js, nginx, go, etc. all use epoll and its ilk under the hood. In fact, goroutines in go only ran on a single OS thread by default until go 1.5. The ergonomics of callback-based non-blocking IO is the issue here, not non-blocking IO itself. Because of javascript's history in the browser, where callback-based events are the only APIs, it made sense to copy this style of API for IO, so stuff like `setTimeout` worked in both cases. Node's selling point with regard to non-blocking IO was never ergonomics; it was the fact that you had a dynamic scripting language where the entire ecosystem used non-blocking IO on the same standardized event loop. Contrast that with the other popular "web" languages of the day, Ruby and Python, which have had some non-blocking IO for years, but usage is isolated because most of the ecosystem doesn't support it at all, and when they do, they use completely different and incompatible low-level IO stacks.


I agree that we regularly fail to learn from history. However, it's worth recognizing that trying to learn from history can be incredibly challenging. There are mountains of research to filter through. Sometimes you discover a relevant document, only to find out it's not readily available or that you have to pay an exorbitant amount of money for access. You might pay a lot for something that doesn't even end up being useful to your problem. And when there have been multiple attempts at solving the problem, each with mixed results, how do you actually separate the good parts from the bad ones?

A lot of tech knowledge isn't easily discoverable. Heck, sometimes things don't even make sense without understanding their historical background. The filesystem layout in the unixes is a great example; on my system I have: /bin, /sbin, /usr/bin, /usr/sbin, /usr/local/bin, /usr/local/sbin.


A good example - I have no idea why unix filesystems are like that, and after some idle googling I still don't know. I can't even figure out where to start to find the history of those decisions. I'm certainly finding what kind of stuff goes where, just not why.


Some old Unix devs/users ran out of disk space, so they mounted more disk at those points. This caused boot problems, since some tools (like mount) needed to be available before the extra disks could be mounted. All of that together, along with people blindly following convention, and you have the mess of the unix filesystem. Yep... [1]

[1] http://lists.busybox.net/pipermail/busybox/2010-December/074...


oh my god, that's awful


It is clear, however, that JS itself has benefited from Node in that it really increased attention on the language. It seems doubtful we'd have ES6 etc. without it.


While I agree that non-blocking isn't a panacea, I think that your attitude that it was an unnecessary experiment is a little dismissive.

The fact is that green threads aren't a panacea either, and exploring the callback-based pattern led to some really interesting development, evidenced by the explosion of javascript on the server.

At the time that node took off, a huge amount of server code was python with the GIL, and there needed to be something to shake up the status quo.


While you are sort of right, the non-blocking thing was basically necessary to get JS on the server working. And while he was focused on the I/O thing, he accomplished a lot of things by accident, including making a really convenient server platform for JS.


Right, but the whole reason for JS on the server was because JS had no I/O, so he could do the non-blocking thing.

JS on the server was the means, not the end.


More than that. Go's model is just threads with a particularly idiosyncratic implementation, in userspace instead of in the kernel. This itself is nothing new; it was tried by the Linux NGPT project in the early 2000s. (NGPT was abandoned for being inferior to plain old 1:1 threads, which suggests that Go's approach is not the end state either.)


AFAIU, a lot of the complexity and slowness of "traditional" M:N threading approaches was due to the requirement to follow POSIX semantics (signal handling, preemption, etc.) within a purely library/OS based approach (no changes in the code generated by the compiler).

If you're doing a green threads implementation for a high level language runtime, those restrictions don't apply, to an extent. I think e.g. Erlang, Haskell or Go are examples of "green threads done right".

(Now, I do think Rust did the right thing in getting rid of green threads, but that IMHO is more a result of the space Rust is in rather than a general indictment on the utility of green threads)


Go still has a lot of issues around the inability to preempt except at function call boundaries, which is an issue that doesn't exist in 1:1 threading. Fairness and priority inversion are likewise issues with M:N that apply equally well to Go's implementation. Signal handling is too, although any GC language pretty much has to sacrifice POSIX signal handling already...

Green threads done right strikes me as something like Windows' user mode scheduling (or Google's switchto patch that sadly never made it upstream), in which threads really are 1:1 as far as the kernel is concerned, but manually scheduled by userspace. This requires kernel support, but it fixes every issue except preemption, which is better to just not fix. (Few Go programs actually benefit from multicore CPU scaling; for CPU bound tasks a better use of optimization time is just not writing in Go, which tends to result in better speedups than trying to scale Go due to Go's compiler being relatively immature.)

In my ideal world, kernel-scheduled and user-scheduled threads would exist in the same language, and programmers could choose which one they want on a thread-by-thread basis. I/O calls would be equally compatible with either threading model (which could be done in a zero-overhead fashion with the proper kernel support), eliminating the problem of sync/async incompatibility. This can only really be done today on Windows, unfortunately...


Win32 has been offering fibers for ages, but no-one picked them up. Even .NET started designing around them (there are some vestigial remains of that in CLR hosting APIs), but abandoned that effort before it ever shipped.


I'd suspect NGPT was abandoned because they couldn't use growing stacks while staying compatible with the C calling convention (but I wasn't able to find any confirmation of this by googling). Go doesn't have this issue because it uses a different calling convention.


Stack growth is orthogonal to the M:N vs. 1:1 distinction. You can have large stacks with M:N or small stacks with 1:1.


Are you saying we could use a 1:1 threading model with growing stacks (by a growing stack I mean one that starts at, for example, 2 KB and is grown as needed)?


Sure. There are syscalls that allow a great deal of customization as to stack placement.


Thanks. I didn't know about that. What is the cost of context switching compared to userspace threading?


It might be because it's relatively easy to end up doing something computing related without studying it directly.

This was the case for Dahl and many others. It's definitely possible to catch up on the history of the field while you're working in it, but not everyone can find the time to do so.


You couldn't have got CSP from writing a web server API to run with V8.

To get CSP you needed green threading built into the runtime, and that's a much taller order.

So it's hard to say that the not-criticizing-directly Dahl should have learned from the past. We'd probably never have heard of him if he'd set out to build a new esoteric CSP language or runtime, just as we haven't heard of anyone else from around that time other than the golang team.


> If you look at Go's concurrency model it was entirely mapped out in the 1970s

Isn't that Go's problem, though: that there isn't a single thing in the language that wasn't entirely mapped out in the 1970s?


Using ideas from history isn't a problem. And Go is unique in that it combined all these learned lessons into something new.


> Using ideas from history isn't a problem. And Go is unique in that it combined all these learned lessons into something new.

what learned lessons, like pragmas and struct tags?


A lot of it is more in what was taken away than in what was added.

As someone who is occasionally accused of being too pro-Go on HN... I actually have no problems thinking of Go as a really nice and refined 1990s language. Another type of "Java done right". (I say "another" because I think C# has a really good claim to that as well, albeit in a very different direction.) There's a place for that in the world. As much as I love the cutting edge of programming languages too, I'm not sure that we're anywhere near as far along on knowing how to build really big systems with them as a lot of people think we are. (There's only one way to get there though, and I absolutely encourage people to keep trying.)


> A lot of it is more in what was taken away than in what was added. And Go is unique in that it combined all these learned lessons into something new.

Given the fact that C is basically the blueprint for Go, I'm not sure what things were taken away from C, syntactically speaking. It's more like Go added garbage collection to C, and a few other bells and whistles, and told people: "this is 21st century programming", without actually thinking about what C got wrong in the first place.


"I'm not sure what things were taken away from C, syntactically speaking."

Pointer arithmetic. Go has "pointers" but they're really more like references. Some other things depend on how you look at them: Go "removes" manual memory management from the language by adding GC; it "removes" the fact that in C values are essentially untyped, and it's very easy to penetrate down to that untyped layer, by making all values carry their type around regardless; it "removes" the limitations on function pointers to provide real closures; it "removes" trivial array overflow by adding checked access. There are a lot of rough edges removed from C at the language level, even if it required "adding" some runtime support.

I mean, by the time you've removed pointer arithmetic and manual memory management, you are by no stretch of the imagination in C anymore.


What mistakes does Go copy from C?


Getting popular despite not being an "objectively" better language.


"A really nice and refined 1990s language" really is a perfect description of Go (and also explains why some people love it with a passion, while others can't understand what all the fuss is about).


It is only a problem to someone whose thought process dictates anything old is bad and anything new is good.


> It is only a problem to someone whose thought process dictates anything old is bad and anything new is good.

What do you call new? Generic programming, nullable types, type classes? All pioneered around 1973? Can you explain what "new" is?


Lamport's paper was only the very beginning of distributed systems.

And CSP was invented by mathematicians for themselves, not for real life programming. As opposed to Erlang, which was invented for real life programming of telephony applications later, in the 1980s. But people keep ignoring history and thinking that CSP is somehow suitable, and not the actor model. CSP is an even bigger mistake; don't try to make it look like it isn't. Learn from history.


He doesn't give any specific reason why he thinks that Go is better. I've used both Go and Node.js and I came to the opposite conclusion.

I'm not a huge fan of Goroutines spawning threads in the background. I think that spawning threads and processes should be explicit because there is a big performance penalty when multiple threads have to share a CPU core because of context switching.

With Node.js, you have more control over the process count so you can minimize the amount of CPU context switching that happens by making sure that each process gets its own CPU core.

That said, I do think that on a conceptual level, Go feels cleaner than Node.js... But since async/await was introduced I feel that Node.js has the upper hand again.


Well the thing is, goroutines are not threads, they're coroutines. The go runtime uses up to one real thread per cpu core, and all your goroutines are run on those (fewer) real threads. The runtime also manages a shared thread pool for blocking syscalls, e.g. local disk access and some kinds of name resolution, depending on the OS. But starting 10 goroutines is much cheaper than spawning 10 threads.


I think that happens fairly often. The first person to solve the problem learns a lot in the process. The people who come later don't see the problem, only the solution.


Well, he does work for Google. We don't know if there is any pressure for him to be a Govangelist.


I see what you mean, but as a similar example, I never thought anyone at Google wouldn't use Google for search, but some use DuckDuckGo -- https://news.ycombinator.com/item?id=15068047

Plus, he said he was using Go before he joined Google.


TJ moved to Go too, and he doesn't work at Google.


Was I defending Node.js or something? Just stating the obvious.


Ryan is so smart. I like the way he thinks. This blog post [1] is still one of my favorites.

[1]: http://tinyclouds.org/rant.html


In a similar vein, see "We Who Value Simplicity Have Built Incomprehensible Machines":

http://prog21.dadgum.com/139.html


This is pretty awesome. I had never read this before. Simplicity is so easy to strive for, yet so hard to achieve sometimes.


"And then, you know, maybe little servers to... maybe little development servers, and here and there, maybe some real servers serving live traffic. Node can be useful, or it can be the right choice for it. But if you're building a massively distributed DNS server, I would not choose Node." -Ryan


I find it interesting that he ended up at Google Brain working on deep learning research after writing Node. There seems to be a trend in the industry of taking people who are exceptional in one area and putting them on AI problems (e.g. Chris Lattner). I wonder how effective that cross pollination is.


He has two math degrees from decent (R1, AAU) schools, so I wouldn't say it's a huge leap; he probably has better training for modern AI than most people working in the area with CS degrees.


A lot of AI is super complicated, large software systems, and building those doesn't require academic ML knowledge so much as knowing how to build good software.


These AI projects require a lot of math. Someone who studied math and is also proven to be able to produce working software seems to be a great fit.


Pretty cool how he took a very unusual career/personal route to become such an important figure in the programming world. Good reminder for a parent like me that getting your kids into a "top" school isn't a must to succeed.


I don't know about that. He went to some pretty good schools and got some pretty advanced degrees.

The guy is obviously at genius level, because he is a really advanced programmer even though he didn't really get involved with it until much later in his life.

I've been programming on and off my whole life, and I can barely understand 1/5 of the concepts he's talking about.

Reminds me of that scene from Silicon Valley "He's a 10x, and I am a 1x".

That's just some "god" given talents right there.

But yes, I generally agree success should not be predicated on what school you went to.


I mean, he went to two "Top 50" schools studying math. Do you mean he didn't go to an Ivy League or a school traditionally known as a top CS school?


Yes, correct (and not saying that top schools are really top; there was a good article on selection bias here a few weeks ago about this).

I was primarily pointing at the fact that he started in Community College, which is sometimes looked down upon.


Personally, I don't particularly consider schools in the "top 50" to be "top schools." I would consider schools in the top 10, maybe 20 for a particular major to be the colloquial "top schools" in that major.

So for Math and CS, yes, this mostly means most of the Ivies and 10 or so other schools, including MIT and Stanford.

In general I'm of the opinion that there is a lot of "top school" inflation in the United States. It's better to consider schools on a major by major basis, especially at the graduate level, where your advisor might be more important than the school.

The reason I restrict "top" to the top 10 (or top 20, for leniency) is that the admission standards are vastly different between the first 10, the next 10, and the next 10 - 20, and past that point there isn't a significant difference in "attainability" anymore. I frame it this way because I think the word "top" is only useful for signaling, not for real qualitative comparison.

As a specific example: NYU is listed as a top 30 school for computer science, but while it's a good program, it seems odd to list it as a "top CS school" - is it really that much better than the next 10 or so, or is the term just diluted? Likewise, NYU is in the top 10 for mathematics, which actually seems sane from my perspective.

And in the top 25 schools for CS is Rice University - you need a new word to call the group that MIT, NYU and Rice are in, because "top" is no longer all that meaningful.


I was always curious why Ryan Dahl left the community. He finally answered that question; not the way I thought he would though:

"I think Node is not the best system to build a massive server web. I would definitely use Go for that. And honestly, that's basically the reason why I left Node."

Go may be an excellent choice for massive non-web servers, I don't have enough experience in it to say. For the product I work on, though, Node.js is the way to go. It's the best framework that allows us to use the same exact code on the server and on the client to create a fast progressive site.


Given free choice I'd use Elixir, but working with Node and Python at work, I have to say that for async web services, to my surprise, I prefer Node, and I definitely like Yarn more as a package manager.


I can't help but feel like Google gently encouraged him to promote Go over Node. Or that being told by coworkers (goworkers?) for years on end that Go is better than Node has had an effect. Especially considering he talks about green threads like he's not quite sure what he's talking about. At any rate, there are many innovations whose creators weren't fully aware of their impact and eventually came to hate their own designs, so no love lost there.


Node lets you do single thread non blocking.

Go lets you do multi threaded non blocking.

That's a pretty substantial delta in the server world where you can easily have 24+ cores available.

The memory and cpu footprint of go is also drastically lower than node's, as it is a compiled language. That makes a difference when you are running in a cloud paying per GB of memory and per core.

I say this while actively developing in node.


The thing is, in real-world scenarios with databases, http, and regexes, golang and node.js have very similar performance, with node beating golang in some cases.

Furthermore, not everyone is developing cloud-based apps and trying to optimize for metered cost structures.

If you need raw, parallelized, computational power, then I will without-a-doubt agree that Go is a better choice. However, to say that one would always use Go for any server (even webapps? come on...) truly doesn't make any sense.

The only other benefit-of-the-doubt explanation I could come up with for such a statement is that the creator seems to have spent more time in academia than in "the real world" of development, and perhaps the pursuit of "the ultimate async system" matters more to him than practicality.


It seems weird to me to have an interview with Ryan Dahl today, August 31st, without talking about the political struggle the Node Foundation has been going through over the past week. http://www.zdnet.com/article/after-governance-breakdown-node...


What would he have to say about it? He stopped working on Node before there even was a Node Foundation.


If you think back to when he quit node, it seemed it was due to him not being interested in anything not involving the technology.


You won't get much done if your focus is on politics and fashion.


There's an audio version for the lazy like me (it's a podcast)

https://api.soundcloud.com/tracks/340298200/download?client_...


A lot of people talk about callback hell. For me, the biggest issue with callbacks is that you don't have a single point to catch thrown errors. We need finer control than a global catch-all handler.
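
A minimal sketch of what that means (the error text is made up): a try/catch wrapped around async callback code never fires, because the callback runs on a later tick.

    try {
      setTimeout(() => { throw new Error('boom'); }, 0);
    } catch (e) {
      // never reached - the throw happens on a later tick and
      // escapes to the process-wide uncaughtException handler
    }

    // Promises (and later async/await) give back a single catch point:
    Promise.resolve()
      .then(() => { throw new Error('boom'); })
      .catch(e => console.error('caught:', e.message));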


Ryan Dahl's advice for building high performance web server: Use Go.


nginx is faster than caddy.


And better all around.


But, but what were the colorization projects? What was the domain?



They need to fix their UI on mobile; there's no way to listen to the podcast, and most people listen to podcasts on the go, I would imagine.


Even if Node is not the perfect server-side environment, you can go quite far just knowing JS.


In this thread: people who don't use Node.js making wild presumptions about how it works. Callback hell? Been a few years since that's been a problem.


It depends :tm:.

There are still a lot of codebases that are callback heavy. For one, callbacks are faster than promises in a lot of node versions, by a very large margin. In other codebases, legacy rules still apply.

Promises don't work well for some more complicated logical flows (though they're pretty damn perfect for the common ones) and async is still pretty new.

Just because it's not a problem you deal with doesn't mean that it doesn't still affect plenty of devs.
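
For reference, a minimal sketch of the async/await shape (Node 7.6+; the promise-returning helpers are made-up stand-ins for real IO):

    // Stand-ins for real IO so the sketch runs as-is.
    const fetchUser = id => Promise.resolve({ id, name: 'ada' });
    const fetchPosts = user => Promise.resolve(['post one', 'post two']);

    async function showUserPosts(id) {
      try {
        const user = await fetchUser(id);
        const posts = await fetchPosts(user);
        console.log(posts);
      } catch (err) {
        // one place to handle a failure from either step
        console.error(err);
      }
    }

    showUserPosts(1);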


You can mitigate callback hell using classes, which are far quicker than promises and easier for humans to parse than callback chains.


Genuinely curious, how? Do you have some example code to post?


this new paradigm of model view controller

New... in the 1970s https://en.wikipedia.org/wiki/Model–view–controller#History

It boggles the mind how little "web devs" know about the history of the field. No wonder they keep reinventing the wheel.


Maybe he's talking about when it got popular, not when it was literally invented...but I guess you can just be pedantic and miss the point.


The GP's interpretation was arguably uncharitable, but your reply was uncivil. When you have a good point, please don't use it to make the thread worse.


I was seeing frameworks in ColdFusion referencing MVC back around 2001.


Yeah, MVC was well known to the Visual C++ community in the 90s, and it was a well established concept among commercial (as opposed to academic) programmers even back then.


[flagged]


Please post substantively on HN or not at all.

https://news.ycombinator.com/newsguidelines.html


Software Engineering in 2017: Personality cults and mostly a lack of appreciation for the history of CS.



