Hacker News
Why I am switching to promises (spion.github.io)
124 points by shawndumas on Jan 7, 2014 | 90 comments



Oh goodness... if only there was a way for the computer to manage all the bookkeeping of what happens after what, in a way where multiple things could run at approximately the same time, without the programmer having to thread "promises" or "callbacks" through every single piece of non-trivial code.

We could even give a name to these sequences of operations... maybe call them "strands"... or "filaments"...

Using libraries in Node.js seems to be a matter of 1) finding a library, 2) taking a hammer and smashing it into your existing choice of continuation-tracking mechanism. We end up with N x M different ways of using a package, depending on what style the package author chose vs what style the programmer chooses. "nodeify" is a bandage on top of a missing limb.
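To make the hammer-smashing concrete: bridging a callback-style package into promise-consuming code means hand-rolling an adapter. A minimal sketch (this mirrors what helpers like Bluebird's promisify do, but it is not any library's actual implementation):

```javascript
// promisify: adapt a node-style function (last argument is a callback
// taking (err, result)) into one that returns a promise instead.
function promisify(fn) {
  return function () {
    var args = Array.prototype.slice.call(arguments);
    return new Promise(function (resolve, reject) {
      fn.apply(null, args.concat(function (err, result) {
        if (err) reject(err);
        else resolve(result);
      }));
    });
  };
}

// Hypothetical usage against a callback-style API:
// var readFileP = promisify(fs.readFile);
// readFileP('config.json', 'utf8').then(JSON.parse);
```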

Do people really put up with this? Just look at how many different things were benchmarked in this article---I counted 21. How many more ways of reinventing basic programming-language design is the JS community going to come up with before we realize that the emperor has no clothes?


>How many more ways of reinventing basic programming-language design is the JS community going to come up with before we realize that the emperor has no clothes?

Yes, JS was a rushed language that was then left to rot for 15-odd years. Unfortunately it's the language that powers the web, runs everywhere, and (along with HTML/CSS) is the only thing that all the big guys agree on.

It's getting better but it is what it is.


Yes. JS runs everywhere that a modern browser runs and powers the web's client side.

Those are excellent reasons to use javascript to power your web applications client side, as opposed to, say, Flash or ActiveX.

However, those have absolutely no bearing on writing your server code in JS.

The only reason to run JS on your server is so you can write the server side and client side in the same language. That's a debatable reason for running JS on your server.

Node's incredible scalability around IO is also provided by a host of other languages that weren't rushed and left to rot. Erlang, Go, C++, even Java has libraries for it.

When it comes to server side you can make a pretty good argument that the "emperor has no clothes".


If I am building an HTTP(S) API server that is I/O heavy (which many are), I will pick Node every time until something else comes along to replace it. That is what Node is specifically designed for, and it does it well. What is really great is that almost all non-trivial Node modules are also async, support streams, etc. I don't have to add a bunch of wrapper code to make some module/library behave async, which I have to do in many of the other popular languages (C++, Java, Python, etc). Now is Node the best for everything? Nope, definitely not.

I do agree with you that being able to share code between client and server is a poor argument. It is a rare case in my experience where anything but the most trivial JS code can be shared between client and server.


    If I am building an HTTP(S) API server that is I/O heavy
    (which many are), I will pick Node every time until
    something else comes along to replace it.
This is exactly what I'm talking about. There were already languages/frameworks out there that beat Node before it came along. It mystifies me why people think there weren't. Again, the only motivation that actually makes any kind of sense for creating Node is shared code between client and server.


Yes. I am well aware of how to build an HTTP API server in many other languages. I have been doing this sort of thing for ~15 years which doesn't make me right, just experienced. The problem with most of those languages and frameworks is that as soon as you need something outside of that framework or core language, it is uncommon that the library is set up for async I/O, streaming, etc. You end up writing a lot of wrapper and/or glue code to string everything together. Or worse, you find an amazing library, but because of its design it is difficult to make it do what you want, so you resort to reinventing the wheel, albeit poorly. Because of the nature of Node and its module ecosystem, much of that problem is solved.

It isn't about using Node because it is cool or I like Javascript (which I hate BTW). It is about accomplishing my goal of an efficient HTTP API server much more quickly with Node than any of the other languages/frameworks that have come before. In my opinion, this is the niche Node does the best and if I have the choice I will choose it until another language/framework comes along to beat it.


every async io framework I've come across has something like runInThread(), which returns a callback.

e.g. http://twistedmatrix.com/documents/10.1.0/core/howto/threadi...

not really much boilerplate...


True, but now threads are involved. Which is totally fine and might be preferable depending on the use case. But for I/O specifically, utilizing select/poll/etc is usually faster, less memory overhead, and you don't need to add the complexity of threads. Like everything it depends on the use case. In my experience, threads are preferable when the API server does a lot of CPU work. But for I/O, select/poll/etc is usually preferable. It just depends.

Node (and Javascript) is definitely missing proper support for threads which is a negative.


Hardly the only reason. Look at something like Meteor, which is running not only the same language client and server side, but there is a large overlap in the actual code as well, which makes a ton of sense to me. For instance, you can run the actual, same, validation code against a form and before the data is inserted into the DB.
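A hedged sketch of what that overlap can look like (the module and rule below are made up, not Meteor's actual code): one validation function, loaded by both the browser bundle and the server.

```javascript
// validate.js: runs unchanged in the browser (before submitting the
// form) and on the server (before inserting into the DB).
function validUsername(name) {
  return typeof name === 'string' && /^[a-z0-9_]{3,20}$/.test(name);
}

// Export for Node; in the browser the function is simply in scope.
if (typeof module !== 'undefined') module.exports = validUsername;
```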


You're underestimating NPM and the community of modules around it. Node is well-rounded, and it's capable of handling most programming tasks. I use it to write utility scripts, which it's pretty good at. When I make a service, Express is a nice DSL-like framework.

It gets lots of stuff done, so I'd say it's pretty well-clothed.


Clojure/java also have lots of libraries and tools around them, as well as proper promises (where you can block and wait if you want), channels/goroutines, futures, threads, agents, reducers, etc. Erlang / Haskell / even Go are saner languages with better concurrency stories.

I guess there's something to be said for not having to learn more than one language. Probably something vulgar.


No I'm really not. NPM is a good package manager. I'm sure it has good modules as well. But NPM isn't the only package manager out there. Other languages have excellent package managers too. Clojars, CPAN, Ruby Gems, even erlang has a package manager now.

Those are minimum reqs for a language and community. They aren't differentiators. Aside from their absence being a negative indicator.


NPM is awesome, just the modules aren't. They are of poor quality, rushed, and more often than not, left to rot (because people have moved on to newer shiny APIs).

There are only a few high-quality-working-and-well-documented modules.


Having a naked emperor on the server is a diplomatic concession to the fact that there is also a naked emperor on the client. There's nothing anybody can do to put clothes on the Emperor of the Client, while you can dress your Emperor of the Server any way you want.


Which reminds me, wasn't ECMAScript 6 supposed to be finalised in December[1]?

1: http://en.wikipedia.org/wiki/ECMAScript#ECMAScript_Harmony_....


The target date for ES6 has slipped to December 2014:

https://github.com/rwaldron/tc39-notes/blob/48c5d285bf8bf0c4...


Promises are actually very simple - so fundamental that you can think of them as dual to function composition. By a dual to function composition I mean that you can mechanically convert async promise code to everyday synchronous function calls - including everyday exceptions. That means that the code can literally look exactly the same whether it is async or sync. The trick is you need either macros or monads to do this.

(This is kind of the whole point of monads - you could write a piece of code by composing a few functions, and then make it asynchronous without breaking the function composition! No need to rewrite your code to use callbacks; the same code will just work.)
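To make the parallel concrete, a sketch with made-up function names: the synchronous pipeline and the promise pipeline have the same shape, with `.then` playing the role of function application.

```javascript
function parse(s) { return JSON.parse(s); }
function validate(o) {
  if (o.id == null) throw new Error('missing id');
  return o;
}

// Synchronous: ordinary function composition.
var userSync = validate(parse('{"id": 1}'));

// Asynchronous: the same pipeline composed with .then instead of
// direct application; throw becomes rejection, return becomes resolution.
var userAsync = Promise.resolve('{"id": 1}')
  .then(parse)
  .then(validate);
```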


Promises are _not_ dual to function composition, unless you mean by “dual” something very different from the common mathematical meaning.


Is this currently possible with NodeJS?

Javascript has none of the following:

- Macros

- Monads

- call/cc


Well, on the other hand, languages that DO have macros, monads, and call/cc built in don't power the web and haven't contributed to much of the essential software we use every day, from our OSes to stuff like Chrome, Photoshop, LaTeX, etc.

In fact, most of it was and is written in C/C++, horror of horrors, and fewer in Pascal / Java.

The notable exceptions I can think of are Emacs (partly), Autocad (partly) and Pandoc.

So I guess there's a tradeoff: use cool languages and not contribute much usable software, or use some so-so language and create stuff people use.


Nitpicking: LaTeX is a bad example because it's built on TeX, which does have macros. :)

But I don't want to go down the rabbit hole of saying that "TeX & Emacs are somehow more powerful than Chrome because their systems use macros."

Granted, sure. People cope well with not being able to express their problems the way they want to. The existence of so much high-quality C/C++/blub software proves this fact.

But for those of us who don't want to change our thinking, tools that fundamentally extend the language are great to have in our toolbox. Maybe it's our weakness, that we feel uncomfortable when we can't express the problem the way we want. Or maybe it's our strength. I don't know.


>Nitpicking: LaTeX is a bad example because it's built on TeX, which does have macros. :)

True, had TeX itself in mind!


If something this powerful was reasonable to do in javascript, it would be mainstream knowledge by now.

Clojurescript's core.async is a macro approach which is not quite the same but very similar to what I describe.

You could probably do it in sweet.js (macros for JavaScript that are expanded at compile time) but as far as I know, nobody is actually doing it so there are probably showstoppers.

A monadic approach requires all of your code to be pure-functional, which is unreasonable because you'd have to rewrite all of your dependencies to be pure.


Monads are not a language feature, they are a library feature.


Only true in the most pedantic sense.

For monads to work, all your code needs to be pure (in the FP sense). So if you want to do anything substantial with monads in an imperative language, you need to rewrite the entire ecosystem of libraries. That would cost quite a bit more money than your average organization can afford ;)


Also, you of course do not need purity to get monads to “work”. You can do them in SML or OCaml, for instance, which are not pure by a long shot; by the definition of “pure” which I employ, Haskell isn’t either.

Lack of purity is a defect which bites you whether or not you have monads: so if you're perfectly happy dealing with it in your existing code base (like most of us are), making explicit the “monadic” patterns in your codebase will not make it worse, and may improve things a bit.


Also has nothing to do with the language, mate. Your argument does nothing but agree with my point :)


Ok, true. They're certainly much nicer to use if you have nice syntax for them though. :)


Yeah I just started working on a node project and can't help but feel like I have been transported to a time 40 years ago where my operating system didn't have a process scheduler and I have to do it all myself. I literally have to think about how asynchronous behavior works every 2 or 3 lines of code. It's sheer madness.


Your comment is ironic because C# implemented async in 2012. Async thinking takes a while to get used to, but that doesn't mean it's more difficult than threads or mutexes and whatnot. In fact, I think it's simpler. You don't have to learn anything properly new if you already know how functions work.


we haven't needed complicated threading on the web server for the last 20 years, because there is a natural isolation model that works most of the time: the request. node turns this on its head and says now you will need to think about concurrency always, even if you don't have a problem that naturally warrants it. that's the issue. if i want to write a goddamn shell script I need to think about concurrency. wat?

whether callbacks vs mutexes vs actors vs STM are the best way to handle concurrency is debatable. but what's not debatable is that comparing concurrent vs non-concurrent code, the latter is always easier for people to write and reason about, so you should tend to want to work in environments that are non-concurrent by default and let you focus just on the parts where you need to deal with concurrency.


Unfortunately, the traditional 'request' model ran into some practical scalability limits - colloquially referred to as the C10K problem. The problem is compounded by long-lived requests (comet-style push is still the best way to get around corporate firewall issues) which can block threads unnecessarily for long periods of time. We've largely solved this through async event-driven architectures and green threads. So no, things weren't always so rosy.

By the way, one of the major benefits of Node is that it allows you to use the same libraries in your server and web-applications. So even without the scalability concerns, there would be good reasons to go with Node.


Did you read what I wrote? Yes, there are use-cases where the request->response model breaks down for web applications. I'll even admit that the set of applications where this is the case is increasing. But my point is that these are the exception to the rule, they are against the grain of the HTTP protocol anyway.

The Node environment seems to have shifted from "use this for your one-off, high performance, I/O dominated real time service" to "hey, you know Javascript, build all your webapps in node, build your command line tools in node, build everything in node, since it's the lingua franca of the web." This latter trend sucks, and results in things that to me seem absolutely bonkers, like the Vows test framework.


I'd say it paints a rosy picture for Node to put async event-driven architectures on the same level as green threads. They are not even close: in one you have to manage all the details yourself, whereas the other results in a far cleaner implementation with far fewer opportunities for bugs.


Okay, now I understand your critique. I don't think it's a very good point though: Node.js is not designed for extremely simple synchronous web servers (and neither is JavaScript).


That's a generational perspective. For developers fresh out of college, promises, async, continuations and so forth are the norm now.


Did you see my comment below with the ToffeeScript example?


How long? Until one of the other languages (say... ClojureScript) develops a large enough ecosystem and best practices around it to allow portions of code to be shared between the client and server.

Oh, that also means that this given system has to reach critical mass. Oh, and since everything's server and client side, the compilation has to be rock solid and be easy enough to understand without reverting to javascript in frustration.

I miss programming in Node, if for nothing else than that I hate having to keep up with so many paradigms and switch between the two. I only have X amount of hours to keep up. When I move back to heavy development in a language after even a few days off, there is friction.

And as someone who writes both front end and back end code, it gets frustrating. I'm not as good as I could be, given the hours in the day I have.


Some of those "things" you counted are based on generators, which let you do exactly what you want.

Example: https://github.com/spion/async-compare/blob/master/examples/...

With promises, generators are not really necessary -- especially if you add arrow function syntax into the mix. https://gist.github.com/spion/8254661

(Note: that particular method in doxbee has long since been refactored to something saner with better separation of concerns :)


The emperor may have no clothes, but what's so great about clothes? (If God meant people to be naked, they'd be born that way! Same goes for programming languages.)

The naked Emperor JavaScript runs everywhere, and runs fast, and isn't going away. He's streaking at lightning speed all over the kingdom, everybody's seen him, and he doesn't stay still.

The emperor is naked. Long run the emperor.


You mean like fibers? :)

https://github.com/laverdet/node-fibers

This is what Meteor uses on the server side, and it's a pleasure. Everything is suddenly synchronous.


Threads are quite useful, and Javascript has them. However, if what you have is a computation you have the inputs to start now and won't need the result of for a while, lazy evaluation is a far more natural representation for that than a thread.

Ideally, a language ought to have one representation for lazy evaluation, not a dozen; perhaps at some point a new standard will come out that standardizes some subset of lazy evaluation that has won out. Welcome to Javascript.
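One minimal representation of that "start now, use later" idea (a sketch, not any standard API) is a memoized thunk: the computation runs at most once, on first demand, and no thread is involved.

```javascript
// lazy(fn) returns a thunk: calling it the first time runs fn and
// caches the result; later calls return the cached value unchanged.
function lazy(fn) {
  var computed = false, value;
  return function () {
    if (!computed) { value = fn(); computed = true; }
    return value;
  };
}
```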


Javascript does not have threads. There are some Javascript environments that may or may not have threads in some form. Rhino, WebWorkers.

But Javascript does not have a builtin concept of threads.


(Edit: Oops! Sorry, I see that you're already familiar with continuations, but let me explain them for those who aren't)

Sometimes, fundamental switches in thinking are worth it. :)

For example, continuations would be perfect here, and are a wonderful tool for simplifying "nested callback hell" type problems.

But why should a web developer care about continuations?

Consider Paul Graham's Arc challenge: Build a web page that presents a form to the user. After they submit the form, a new page presents them with a link. The link takes them to a third page where the submitted value is presented back to the user. The restriction is that the input field must not be passed in via the URL.

Right now, the user must write in "continuation-passing style", which means threading callbacks, like this:

   function serve(req,res) {
      askUserForValue(req,res, function(value){
          showLink(req, res, function(){
              render(res, {"user-value": value});
          });
      });
   }
Or perhaps the user could somehow stuff the variable into a "session" library with three separate URL routes:

    function serve(req, res) {
       // render the form with the POST action of /submit
    }

    function submit(req, res, value) {
       req.session["uservalue"] = value;
       // render the link that sends the user to /view
    }

    function view(req, res) {
       render(res, {"user-value": req.session["uservalue"]});
    }
This one is even more fragmented since you have three separate functions that model one user interaction flow! I certainly like it less than the callback example -- at least I can keep one in my head.

We can do better though. Continuations let you write this:

   function serve(req, res) {
      var value = askUserForValue();
      showLink();
      render(res, {"user-value": value});
   }
If node.js supported continuations, the askUserForValue() function would respond with the form and suspend the serve function right then and there. After the user made a new HTTP call, the suspended askUserForValue() call would return to serve() and its return value would be placed into our variable. Then, the showLink() function could render the link and suspend the function again. Once the user makes another HTTP request, the function resumes, and so on.

The server would have to store the state of the computation itself to disk so it could continue later.

With continuations, we can pretend the user's web browser is the program counter. This frees my mind up. First, I don't have to keep callbacks in my head since there aren't any callbacks anymore. Second, stack traces work the way I expect because functions explicitly call each other. Third, if askUserForValue() throws an exception because the user input a string where I expected a number, no problem! Just wrap it in a try{} block that asks the user to enter a valid value, as if I'm writing a text program that calls readline().

Also note that this is different from just letting each user interaction be in its own thread! Consider:

1. Continuations should be serializable to disk. You wouldn't want blocked threads to just pile up as more users visit your website, and if your server crashes, you certainly wouldn't want to forget their state.

2. The browser "back" button would have to step a program backwards in time; something that is unsupported in a naïve thread-equals-user-interaction system. (E.g. how many of you have booked an airplane ticket with Travelocity/Expedia/whatever, pressed "Back", and then got a big error because the server didn't know how to step backwards through its user stack?) But the ability to save continuations to disk is the ability to save the current state of the program to disk, which makes stepping back in time easy: you can simply reload what was about to happen, just like savestates in a video game emulator.

This is not a new idea. Many Common Lisp and Smalltalk web frameworks work well with continuations and have done so for years (the first Seaside release was in 2002, I'm not kidding you!)

Racket, a great dialect of Scheme, ships with such a web framework out of the box. Here's my implementation of the Arc challenge in Racket. Note how the 'start' function is implemented: https://gist.github.com/gcr/ad116f565e105c8b2e0d

It's super easy to try this out and you don't need to fiddle with dependencies or anything. Just download Racket from http://racket-lang.org/ , copy+paste the code into DrRacket's giant text box on the top, press Run, and your web browser will pop up with the example.


This is not possible with promises or generators, because their flow is unidirectional (so no back button). It may be possible to come up with an abstraction that works like promises (.then chaining) but can be rewound, in which case the code would look like this:

  askUserForValue()
    .then(value => showLink()
      .then(_ => render(res, {"user-value": value})));
In fact I think something like this could probably be built with monadic observables and a nice library. The only thing you couldn't do is automagically keep intermediate state anywhere else other than in memory :/

The thing is, isn't this just a pipe dream? The moment you need to interact with an external system, like say an SQL database, you can say good-bye to rewinding, or be fine with the fact that you might commit a transaction twice, or otherwise come up with a complex method to also rewind the database state, or write your own database that supports this :)

Edit: Also, these days this approach is largely unnecessary. Now the entire thing would work on the client and the server would only be responsible to provide an API :P


Your last paragraph is why Racket provides ways of invalidating "in-progress" continuations. Ideally, in any user flow, you'd want to have the only side-effect operation occur at the end of the flow (User clicks the 'Book ticket' or 'Process transaction' button); the hard part is usually modeling it beforehand in a way that allows the user to navigate naturally back and forth between the parts of the application/open it up in new tabs or whatever.

But sure, there are limitations of course. :)


I see. It definitely looks powerful. Do you know if this is possible in any non-LISP language?

I mean, invalidating of the "observable" chain could also be done in JS, but I don't think that I know of other languages that could automatically transfer the state to the client.


I'm not sure. I honestly think it's a culture thing. Many Lisp communities embrace continuations, but everyone else rightly shies away because they make your head hurt.

Ruby(!) supports them, so it might be possible for Rails to create something like the pattern I mentioned. In fact, Ruby people use continuations for a few things like the built-in Generator class and making "restartable exceptions" so the thrower can figure out how to deal with the problem rather than the catcher, which is sometimes quite useful: http://chneukirchen.org/blog/archive/2005/03/restartable-exc...

Properly implementing continuations is a huge implementation burden though, which is another reason why they're uncommon. The language would have to make the stack frame itself sliceable and serializable, and it's hard to get good performance when the call stack itself becomes a "first class" language construct. Consider a duplicate() function that saves the state of a program and then spawns ten threads that each reload that state! Have fun implementing continuations in a way that supports that, Guido! :)


Node.js 0.11 supports generators, which appear to be very similar to what you're describing: http://blog.stevensanderson.com/2013/12/21/experiments-with-...


Similar, but less general.

Serializable continuations can be saved to disk. You can reinstate a completed continuation (eg. go back from the final page. After clicking the link again, the page completes again. Or try clicking the link in several tabs; the page concurrently completes several times as you expect.) Generators make this pattern impossible; Continuations give it to you for free.

Second, by being saved on disk, they aren't crammed up in the event loop so they aren't a RAM leak. Racket even provides a way of having the client store all state if you wish; such "stateless" continuations have no memory or disk overhead on a long-running server and never expire until you change your site's source code.


In short: Generators are useful when you use callbacks to wait for the server to finish doing something (eg. finish a DB update or an HTTP request)

Continuations are useful when you're waiting for the user to finish doing something (eg. an HTTP response, not a request).

So they do solve different problems.


I find one of your premises very thin: browser implementors are implementing DOM promises, so promises were clearly not invented solely to make up for the difficulties of Node.


Promises have two awesome things:

* With callbacks, the interface of a function hardcodes whether the function is strictly synchronous or may use any async code. Changing one little helper function from sync to async may have a ripple effect throughout the entire codebase (you have to add a callback to all its callers, and all their callers, and their callers...).

Promises fix that. Sync and async functions take arguments and return values the same way, so when you change sync to async you only need to change source and consumer of the value, and nothing in between (e.g. same code can cache results of (memoize) promise-returning functions and sync functions).
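A sketch of that memoize point (function names made up): normalizing every result with `Promise.resolve` lets one cache serve sync and promise-returning functions alike.

```javascript
// memoize: cache results by key; works whether fn returns a plain
// value or a promise, because Promise.resolve normalizes both into a
// promise before it goes in the cache.
function memoize(fn) {
  var cache = Object.create(null);
  return function (key) {
    if (!(key in cache)) cache[key] = Promise.resolve(fn(key));
    return cache[key];
  };
}
```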

* With ES6 yield async calls look and behave almost like sync calls. No callbacks. You don't have to use weird error handling schemes — just use try/catch! You don't have to have a library to iterate over things — just use for()!

A call that used to look like:

    use(compute(arg))
was twisted by async to be:

    compute(arg, use)
Promises change it to:

    compute(arg).then(use)
and yield changes it back to nearly original form:

    use(yield compute(arg))
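For readers wondering what drives that `yield`: a minimal sketch of the runner pattern, in the spirit of libraries like co or Q.async but not their actual code (error propagation via gen.throw is omitted for brevity).

```javascript
// run: drive a generator that yields promises, resuming it with each
// resolved value; returns a promise for the generator's return value.
function run(genFn) {
  var gen = genFn();
  function step(input) {
    var next = gen.next(input);
    if (next.done) return Promise.resolve(next.value);
    return Promise.resolve(next.value).then(step);
  }
  return step(undefined);
}

// Usage, matching the shape above:
// run(function* () {
//   use(yield compute(arg));
// });
```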


> if (err) return callback(err)

> That line is haunting me in my dreams now. What happened to the DRY principle?

  iferr = (errfn, succfn) -> (err, a...) ->
    if err? then errfn err else succfn a...

  show_user = (id, cb) ->
    db.query ..., iferr cb, (user) ->
      ...
FTFY, no promises needed. (not to say that promises don't have other advantages, but that specific complaint is easily fixable with a higher-order function to abstract it away)

(JavaScript version: http://coffeescript.org/#try:%20%20iferr%20%3D%20(succfn%2C%...)
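For those who don't read CoffeeScript, a plain-JavaScript sketch of the same helper (error branch first, matching the `iferr cb, (user) ->` usage above; the db.query call is hypothetical):

```javascript
// iferr: split a node-style callback into an error handler and a
// success handler; errors go to errfn, results to succfn.
function iferr(errfn, succfn) {
  return function (err) {
    if (err != null) return errfn(err);
    return succfn.apply(null, Array.prototype.slice.call(arguments, 1));
  };
}

// Hypothetical usage:
// db.query(sql, iferr(cb, function (user) { /* ... */ }));
```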


No it's not. I address this immediately afterwards (below "But spion, why don't you wrap your callbacks?"), but basically, this simple wrapper makes stack traces unusable.

Bluebird (and also Q) stitches together stack traces from multiple previous events. This is amazingly awesome.


I see stuff like:

    promise.when(function(value){
       //blah blah blah
    })
    .error(function(err){
       //handle it
    })
and I think to myself I'm seeing text replacements:

    "promise.when(function(value)" = "try"
    ".error(function" = "catch"
    "})" = "}"
So how is this actually different from the code we're supposedly too lazy to write or too stupid to figure out or something in the first place, thus requiring this slightly more verbose form?

And where is this all going? Like, what's the right granularity of lines-of-code to when chains? If programmers are too lazy to try/catch, then why aren't we too lazy to when/then/error a gigantic block, or you know, not at all? And if we were to instead [w|t]hen/error every line, then I think about DRY and it occurs to me the most likely next step is a language that "implicitly whens your code", that every line is written like regular procedural code, but automagically has "when" inserted in the middle.

Or in other words, you know, continuations.

But it occurs to me that the only reason any of this stuff is of any interest is because JavaScript, and therefore Node, still lacks real MT support, so you have to jump through these infinitely fractal spilling callback tangos to fake it.

Web Workers was 2009, people. 5 YEARS MAN. Five years that we've been waiting to hear anything about having sane primitives for MT.


Nope, `doStuff().then(function(value){` is not equivalent to `try`.

A closer equivalent would be `try { var value = doStuff();`, which has similar verbosity.

ES6 will have generators and "arrow" function syntax. Meaning:

  doThing().then(function(value){
     //blah blah blah
  })
  .catch(function(err){
     //handle it
  })
becomes either:

  try { 
    var value = yield doThing();
    //blah blah blah
  } catch (err) {
    //handle it
  }
or

  doThing().then(value => /* blah blah blah */)
    .catch(err => /* handle it */);
meaning you can pick your paradigm :) Except that the functional one composes slightly better, perhaps.

  doThing().then(value => /* blah blah blah */)
    .catch(only(ParseError, err => /* handle it */));
vs

  try { 
    var value = yield doThing();
    //blah blah blah
  } catch (err) {
    only(err, ParseError); // argument repetition
    //handle it
  }


Uhhh, you're not convincing me that you've said anything meaningful towards improving software quality.


As opposed to what, threads? They have contributed towards improving software quality? In most languages with mutable state they've been a disaster -- correct me if I'm wrong.

Adding threads in JavaScript would also be a disaster. Practically everything is mutable and there are absolutely no locking mechanisms.


One possible solution to introduce threads to JS is to isolate threads from each other and use only message-passing:

https://github.com/audreyt/node-webworker-threads/


Sorry, but share-nothing threads that can only communicate via (afaics) serialized message passing are no longer threads. There is very little benefit to those compared to just spawning child processes.


I hope that everyone who writes JavaScript will do themselves a favor and try out ToffeeScript. This offshoot of CoffeeScript simplifies things. For example:

    if fs.exists! thefile
      e, datastr = fs.readFile! thefile, 'utf8'
    else
      datastr = 'not found'
    success = db.update! 'somekey', datastr
    res.end success
So in that example, even though there are three callbacks, it looks like a normal synchronous flow.


I haven't heard of ToffeeScript before, but this is pretty much how Erlang works. Whenever you make a blocking call, all of the async stuff happens under the hood, so that you just deal with sequential code.

Erlang's concurrency model deals with lots of lightweight processes. Rather than having callbacks it uses a message system. Whenever a process waits for a message it yields so other processes can be executed.

In your example there would be a process separate from the main[0] process that handles the database connection. When you request the update to the database, you are just sending a message to the database connection process. This will receive the message and handle the update when it can[1], and once complete, send you a message back with the result of the update. During the waiting period the VM will continue executing other processes. No callbacks needed :)

[0] Erlang concurrency is based upon the actor model. There isn't really a main process, but for the sake of explanation...

[1] The database connection process will actually block while making the update, so if you have 10,000 processes calling this, you will have a bottleneck. As such you should have a pool of database connection processes, rather than just one.


Yeah Erlang looks awesome and I should definitely try to learn it at some point.

To me Node.js and ToffeeScript seem a lot more straightforward and have the advantage of the whole package system which I believe is more advanced, as well as the npm ecosystem with 54,000 packages.

But anyway it would be good to actually learn Erlang and use it in some projects so that I can really evaluate it.

I'm not sure, but I think it might be a lot more resource-efficient, for one thing.


Also IcedCoffeeScript

http://maxtaco.github.io/coffee-script/

It's just CoffeeScript with await and defer keywords, essentially. I personally would never touch ToffeeScript/Coco and friends, as they differ too much from CoffeeScript, which already segregates the JS userbase enough. I figure IcedCoffeeScript is the lesser evil for compile-time control flow, as it's not adding a bunch of other changes to the language that users have to understand to contribute to a project written in it. Another issue with these CS spinoffs: what if the main contributor says "Fuck it" and bails? CS itself is big enough to have a lot of talented contributors. What happens when CS moves over to CS 2.0, i.e. coffeescript-redux? Do these spinoffs need to be recoded?

That being said I probably won't use Iced either, as much as I want to... Maybe I'm all wrong about this? Maybe this userbase segregation crap is not as big a deal as I may think.


I only use that one feature from ToffeeScript, as shown in my example. Obviously much cleaner.


Have you tried IcedCoffeescript? http://maxtaco.github.io/coffee-script/ if so, how do you think it compares to ToffeeScript?


Yes I have used IcedCoffeeScript. The main idea with IcedCoffeeScript is very similar to the main advantage of ToffeeScript. Namely synchronous looking async using await and defer.

ToffeeScript is definitely superior because it allows you to do that using a much cleaner syntax, just using the ! operator. ToffeeScript also has some other features.


Yes, the syntax does look cleaner. I wish the documentation was a little better, but hey, beggars can't be choosers :-)


Slightly off topic, but I've been working in Haskell, and recently realized how the laziness of the language means everything is basically asynchronous by default, but you can still write your code in a synchronous fashion. No real need for promises or callbacks because lazy values are the promises. Well, when using lazy IO, at least.

I'd actually much prefer lazy values be put into JavaScript than almost any other feature (real module support is another big one).


Then again, Simon Peyton Jones has said that were he to make a new Haskell, it would have strict evaluation. As far as I know, lazy IO is considered a pain in the butt. That being said, laziness does have its merits and some ideas can be elegantly stated in a lazy language.


The default Lazy IO issue has been addressed in pipes-bytestring:

http://www.haskellforall.com/2013/09/perfect-streaming-using...

The pipes ecosystem ( http://www.haskell.org/haskellwiki/Pipes ) is working well to offer {effects, streaming, composability} without the resource leak issues introduced by lazy IO.


Do you have a source on this? For me, laziness is the single strongest thing going for Haskell.


It's from a retrospective on Haskell: www.cs.nott.ac.uk/~gmh/appsem-slides/peytonjones.ppt

There seem to be two arguments against laziness. First, it makes reasoning about performance hard. Second, forcing strict evaluation is tricky, maybe harder than it would be to do optional lazy evaluation in a strict language.


See my comment below about ToffeeScript which compiles to JavaScript.


The problem with discussions of promises vs ... is there's usually a great deal of misunderstanding about what problem promises are solving: enforcement of the callback contract. I could go on for hours about the subtleties in the conversation, but the gist of it boils down to this:

    Node callbacks have a strict but entirely unenforced (in userland) contract:
The contract:

    * Callbacks should be called OAOO (once and only once)

    * Never throw (explicitly or implicitly) after origin tick (aka pass post-tick errors to callback)
There is also an increasingly common informal contract:

    * Never throw ever (aka pass ALL errors to the callback)
But there are a couple of issues that get in the way of successful enforcement of this contract:

    * When external libraries violate this contract (library A calls library B,
      but library B violates the contract in a way that makes it difficult for
      library A to easily maintain the contract)

    * Asynchronous errors

    * Crash-worthy exceptions
Inevitably, this all continually comes back to error handling over and over again, with the core issue being that node.js has 2 divergent methods for error handling: exceptions (try/catch/throw) and error passing (callback(err)). Exceptions require opt-in error handling (crash by default) while error passing requires opt-out error handling (crash on demand) "and never the twain shall meet."

Promises attempt to resolve this by coercing all exceptions to error passing, but not all exceptions are safe to be passed. Specifically, any exception originating from core that is not an invalid argument exception (think 4XX vs 5XX class errors) cannot safely be caught / coerced / continued on. Also, not all errors passed to callbacks are even theoretically passable as the unofficial policy of core is that core callback errors are only distinguishable from exceptions in that they occur after the origin tick. In other words, they are equally crash-worthy and equally non-crash-worthy.

So promises are an improvement because when used pervasively, they enforce the formal and informal callback contracts, while providing a continuable-like representation of the value that can be passed around.

This benefit of contract enforcement is entirely independent of the specific API implementations.
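To illustrate (a minimal sketch, not the stepup/trycatch APIs): a promise can only settle once, so even a trivial callback-to-promise wrapper enforces the call-once half of the contract for free:

```javascript
// Minimal callback-to-promise wrapper. Because a Promise settles only
// once, a misbehaving library that calls its callback twice cannot fire
// our handlers twice -- the contract is enforced by the promise itself.
function promisify(fn) {
  return function (...args) {
    return new Promise((resolve, reject) => {
      fn(...args, (err, value) => {
        if (err) reject(err);
        else resolve(value);
      });
    });
  };
}
```

The returned promise is also the passable value representation mentioned above: you can hand it around and attach handlers later.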

The stepup[1] library trivially accomplishes contract enforcement by integrating the async trycatch[2] module without using promises or requiring pervasive usage. I've used them both for years now in various professional projects, with trycatch in use here at LinkedIn. Additionally, stepup could easily return a continuable to support a passable value representation.

Async generators will solve most of this though, allowing node.js to ditch the error passing for exception handling, but since Error subclassing and typed catches are basically not supported in node.js the language keeps getting in the way of a satisfactory complete solution. This is a whole other discussion.

In conclusion, error handling in node.js is an undesigned mess which the core contributors don't even seem to fully understand or be aware of[3,4]. FWIW, node.js' crash-on-error design is a DoS liability, which spion addresses in the article, and which LinkedIn uses trycatch to avoid.

[1] https://npmjs.org/package/stepup

[2] https://npmjs.org/package/trycatch

[3] https://github.com/joyent/node/issues/5114

[4] https://github.com/joyent/node/issues/5149


> Promises attempt to resolve this by coercing all exceptions to error passing, but not all exceptions are safe to be passed. Specifically, any exception originating from core that is not an invalid argument exception (think 4XX vs 5XX class errors) cannot safely be caught / coerced / continued on. Also, not all errors passed to callbacks are even theoretically passable as the unofficial policy of core is that core callback errors are only distinguishable from exceptions in that they occur after the origin tick. In other words, they are equally crash-worthy and equally non-crash-worthy.

This is not true. With promises, the wrapper of core functionality is left up to you. You can either write a crashy wrapper, an uncrashy wrapper, or a wrapper which picks whether to crash or not depending on the kind of error. I wrote more about this here [1]

Most default wrappers provided by promise libraries catch all synchronous errors but don't do anything with asynchronous errors. So far, this seems to be a fine default -- most unrecoverable, state-corrupting thrown errors in node core are asynchronous, and all the invalid argument errors are synchronous. But even if that weren't the case, it would be a simple matter to write a more specific wrapper.
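For instance, one possible crashy wrapper (a sketch assuming a node-style `fn(...args, cb)` signature, not any particular library's API): synchronous throws are rethrown on the next tick, so they crash the process instead of being swallowed into a rejection:

```javascript
function crashyWrap(fn) {
  return function (...args) {
    return new Promise((resolve, reject) => {
      try {
        fn(...args, (err, value) => (err ? reject(err) : resolve(value)));
      } catch (e) {
        // escape the promise machinery: rethrowing outside any try/catch
        // crashes the process, just like a plain callback call site would
        setImmediate(() => { throw e; });
      }
    });
  };
}
```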

[1]: https://github.com/petkaantonov/bluebird/issues/51#issuecomm...


You're talking about the core/userland boundary beyond the origin tick, and we're both right. Promises coerce caught exceptions occurring on the origin tick to errors (bad), while allowing non-caught async errors to be handled in a custom manner as you point out.

The comments following your linked comment address this nuance. I agree with Raynos that the core of the issue is as I point out here, 2 divergent incompatible error handling mechanisms, with both of them fundamentally broken:

* error passing fails because we don't have CPS due to lack of Proper Tail Calls

* throwing fails because we lack async try/catch or at least with async generators we lack a performant try/catch or with bluebird-like optimizations (hacks) we lack typed catch and Error.create

The latter is far closer to a consistent error handling pattern than the former, which promises implement (poorly and verbosely IMO). Additionally, async generators also nicely address the issue of slicing your userland stack away from core, so you get a two for one.
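For context, the generator-based pattern in question needs a runner that pumps yielded promises back into the generator; a minimal sketch of what libraries like co or Bluebird's coroutine do:

```javascript
function run(genFn) {
  const gen = genFn();
  function step(advance) {
    let result;
    try {
      result = advance(); // gen.next(v) or gen.throw(e)
    } catch (e) {
      return Promise.reject(e); // generator threw out: reject the run
    }
    if (result.done) return Promise.resolve(result.value);
    return Promise.resolve(result.value).then(
      v => step(() => gen.next(v)),
      e => step(() => gen.throw(e)) // makes try/catch around `yield` work
    );
  }
  return step(() => gen.next());
}
```

Note how rejections are thrown back into the generator, which is exactly what lets a plain try/catch surround a `yield`.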


Looks to me like you didn't even read the crashy wrapper. With it, it's possible to control what gets caught and what doesn't.

Although, I would really like to see a library that is written so badly that even catching exceptions thrown in the same tick would make it unusable.

The default is fine, and exceptional cases can also be covered on demand. (The presented `using` and `acquire` functions might even let you clean up after some "non-recoverable" errors!)


FWICT, I fully understand the example and addressed your points directly. You can wrap callbacks to catch everything or always crash, aka

     while allowing non-caught async errors to be handled in a custom manner as you point out.
And the referenced github issue furthers my point,

   Promises coerce caught exceptions occurring on the origin tick to errors (bad)
first via the OP[1]

    Is it possible to turn try { ... } catch off in the bluebird promise implementation?
and then via Dominic Denicola's characteristic snark[2]

    class PromiseThatMissesThePointOfPromises {
    ...
Please point out anything I am misunderstanding.

[1] https://github.com/petkaantonov/bluebird/issues/51

[2] https://github.com/petkaantonov/bluebird/issues/51#issuecomm...


Then I don't see how you can claim that the fact that "Promises coerce caught exceptions occurring on the origin tick to errors" is bad. This coerces all the invalid argument exceptions into errors. Is that bad? Or are there other errors which should not be caught but are thrown in this manner, and if so, can you provide (links to) examples of which ones and why?

These should be quite rare and are addressable by wrapping that badly behaving function with a crashy wrapper that throws asynchronously...


I addressed your point and you are acknowledging it, and I can see you understand it from your various other responses. Promises catch "synchronous" exceptions, however unlikely. You can optionally rethrow them / manually crash, which you would need to. Your response is also the most common response: "don't worry about them and deal with them when they come up."

I prefer to avoid this mentality and it motivates the difference in our opinions.


No, I'm just saying that undocumented inconsistency cannot be addressed in any way other than dealing with it on a case-by-case basis, as it comes up.


I'm not a js guy, but this concept sounds a lot like Twisted's (python) concept of "deferreds". The idea is that you "defer" caring about some asynchronous process, and then attach "callbacks" and "errbacks" to the "deferred". You essentially create a chain of callbacks or errbacks which gets called depending on if a callback is fired or an exception is raised. Overall it's a pretty great concept, but debugging can be a real pain.
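For comparison, the JS-promise analogue of that callback/errback chain (a self-contained sketch):

```javascript
Promise.resolve('{"n": 1}')
  .then(JSON.parse)           // "callback": runs on success
  .then(obj => obj.n + 1)     // next callback in the chain
  .catch(err => 'fallback')   // "errback": runs only if something above threw
  .then(v => console.log(v)); // → 2
```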


Yup, from what I recall of history, both Twisted Deferreds and JavaScript promises have their heritage in E's promises.



Debugging with Bluebird (mentioned in the article) is not a pain because it stitches stack traces across events for you when you run it in development mode :)


And what's wrong with reinventing basic programming-language design a million different ways? Are you complaining that it's too much like Lisp?


Aren't they made to be broken?

(Sorry, couldn't help it)


JavaScript promises. Got it.

Wait - crashing processes? Denial of service? Wha? Why would JavaScript?...

Oh, this is about Node. Goodbye.



