Long live the callbacks (trevnorris.com)
60 points by austengary on Aug 22, 2013 | 42 comments



This article is misinformed... I don't know where to start.

As others have said, the shallow nature of his argument is indicated by the complete avoidance of error handling. Callbacks spin out of control very quickly, not because you have to handle every single callback, but because you have to handle an error at every single point as well. Mix in some more advanced control flow like `map`, and it's insane.
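The "error at every single point" complaint can be sketched concretely. The three steps and their names below are hypothetical (and implemented synchronously purely for illustration), but the shape is the standard Node `(err, result)` convention:

```javascript
// Hypothetical async steps, implemented synchronously for illustration;
// each callback follows Node's (err, result) convention.
function loadUser(id, cb) { cb(null, { id: id, file: 'prefs.json' }); }
function readPrefs(file, cb) { cb(null, { theme: 'dark' }); }
function applyTheme(theme, cb) { cb(null, 'applied:' + theme); }

function setup(id, done) {
  loadUser(id, function (err, user) {
    if (err) return done(err);               // error check #1
    readPrefs(user.file, function (err, prefs) {
      if (err) return done(err);             // error check #2 -- same boilerplate
      applyTheme(prefs.theme, function (err, result) {
        if (err) return done(err);           // error check #3 -- and again
        done(null, result);
      });
    });
  });
}
```

Forget one `if (err) return done(err)` line and that error silently disappears, which is exactly the fragility being described.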

Please, please do not heed advice to "not use function closures". That's completely ridiculous and unrelated to callbacks. Every single one of his tests creates a new function for every single iteration. Of course that's terrible if you are iterating 2e4 times. Over here in the real world, my closures are executed oh, about 100 times a second (e.g. completely negligible perf hit). This is a seriously flawed benchmark.

I've seen that a common argument for solving callback hell is to move all the functions up into the same scope and name them. I have no idea how you handle that in real code. It's hard enough to keep track of it in normal callback-style, but forcing me to jump around and create tons of temporary names sounds insane.

Learn you some functional programming, and why it's good. Also learn you some error handling, promises, and why callbacks are terrible.
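For contrast, the same three (hypothetical) steps written against the Promise API, where a single `.catch` replaces the per-level error checks:

```javascript
// Same hypothetical steps as promise-returning functions;
// one .catch at the end handles a failure from any step.
function loadUser(id)      { return Promise.resolve({ id: id, file: 'prefs.json' }); }
function readPrefs(file)   { return Promise.resolve({ theme: 'dark' }); }
function applyTheme(theme) { return Promise.resolve('applied:' + theme); }

var result = loadUser(1)
  .then(function (user)  { return readPrefs(user.file); })
  .then(function (prefs) { return applyTheme(prefs.theme); })
  .catch(function (err)  { console.error('single place to handle errors:', err); });
```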

(note: remember folks, async code is async for a reason. It needs to go fetch data from something very slow, like a network connection or hard disk. It's so silly to benchmark closure creation for IO-bound apps.)


> my closures are executed oh, about 100 times a second (e.g. completely negligible perf hit). This is a seriously flawed benchmark.

Take Node's case where you're accepting HTTP requests. Each request is going to require a new instance, and call into the connection listener. If the connection listener contains all the function declarations for all events attached to the client, then you lose the performance you could have had from V8 reusing a previously optimized function. The benchmark was intentionally made to show the worst case. I don't expect every application to magically run 80% faster just because they moved some function declarations around.


I'm actually surprised at how low the cost of function declarations is. V8 can handle millions of function declarations per second (as you know http://jsperf.com/closure-vs-callback/8). The cost is so low that ArraySort in http://code.google.com/p/v8/source/browse/trunk/src/array.js, which implements Array.prototype.sort() in V8, declares 6 functions inside a function. Sure, you shouldn't define functions in tight loops, but otherwise this seems like unnecessary optimization.


> DON'T USE FUNCTION CLOSURES

Umm... every time you create a function in JavaScript, you create a closure whether you like it or not. (For "root-level" functions, you can think of the closure as being the variables which are global, i.e. belong to the window object.)

What the author really means is, don't define a function 1e6 x 1e3 times instead of just defining it once.

But that has nothing to do with nesting callbacks. In most cases, nested callbacks are only ever called once, because callbacks are being "chained" to each other.

So the performance the author is talking about has nothing to do with closures or even callbacks, but simply the overhead of re-defining functions thousands of times, which is obviously going to have much more overhead.

The right lesson is, nested callbacks are bad for readability, but in most cases have nothing to do with bad performance. The bad performance in this article has nothing to do with callbacks, but is all about the overhead associated with function re-definition.
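The distinction can be shown in a few lines. Both functions below (hypothetical names) compute the same sum; the difference is purely that one re-defines a function object on every pass:

```javascript
// Defined once, at module scope.
function square(n) { return n * n; }

function sumSquaresHoisted(limit) {
  var total = 0;
  for (var i = 0; i < limit; i++) total += square(i);
  return total;
}

function sumSquaresInline(limit) {
  var total = 0;
  for (var i = 0; i < limit; i++) {
    var sq = function (n) { return n * n; }; // new function object every pass
    total += sq(i);
  }
  return total;
}
```

The results are identical; only the allocation behavior differs, which is what the article's benchmark is actually measuring.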


>The bad performance in this article has nothing to do with callbacks, but is all about the overhead associated with function re-definition.

To the author's point, I think they're addressing a common vector of criticism with respect to JS callbacks... Usually when someone is writing their "callbacks are evil" proof, they give the typical "callback pyramid of doom" example to prove their point...the author is just showing that if you do it right and flatten your callback structure out with named functions, etc., it solves a lot of those problems.. I think the author was trying to make the point that callbacks themselves are not the problem, per se, but rather the way they are commonly (and wrongly) used.


Thank you. Yes, that was more along the point. I just get punchy about how I say it after 3am.


Is there some sort of group out there complaining that callbacks are problematic because they make things slow in Node? It certainly has nothing to do with my objections. This seems to verge on a strawman, while doing nothing to address the problems of code comprehensibility, maintainability, composability, testability, proper error handling, or any of the other things that break when you are throwing away your entire stack information every 5 to 10 statements and basically programming with something just one tiny, tiny step above the Gotos that Dijkstra Considered Harmful.

Further, the performance argument is trivially discarded by observing that in real languages like Go or Erlang, the speed of real code is just as fast if not faster than Node. You only get slowdown in Node because you have to pay to implement abstractions at the wrong layer. In the general case, callbacks are slower due to the need to keep creating stack frames over and over, instead of leaping into already-existing ones in progress, and things like Go will always have an advantage there.


> In the general case, callbacks are slower due to the need to keep creating stack frames over and over, instead of leaping into already-existing ones in progress, and things like Go will always have an advantage there.

Theoretically it's the same asymptotic cost. If you have state that has to be saved from I/O operation to I/O operation then you have to store that somewhere. In Go you would store it in the goroutine stack; note that allocating a goroutine stack requires an allocation. In Node you would store it in the callback environment, also requiring an allocation. In theory an optimizing JavaScript engine could reuse the same captured variable environment across multiple closures if it's the same, resulting in the same asymptotic performance as Go.


"In theory an optimizing JavaScript engine could reuse the same captured variable environment across multiple closures if it's the same, resulting in the same asymptotic performance as Go."

I said stack frame, not capture environment. You don't "store it" in the goroutine stack, it is the goroutine stack. Something like Node is going to be constructing and destructing frames much more often.


This is something I tried in the last NodeJS system I wrote. I ended up with dozens more function definitions elsewhere in the code. Those functions were bizarre little things that only ran in a very specific context when several other things were in progress. Defining your callback functions separately just leads to an unmaintainable mess.

It isn't the language I use, but C# really handled this whole situation best with async/await. There you can just keep coding along in the main narrative of execution that you are concerned about even if it means under the hood the code execution stops and other things are done until your await statement continues. The error flow can usually be handled with some sort of exception mechanism. This article and the traditional pyramid of death are both really just hacks to get around lack of a language change like async/await.
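The async/await style described above did eventually land in JavaScript itself (ES2017, well after this thread). A minimal sketch, with a hypothetical `fetchConfig` step standing in for real I/O:

```javascript
// Hypothetical async step standing in for real I/O.
function fetchConfig() { return Promise.resolve({ port: 8080 }); }

async function start() {
  try {
    var config = await fetchConfig(); // execution suspends here, no nesting
    return 'listening on ' + config.port;
  } catch (err) {
    return 'failed: ' + err.message;  // errors flow through ordinary try/catch
  }
}
```

The "main narrative of execution" reads top to bottom, and the error flow is an ordinary exception mechanism, exactly as the comment describes for C#.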


The people who say that the "callback haters" hate it because of indentation miss the point.

The problem is readability and flow. Async makes code flow (and, perhaps more importantly, data flow) harder to grok without knowing the code well. Closures don't help with that, but renaming doesn't help either, especially if your thing is going to be named something like WriteToDiskAfterOtherFileRead.

Things like yield and whatnot are embraced because they allow us to build things with clearer patterns. Callbacks have their place as well, but when you think about how most people reason about IO (even in async fashion), a more expressive tool would be appreciated.


> readability and flow

Correctness is more important than readability, though good readability may sometimes correlate with correctness.


Heh, this is the kind of article that will be humbling to the author to re-read later in his career.


It's not useful to conclude anything about callbacks without considering rigorous error handling, which is of course the whole point of promises [1].

[1] http://domenic.me/2012/10/14/youre-missing-the-point-of-prom...


The name `thenables` is a good catch.


I am sorry to say this, but it seems in this case the developer is doing more harm to the "callbacks are good" and node.js crowd than he is helping.

He thinks he is advocating, but the stuff he is saying is embarrassing, and it will become one of those things everyone links to in ridicule, saying "here is what to expect from using this project". Kind of like Ryan Dahl's famous "Node.js has linear speedup over multiple cores for web servers."

The writing style ("Oh wait, I know. It's because they're Doing it Wrong") doesn't help either. To be credible, that kind of style has to come from the likes of Linus Torvalds and be backed up by serious experience or good solid proof.


I'm glad I know enough to realize how utterly terrible this post is, but I really feel bad for someone who doesn't know better and actually thinks there's something of value here. They will come away a worse JavaScript programmer than they were before.


Could someone who understands these things explain to me why flattening the declaration in the second example appears to make it 85% faster?


JavaScript functions have identity, so creating anonymous functions, even if they don't close over anything, actually requires a garbage-collected object to be allocated, rather than just creating a pointer to a piece of code.

Doing this in inner loops is going to cause a noticeable performance degradation. (Doing it in expensive asynchronous operations, which is what callbacks are often used for, is not going to make a measurable difference.)


Oh. I was expecting it to be some javascript-y reason rather than a mere big-O calculation :-P Forgive me, I flew cross-country overnight and am operating on not much sleep :-)


In the first example the genPrimes function is being created 2e4 times, once for each iteration of the loop. In the second example it's defined once.


Yep, if you use jshint/lint it'll warn you about creating functions within a loop. The performance difference is more to do with that than any inherent flaws in using anonymous functions as callbacks.


In the first example, he creates a unique closure per point instance and places it on the point itself.

In the second example, he creates a single closure and places it on the point's prototype.

Unsurprisingly, creating one object is faster than creating a thousand of them. I'm not sure why the author thinks this has anything to do with closures.
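The two shapes being compared look roughly like this (illustrative constructors, not the article's exact code):

```javascript
// A new function object is allocated for every instance.
function PointA(x, y) {
  this.x = x; this.y = y;
  this.norm = function () {
    return Math.sqrt(this.x * this.x + this.y * this.y);
  };
}

// One function object, shared by all instances via the prototype.
function PointB(x, y) { this.x = x; this.y = y; }
PointB.prototype.norm = function () {
  return Math.sqrt(this.x * this.x + this.y * this.y);
};
```

Every `new PointA(...)` allocates a fresh `norm`, while every `PointB` instance shares a single one; the behavior is otherwise identical.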


This is, in general, good advice for most languages: instead of calling a function multiple times, call it once (and potentially store the result if you want it for later use). A similar, albeit far simpler, example is caching an array's length in a variable when iterating, instead of calling the length method on every pass.
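The length-caching example mentioned above looks like this (worth noting that modern engines often perform this hoisting themselves, so treat it as an illustration of the principle rather than a required micro-optimization):

```javascript
function sumCachedLength(arr) {
  var total = 0;
  // arr.length is read once, not on every iteration.
  for (var i = 0, len = arr.length; i < len; i++) total += arr[i];
  return total;
}
```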


I mostly write javascript all day, and I'm pretty sure that less than 0.01% of the javascript I've ever written is called often enough to make closure allocation time a serious concern. By all means, optimize if you're writing an inner loop body that gets called a million times per pageload, but I think most of us don't need to worry about it.


From my cold-dead hands you'll take closures away from me.

(function(){})()

They're a great part of JS these mathemagics.


An IIFE is more like a block. Check it!

  do end


Ruby blocks?


C blocks:

  {}
  do{}while(0);


I'm not completely getting what parts of them you think make them seem more like C blocks. Could you clear this up a bit?


The use case for an IIFE (if you do not use its return value) is the same as for a block. It just gives you a new lexical scope and you run the code inside it. In Lua or C, you can do this with a block.

If you use the return value of an IIFE, it ends up being more like a block in an expression-oriented language like Scala or Julia.

In general, closures are like closures, but IIFEs just use a closure to do something that every other language does without one.
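Both uses can be sketched side by side (variable names are illustrative):

```javascript
// Scope-only use, like a bare block in C: nothing escapes.
(function () {
  var temp = 42; // invisible outside this IIFE
})();

// Value-returning use, like a block expression in Scala or Julia --
// here the closure also carries private state.
var counter = (function () {
  var count = 0;
  return function () { return ++count; };
})();
```

`temp` simply doesn't exist outside the first IIFE, while `counter` keeps `count` alive but hidden, which is the closure doing real work rather than just opening a scope.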


The last example doesn't seem to work on node 0.10.17:

      if (sieve[(x / 8) >>> 0] & (1 << (x % 8))) {
               ^
    TypeError: Cannot read property '0' of undefined


Nor in 0.8.23. I think the chaining of the constructor to .fill in `var sieve = new SB(len).fill(0xff, 0, len);` is the issue, because SlowBuffer::fill isn't supposed to have a return value.


Ah yeah. I forgot that Buffer#fill() only returns "this" in master. I updated the example to fix this.


I think it might be because fill may not return a value; you probably just need to call sieve.fill on a separate line after you declare it.
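The safe pattern looks like this (sketched with the modern `Buffer.alloc` API for illustration, since `SlowBuffer` is long deprecated; the point is the same: don't chain off `fill()`'s return value):

```javascript
// In Node 0.8/0.10, SlowBuffer#fill returned undefined, so chaining
// `new SB(len).fill(...)` left sieve undefined. Fill on its own line instead.
var len = 10;
var sieve = Buffer.alloc(len);
sieve.fill(0xff, 0, len);
```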


Here's a follow up to this completely misunderstood post:

http://blog.trevnorris.com/2013/08/callbacks-what-i-said-was...


First comment by @lloydwatkin made me smile:

"I don't know who you are. I don't know what you want. If you are looking for generators or promises, I can tell you I haven't coded any. But what I do have are a very particular set of callback skills; skills I have acquired over a very long javascript career. Skills that make me a nightmare for developers like you. If you performance analyse callbacks correctly, that'll be the end of it. I will not look for you, I will not pursue you. But if you don't, I will look for you, I will find you, and I will destroy your benchmarks."


Alright, I'll make sure that from now on I don't have any data from before the async operation that I need to access after the async operation.


Or you can use generators, fibers, or core.async (if you use ClojureScript) and have callbacks without all the pyramid of doom.


"Good. Bad. I'm the guy with the Erlang"


I feel like a hypocrite, voting up pro-callback sentiment and at the same time voting up callback dissidence. I guess I like both sides of the story.


JavaScript needs to be replaced. Just looking at that JS code makes me cringe.



