ES6 Performance (incaseofstairs.com)
64 points by g4k on July 3, 2015 | 23 comments


The trouble with microbenchmarks like these is that JS engines nowadays are often clever enough to simply eliminate the code being tested, or change its character enough that the results are no longer meaningful. Vyacheslav Egorov (a Chrome V8 engineer) has written a bunch of very good blog posts on this. E.g.

http://mrale.ph/blog/2014/02/23/the-black-cat-of-microbenchm...

http://mrale.ph/blog/2012/12/15/microbenchmarks-fairy-tale.h...

Checking the tests here, the "default parameters" section shows some tests being 2000x faster than others, which sounds suspicious. Here's an es5 test case:

    function fn(arg, other) {
      arg = arg === undefined ? 1 : arg;
      other = other === undefined ? 3 : other;
      return other;
    }
    
    test(function() {
      fn();
      fn(2);
      fn(2, 4);
    });
Sure enough, an arbitrarily smart VM could compile that code down to `test();`. How much this and other optimizations affect each test is anyone's guess, but I think it's likely that at least some of these results are dominated by coincidental features of how the tests are written.


I agree. A function (like those in the above benchmark) that has no side effects and returns nothing is doing nothing as far as the compiler is concerned, so it can be optimized away entirely (not run at all). Modern compilers are incredibly good at that kind of stuff.

So what the OP should do is make the function return something, and then make sure that that 'something' somehow survives the test. For example, if that 'something' is a number, add it to a global variable that survives the benchmark, and print that global variable at the end of the benchmark. If it is a string, maybe add its hash code to the global variable, or maybe its length. Anyway, that forces the compiler to emit code that executes the function, so the benchmark will actually run it.
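A minimal sketch of that sink pattern, reusing the ES5 test case from the comment above (the function is changed to return a value, which is accumulated and made observable at the end):

```javascript
// Global sink that survives the benchmark, so the compiler cannot
// prove the function's results are unused and delete the calls.
var sink = 0;

function fn(arg, other) {
  arg = arg === undefined ? 1 : arg;
  other = other === undefined ? 3 : other;
  return arg + other;
}

function bench(iterations) {
  for (var i = 0; i < iterations; i++) {
    // Accumulate the results; dead-code elimination can no longer
    // remove the calls without changing the observable `sink`.
    sink += fn() + fn(2) + fn(2, 4);
  }
}

bench(1000);
console.log(sink); // prints 15000 -- (4 + 5 + 6) per iteration
```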


I don't think that would go far enough. Consider: even if all the functions returned something, a clever compiler might inline the hot function calls, and might then see that most of the argument checks are unnecessary, and eliminate them. Which would mean it's gotten rid of precisely the code that's being benchmarked!

In short, writing JS microbenchmarks that actually test what they claim to test is quite hard.


Although everything you say is true, your comment doesn't really add much value.

As you mentioned, micro-benchmarks are riddled with bias and confusing results.

So putting micro-benchmarks aside and looking at ES5/ES6-induced performance bottlenecks in actual apps, they are clearly present. (Note: this is, of course, limited to features actually implemented.)

Unfortunately, a macro-focused endeavor (only gauging full app performance) isn't as nicely actionable as the micro.

So in practice, to produce high-value, actionable feedback, a hybrid approach utilizing both micro and macro investigation yields the best results.

In addition, the micro-benchmark-vs-optimizing-compiler trap can be mitigated by inspecting the intermediate and final outputs of the optimizing compiler.

Anyways, /rant.

There exists an unfortunate number of JS performance traps that I wish were taken more seriously. Although it's more work, it would be quite valuable for someone to perform a root-cause analysis of the potential bottlenecks brought to light by this post.


> So in practice, to produce high-value, actionable feedback, a hybrid approach utilizing both micro and macro investigation yields the best results.

I disagree. JS is obviously not fast by its nature; the only reason it's sometimes fast is that modern JS engines do incredible optimizations behind the scenes. As such, for real-world performance, writing code that the engine knows how to optimize matters far more than trivia like how Babel transpiled your default parameters. (Even the most microbenchmarked function will run dog-slow if the V8 optimizer bails out.) And if real-world performance is dominated by optimizability, then it follows that microbenchmarks are largely useless unless they happen to get optimized/deoptimized in the same way that your code does.

Incidentally, for V8 at least, using any ES6 feature (even a single "const" that's never used) currently causes the optimizer to bail out. So any question of "ES6-induced bottlenecks" is beside the point; it doesn't matter how fast the ES6 feature is if its mere presence slows down everything else.
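A tiny illustrative sketch (names made up) of the kind of side-by-side comparison this implies; on Node, V8's real tracing flags `--trace-opt` and `--trace-deopt` show which functions the optimizer accepts or bails out of:

```javascript
// Two behaviourally identical functions. The claim above is that,
// in mid-2015 V8, the mere presence of `const` in the second one
// would prevent the optimizing compiler from handling it.
function sumVar(n) {
  var total = 0;
  for (var i = 0; i < n; i++) total += i;
  return total;
}

function sumConst(n) {
  const zero = 0; // never really used for anything interesting
  var total = zero;
  for (var i = 0; i < n; i++) total += i;
  return total;
}

// Run with: node --trace-opt --trace-deopt this-file.js
// and look for optimization/bailout lines mentioning each function.
for (var i = 0; i < 1e5; i++) {
  sumVar(100);
  sumConst(100);
}
console.log(sumVar(100), sumConst(100)); // 4950 4950
```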


While a lot of what you say is good, your opening assertion that his comment added nothing is nonsense.

He's right, you're right; JS performance is still, in reality, pretty sucky in certain circumstances. It's worth talking about.

I remember writing an agonisingly slow program only two years ago, and all I was doing was a push/unshift. It was for a Mandelbrot generator I was making for fun, so it had a loop of a few hundred thousand iterations. A quick change of code and it was fine, but it was bizarre to hit something like that when the same code in every other language I'd written it in hadn't even taken a second, let alone the minutes the JavaScript was taking.
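For reference, `unshift` has to shift every existing element on each call, so building an array front-first is quadratic; a sketch of the usual fix:

```javascript
// O(n^2): every unshift moves all existing elements over by one slot.
function buildWithUnshift(n) {
  var out = [];
  for (var i = 0; i < n; i++) out.unshift(i);
  return out;
}

// O(n): iterate in reverse order and push, so each append is cheap.
// (Equivalently: push in forward order, then call .reverse() once.)
function buildWithPush(n) {
  var out = [];
  for (var i = n - 1; i >= 0; i--) out.push(i);
  return out;
}
```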


TLDR, use Babel in "loose" mode and you'll be fine for pretty much all the ES6 syntax features (by which I exclude Maps, Sets, and generators). Most of the features listed are zero-overhead when transpiled this way. As usual, the native implementations are much slower for some reason (probably because they aren't optimized yet).

Microbenchmarks and relative speeds are not super useful. I'm a performance nut and I love optimizing code -- back in the day I wrote a syntax-highlighting editor that was snappy in IE6's crappy JScript engine -- but I'm not going to worry that some syntax feature is 3x or even 10x slower than assigning to a local variable (which is what, one CPU instruction?). If you're that concerned, you should be avoiding object allocations ({}) like the plague, and that is just madness except in really performance-critical sections of code (and games).
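To illustrate what avoiding `{}` in a hot path looks like in practice (names here are made up for illustration): allocate a scratch object once up front and reuse it, instead of allocating per iteration.

```javascript
// Allocates a fresh object every iteration; each `{}` is GC work.
function totalDistanceNaive(xs, ys) {
  var total = 0;
  for (var i = 1; i < xs.length; i++) {
    var d = { dx: xs[i] - xs[i - 1], dy: ys[i] - ys[i - 1] };
    total += Math.sqrt(d.dx * d.dx + d.dy * d.dy);
  }
  return total;
}

// Reuses one scratch object; no per-iteration garbage is created.
var scratch = { dx: 0, dy: 0 };
function totalDistanceReuse(xs, ys) {
  var total = 0;
  for (var i = 1; i < xs.length; i++) {
    scratch.dx = xs[i] - xs[i - 1];
    scratch.dy = ys[i] - ys[i - 1];
    total += Math.sqrt(scratch.dx * scratch.dx + scratch.dy * scratch.dy);
  }
  return total;
}
```

The trade-off is real, though: the scratch-object version is uglier and not reentrant, which is why this style only belongs in genuinely performance-critical sections.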


There might be some great information here, but it's completely unreadable. Add some borders to your table; it's impossible to read in its current state.


Interesting data, poor presentation.

The tables need some formatting and colors would be nice, too. Instead of "slower" and "faster" it should be just a factor. So, 2x would mean that it takes twice as long and 0.5x would mean that it's twice as fast.

Also, what's the baseline? Where does that 1x come from?


Promises: the assumption here is that native promises are fast, which is amusing. Userland implementations like Bluebird are significantly faster than native promises. Not to mention that converting an API to use promises is slow with native promises; a native `promisify` will have to be provided for Node, and it's being worked on.
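For reference, a hand-rolled `promisify` for Node-style `(err, result)` callbacks is only a few lines (a sketch; fast userland implementations like Bluebird's go to much greater lengths, e.g. avoiding the `arguments` object and closure allocations in hot paths):

```javascript
// Wrap a Node-style callback-taking function in a promise-returning one.
function promisify(fn) {
  return function () {
    var args = Array.prototype.slice.call(arguments);
    var self = this;
    return new Promise(function (resolve, reject) {
      // Append the (err, result) callback that settles the promise.
      args.push(function (err, result) {
        if (err) reject(err);
        else resolve(result);
      });
      fn.apply(self, args);
    });
  };
}

// Usage with a made-up callback-style function:
function addLater(a, b, cb) {
  setTimeout(function () { cb(null, a + b); }, 0);
}
var addAsync = promisify(addLater);
addAsync(2, 3).then(function (sum) { console.log(sum); }); // logs 5
```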


How often do you do async stuff per millisecond that this starts to matter? At least for browser stuff I really don't see the problem.


The problem usually arises when the GC pressure introduced by the promises (often the intermediate promises) created by many active/concurrent tasks starts to add up. As your concurrency increases, a poor promise implementation burns copious amounts of valuable cycles, largely due to that GC pressure.

In the abstract this doesn't sound that bad, but when comparing well-behaved implementations such as Bluebird, RSVP, When, ES6-Promise, etc. with native Promises in their current state (July 3, 2015), the difference is still staggering.

As for the browser, a promise is a great way to handle async and often a great abstraction around a single potentially remote entity. As more ambitious applications are created, it isn't uncommon to have thousands or tens of thousands of these remote entities. Wouldn't it be nice if the overhead of using the promise were negligible?


I've worked on node.js projects where certain promise implementations were too slow for the use case.



Super interesting, and somewhat disappointing as well.

Also — off topic, but I really wish they'd provided graphs, or at least given those tables some formatting love. Does anyone know of similar benchmarks that have?


I think you should complain about your browser's default table styling instead. It looks good in w3m ;P

Seriously though, graphs can be manipulated to look exactly how you want the user to interpret them.


    /faster/ 46
    /slower/ 482
Indeed


See http://kpdecker.github.io/six-speed/ for overview and better readability.


Wow, this is all a bit disheartening to read the day after I spent all day updating my app to ES6 + babel.

These performance hits are extreme. I would never have guessed that so many of these features are taking 20-2000x speed hits.

I hope the browsers and V8 catch up soon so transpiling ES6 is no longer necessary.


Did you profile the before and after versions of your app? I think some of the more extreme results (like 2000x) here may just be due to the benchmarks getting optimized into empty functions.


Use loose mode. Not only will sane code seldom hit the edge cases, it also generates way more readable es5. And, according to this article, there's hardly any performance hit left.
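For reference, in the Babel of that era (Babel 5), loose mode could be enabled globally via the `loose` option in `.babelrc`; the option format has changed across Babel major versions, so check the docs for the one you're running. A sketch:

```json
{
  "loose": "all"
}
```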


tables not formatted!!!!!!!


Add some borders to the tables, the presentation is horrible



