
The trouble with microbenchmarks like these is, JS engines nowadays are often clever enough to simply eliminate the code being tested, or change its character enough that the results are no longer meaningful. Vyacheslav Egorov (a chrome v8 engineer) has written a bunch of very good blogs on this. E.g.

http://mrale.ph/blog/2014/02/23/the-black-cat-of-microbenchm...

http://mrale.ph/blog/2012/12/15/microbenchmarks-fairy-tale.h...

Checking the tests here, the "default parameters" section shows some tests being 2000x faster than others, which sounds suspicious. Here's an es5 test case:

    function fn(arg, other) {
      arg = arg === undefined ? 1 : arg;
      other = other === undefined ? 3 : other;
      return other;
    }
    
    test(function() {
      fn();
      fn(2);
      fn(2, 4);
    });
Sure enough, an arbitrarily smart VM could compile that code down to `test();`. How much this and other optimizations affect each test is anyone's guess, but I think it's likely that at least some of these results are dominated by coincidental features of how the tests are written.


I agree. A function (like those in the benchmark above) that has no side effects and returns nothing is, as far as the compiler is concerned, doing nothing at all, so it will be optimized away (not run at all). Modern compilers are incredibly good at that kind of thing.

So what the OP should do is make each function return something, then make sure that 'something' survives the test. For example, if it's a number, add it to a global variable that outlives the benchmark and print that variable at the end. If it's a string, add its hash code or its length to the global instead. Either way, that forces the compiler to emit code that actually executes the function, so the benchmark really runs it.
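A minimal sketch of that idea, reusing the `fn` from the test case above (the `sink` variable and iteration count are illustrative, not from the benchmark):

```javascript
// Sketch: keep benchmark results observable so the JIT cannot
// prove the calls are dead and eliminate them.
var sink = 0; // survives the whole benchmark run

function fn(arg, other) {
  arg = arg === undefined ? 1 : arg;
  other = other === undefined ? 3 : other;
  return other;
}

function bench(iterations) {
  for (var i = 0; i < iterations; i++) {
    // Fold every return value into the sink; the compiler must
    // now actually compute the results.
    sink += fn();
    sink += fn(2);
    sink += fn(2, 4);
  }
}

bench(1000);
console.log(sink); // printing makes the value externally observable
```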


I don't think that would go far enough. Consider: even if all the functions returned something, a clever compiler might inline the hot function calls, and might then see that most of the argument checks are unnecessary, and eliminate them. Which would mean it's gotten rid of precisely the code that's being benchmarked!
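To make that concrete, here is a hand-written specialization showing what an inlining, constant-folding compiler might effectively reduce the benchmark to (a hypothetical reduction for illustration, not actual V8 output):

```javascript
// After inlining fn() at its three call sites, every
// `=== undefined` check is statically decidable:
//   fn()     -> returns 3
//   fn(2)    -> returns 3
//   fn(2, 4) -> returns 4
// so the loop body collapses to adding constants.
function benchSpecialized(iterations) {
  var sink = 0;
  for (var i = 0; i < iterations; i++) {
    sink += 3 + 3 + 4; // all default-parameter checks folded away
  }
  return sink;
}

console.log(benchSpecialized(1000));
```

The default-parameter logic, the very thing being measured, no longer appears anywhere in the compiled loop.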

In short, writing JS microbenchmarks that actually test what they claim to test is quite hard.


Although everything you say is true, your comment doesn't really add much value.

As you mentioned, micro-benchmarks are riddled with bias and confusing results.

So putting micro-benchmarks aside and looking at ES5/ES6-induced performance bottlenecks in actual apps, they are clearly present. (Note: this is of course limited to features actually implemented.)

Unfortunately, a macro-focused endeavor (gauging only full-app performance) isn't as readily actionable as the micro approach.

So in practice, in an attempt to produce high value but actionable feedback a hybrid approach, utilizing both micro/macro investigation yields the best results.

In addition, the micro-benchmark-vs-optimizing-compiler trap can be mitigated by inspecting the intermediate and final outputs of the optimizing compiler.
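For example, V8-based runtimes expose flags for exactly this kind of inspection (flag names current as of recent Node/V8 versions; availability varies by build):

```shell
# Log which functions get optimized and which hit deoptimizations:
node --trace-opt --trace-deopt app.js

# Dump the generated optimized code (very verbose):
node --print-opt-code app.js
```

Seeing a function repeatedly deoptimize in these logs is a much stronger signal than any micro-benchmark timing.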

Anyways, /rant.

There exists an unfortunate number of JS performance traps that I wish were taken more seriously. Although it would be more work, it would be quite valuable for someone to perform a root-cause analysis of the potential bottlenecks brought to light by this post.


> So in practice, in an attempt to produce high value but actionable feedback a hybrid approach, utilizing both micro/macro investigation yields the best results.

I disagree. JS is obviously not fast by its nature; the only reason it's fast sometimes is because modern JS engines do incredible optimizations behind the scenes. As such, for real-world performance, writing code that the engine understands how to optimize entirely dwarfs trivia like how Babel transpiled your default parameters. (Even the most microbenchmarked function will run dog-slow if the v8 optimizer bails out.) And if real-world performance is dominated by optimizability, then it follows that microbenchmarks are largely useless unless they happen to get optimized/deoptimized in the same way that your code does.
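As one hedged illustration of "writing code the engine can optimize" (hidden-class behavior; exact heuristics vary by engine and version), keeping object shapes stable tends to matter far more than any transpilation detail:

```javascript
// Objects created with the same property order share a hidden
// class (shape), letting the JIT emit fast monomorphic loads.
function makePoint(x, y) {
  return { x: x, y: y }; // always the same property order/shape
}

// If this call site only ever sees one shape, the `.x` load stays
// monomorphic; mixing shapes (e.g. { y: 2, x: 1 }) makes it
// polymorphic and typically much slower.
function sumX(points) {
  var total = 0;
  for (var i = 0; i < points.length; i++) {
    total += points[i].x;
  }
  return total;
}

console.log(sumX([makePoint(1, 2), makePoint(3, 4)]));
```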

Incidentally for v8 at least, currently using any ES6 feature (even a single "const" that's never used) causes the optimizer to bail out. So any question of "ES6-induced bottlenecks" is beside the point - it doesn't matter how fast the ES6 feature is if its mere presence slows down everything else.


While a lot of what you say is good, your opening assertion that his comment added nothing is nonsense.

He's right, you're right, js performance is still, in reality, pretty sucky in certain circumstances. It's worth talking about.

I remember writing an agonisingly slow program only two years ago, and all I was doing was a push/unshift. It was in a Mandelbrot generator I was making for fun, so the loop ran a few hundred thousand times. A quick code change fixed it, but it was bizarre to hit something like that when every other language I'd written the same code in hadn't even taken a second, while the JavaScript was taking minutes.
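For anyone curious, that trap is easy to reproduce. A sketch (exact costs vary by engine): unshift moves every existing element on each call, so building an array front-first is quadratic, while push plus one final reverse stays linear:

```javascript
// Building an n-element array two ways (illustrative sketch).
function buildWithUnshift(n) {
  var out = [];
  for (var i = 0; i < n; i++) {
    out.unshift(i); // shifts every existing element: O(n) per call
  }
  return out; // total work ~ O(n^2)
}

function buildWithPush(n) {
  var out = [];
  for (var i = 0; i < n; i++) {
    out.push(i); // amortized O(1) per call
  }
  return out.reverse(); // same final order, O(n) once
}
```

Both produce the same array, but for a few hundred thousand elements the difference is seconds versus minutes.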



