This is cool, but array methods are rarely the bottleneck; even in the highly specialized demo, the difference is near-negligible.
It's a fun project and might have application in specialized situations (and maybe for Node, where time spent in synchronous iteration blocks all other requests?). But I feel like this does more harm than good in most cases (larger bundle, extra build step, less readable output code can make debugging harder, etc)
I agree; while this is a cool idea, it doesn't seem useful beyond being a proof of concept of an optimization technique. For example, my browser rendered the faster.js version 50ms slower at this point: https://i.imgur.com/G5b10Ns.jpg. It does appear to perform better in the long run (https://i.imgur.com/u3ApYYl.jpg), but it's an improvement of less than 1%, and when your renders are taking 30+ms there are probably other things you should be looking at :P
Also, unless I'm misunderstanding the README, using the plugin introduces the huge caveat of breaking code that uses "restricted names" (certain names of Array functions) - the example given is a class that defines a map() function, which apparently would cause some kind of failure. In larger codebases where you don't control all code used in production this seems like a big problem.
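Unless I'm misreading it, the failure mode would look something like this (class and names invented purely to illustrate the caveat):

```js
// Hypothetical class, made up to illustrate the "restricted names" caveat:
// the plugin rewrites call sites of .map() etc. syntactically, so a non-array
// object that happens to define map() can be transformed incorrectly.
class EventBus {
  constructor() {
    this.handlers = [];
  }
  // same name as Array.prototype.map, completely different semantics
  map(fn) {
    return fn(this);
  }
}

const bus = new EventBus();
// Untransformed, this calls EventBus#map and logs 0; after the transform the
// call site may be rewritten as if `bus` were an array and break.
bus.map(b => console.log(b.handlers.length));
```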
Agreed for the most part - this realistically does not have a good generalized use case. IMO the only real harm is the slightly larger bundle, though - I think the extra build time is negligible and you could just disable this optimization when developing.
My main motivation for building this was to make a better way to use an optimization library like fast.js (https://github.com/codemix/fast.js), which inspired faster.js. A huge issue with the fast.js library is that you have to rewrite your entire codebase in order to use it, whereas enabling or disabling faster.js is a one line change.
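Roughly the difference, paraphrasing both READMEs from memory (the exact plugin name in the Babel config may differ):

```js
// fast.js: every call site has to be rewritten by hand to go through the library.
const fast = require('fast.js');
const doubled = fast.map([1, 2, 3], n => n * 2); // instead of [1, 2, 3].map(...)

// faster.js: keep writing plain array methods...
const doubled2 = [1, 2, 3].map(n => n * 2);
// ...and turn the transform on or off with a single Babel config line,
// e.g. something like { "plugins": ["faster.js"] } in .babelrc.
```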
To the extent this tool makes it appear to browser vendors, when they're deciding how to optimize their engines, that there is less usage of functional JS patterns, I'm not a fan. Compiling away a useful feature or pattern just because it's not yet optimized creates a risk of a self-fulfilling prophecy. Obviously this is not a zero-sum game and optimizing for one pattern doesn't necessarily mean that something else must suffer, but prioritization of what to optimize is sometimes based on instrumentation measuring deployed usage and not just artificial benchmarks.
The performance gap between a `for` loop and a `.forEach` (and `.map`) is much narrower than it used to be, and that is very encouraging.
It seems like it does harm in JS environments that have JITs, but it would be interesting in strictly interpreted environments. Some devices run JS but disallow JIT due to security concerns.
v8, for instance, has an interpreter (ignition) and an optimizing compiler (turbofan) with a lot of undocumented behavior that people just try to probe via microbenchmarks.
Prepack is doing much more than GCC can because it effectively evaluates the whole code and then serialises its heap. This is both its biggest strength and weakness.
Unlike GCC, it can unroll any kind of metaprogramming, but it needs to have a model of the environment (e.g. it won’t execute code relying on DOM) and it can produce larger code (in terms of absolute size).
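A toy illustration of the idea (not actual Prepack output, just the flavor of evaluate-the-program-then-serialize-the-heap):

```js
// Input: metaprogramming that a purely syntactic optimizer can't see through.
(function () {
  const table = {};
  ['a', 'b', 'c'].forEach((key, i) => {
    table[key] = i * 10;
  });
  global.lookup = x => table[x];
})();

// Conceptual output: the loop is gone, replaced by the heap state it produced.
//
// (function () {
//   const table = { a: 0, b: 10, c: 20 };
//   global.lookup = x => table[x];
// })();
```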
I'd like to be able to specify restrictions on type usage in JavaScript. For instance, in some cases I know that my array will always be exactly 6 items - no smaller, no larger, never sparse. But I can't specify this, even though such a guarantee would allow certain optimizations (e.g. loop unrolling).
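Something like this is what I mean (the annotation is invented; nothing in JS lets you declare it today):

```js
// Pseudo-annotation I wish existed: /** @fixedLength 6 */
const weights = [0.1, 0.2, 0.3, 0.2, 0.1, 0.1];

// Without the guarantee, the generic loop re-checks length and bounds every iteration:
function sumGeneric(arr) {
  let total = 0;
  for (let i = 0; i < arr.length; i++) total += arr[i];
  return total;
}

// With "exactly 6 dense elements" known, the loop could be fully unrolled:
function sumSix(a) {
  return a[0] + a[1] + a[2] + a[3] + a[4] + a[5];
}

console.log(sumGeneric(weights), sumSix(weights)); // same result
```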
Lately I was memory-profiling a JavaScript application using Chrome. The fact that it used ES6 classes really helped: anonymous classes were hard to track memory leaks through, whereas the named classes could be found with a simple text query.
I can imagine ES6 classes with decorators may enable specialized optimizations too.
The mystery of JIT optimization. The optimizer will turn that into a dereferenced register value. I remember reading that v8 used to add an extra instruction if you assigned the variable yourself.
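(Assuming the comparison here is the usual one about hoisting `arr.length` yourself; if so, the two forms would be something like:)

```js
const arr = [1, 2, 3, 4];
const work = x => x * 2;

// Read .length every iteration -- the optimizer keeps it in a register anyway:
for (let i = 0; i < arr.length; i++) work(arr[i]);

// Cache it yourself -- the version that reportedly used to cost v8 an extra instruction:
for (let i = 0, len = arr.length; i < len; i++) work(arr[i]);
```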
The demo page basically shows this is practically pointless - and sometimes _slower_.
These kinds of optimizations were very useful in hot paths even just a few years ago. However, browser engines have since optimized functional patterns (what this mostly rewrites) to within microseconds of the imperative versions.
In some cases the engine can even make useful assumptions in an FP method (immutability, etc.) that it can't in the standard loops - which is why this is sometimes slower.
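For reference, the kind of rewrite in question (a hand-written equivalent; the plugin's real output will differ in detail):

```js
const input = [1, 2, 3, 4];

// What you write:
const squared = input.map(x => x * x);

// Roughly what gets emitted instead:
const squaredImperative = new Array(input.length);
for (let i = 0; i < input.length; i++) {
  squaredImperative[i] = input[i] * input[i];
}
// Modern engines run the .map version within microseconds of the loop, and the
// extra assumptions they can make about .map are why the "optimized" loop
// sometimes loses.
```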
Long story short: concentrate on shipping less JavaScript and good practical solutions/algorithms and not stuff like this.
I thought I was in new and not the front page. You're telling me that as a demonstration of perf gains, in a graphics context I win back .8 ms in exchange for another dependency and unnecessary cognitive load?
I think this is an example of _optimization_. An optimization that is easily applied as a transform for Babel, but that's all it is.
Whether it is premature or not depends on whether you apply it carelessly without knowing whether you need it, or apply it when you know you do. Right?