Hacker News

> Why? Because underneath all that rust .... is an optimizing compiler, and it happens the author has decided to stay on the happy path of that.

There are two big differences here: 1) You're comparing "staying on the happy path" of a JIT compiler in the JS case vs. an ahead-of-time optimizing compiler in the Rust case. With the latter, you can just compile your code, inspect what comes out, and the result tends to be fairly predictable. With the former, I'm not even sure there are tools to inspect the generated JIT code, and you're constantly walking the line of JS engines changing their heuristics and throwing you off the fast path. This was one of the primary motivations for the asm.js/WebAssembly work: the ability to get predictable performance.

2) Many of the optimizations mraleph performed were tricks to avoid allocation (which is normal optimization stuff, but more of a pain in GCed languages). In JS he winds up having to effectively write C-in-JS which looks pretty hairy. In Rust controlling allocation is a built-in language feature, so you can write very idiomatic code without heap allocation.
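To make the second point concrete, here's a minimal sketch (mine, not from the thread) of the kind of allocation-free code Rust lets you write idiomatically. The function names and values are illustrative; the point is that the fixed-size array lives on the stack and the iterator chain compiles down to a plain loop, with no heap allocation and no GC pressure:

```rust
// Sum of squares over a slice: no Vec, no boxing, no GC.
// The iterator chain is a zero-cost abstraction that the
// optimizer lowers to a simple loop over the slice.
fn sum_squares(xs: &[f64]) -> f64 {
    xs.iter().map(|x| x * x).sum()
}

fn main() {
    // Stack-allocated, fixed-size array -- nothing touches the heap.
    let data = [1.0, 2.0, 3.0];
    let s = sum_squares(&data);
    assert!((s - 14.0).abs() < 1e-9);
    println!("{}", s);
}
```

Writing the equivalent allocation-free code in JS means manual tricks (preallocated typed arrays, avoiding closures in hot loops), which is exactly the C-in-JS style mraleph ends up with.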



> what comes out and it tends to be fairly predictable

Predictable as long as you stay on the same version of the compiler (yes, I know that there are crater runs to prevent regressions). Also, how much can/does the output for different target architectures differ in performance? Couldn't that be likened to trying to optimize for multiple JS engines?



