> Hermes today has no JIT compiler. This means that Hermes underperforms some benchmarks, especially those that depend on CPU performance. This was an intentional choice: These benchmarks are generally not representative of mobile application workloads.
What workloads are different from mobile application workloads, other than server-side code?
> Because JITs must warm up when an application starts, they have trouble improving TTI and may even hurt TTI. Also, a JIT adds to native code size and memory consumption, which negatively affects our primary metrics. A JIT is likely to hurt the metrics we care about most, so we chose not to implement a JIT.
So it seems to be specific to their TTI (Time To Interact) metric. A server application generally has time to warm up and reach best performance (pretty common in the Java world, for example), while a mobile application has to react to user inputs as soon as possible.
Probably to limit scope / complexity. Remember that the JS engine here (in React Native) is basically pulling strings to orchestrate a bunch of native modules, not doing any of the heavy lifting. It’s quite different from the current web environment.
If the JS code is just a few event listeners that verify/massage data before submitting it to the server, then the answer is yes. But for more complex web pages the answer is no. For example, React on the web and similar frameworks may not be feasible without a JIT.
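To make the contrast concrete, here is a minimal sketch of the simple "verify/massage then submit" workload described above (the function and field names are hypothetical). An interpreter handles this kind of code comfortably; the JIT question only bites when a framework is reconciling a large component tree on every state change.

```javascript
// Hedged sketch of form-validation glue code: trivial, event-driven,
// and not CPU-bound -- the workload an interpreter handles fine.
function massageForm(fields) {
  // Normalize and validate before sending anything to the server.
  const email = (fields.email || '').trim().toLowerCase();
  if (!email.includes('@')) {
    return { ok: false, error: 'invalid email' };
  }
  return { ok: true, payload: { email, name: (fields.name || '').trim() } };
}

console.log(massageForm({ email: ' User@Example.COM ', name: 'Ada ' }));
// → { ok: true, payload: { email: 'user@example.com', name: 'Ada' } }
```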
Do you have more information on this? I remember looking into it a couple of months ago and being told the JIT wasn't going to be developed further. Thanks!
Benchmarks that are designed to measure absolute performance often don't test the factors that make real world software feel fast or slow. They're designed to output a number that allows you to compare the performance of the VMs in certain scenarios.
The speed of a JS VM computing primes doesn't indicate how laggy scrolling through a list will feel when the VM is garbage-collecting every few hundred milliseconds.
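The throughput-vs-latency distinction here can be sketched in a few lines. This is a hedged illustration, not a real benchmark: the prime loop stands in for a CPU microbenchmark, and the worst observed gap between iterations stands in for the kind of single long stall (e.g. a GC pause) that a throughput average hides.

```javascript
// Naive trial-division prime counter: a stand-in for a CPU-bound
// microbenchmark that yields one comparable throughput number.
function countPrimes(limit) {
  let count = 0;
  for (let n = 2; n < limit; n++) {
    let isPrime = true;
    for (let d = 2; d * d <= n; d++) {
      if (n % d === 0) { isPrime = false; break; }
    }
    if (isPrime) count++;
  }
  return count;
}

// Throughput metric: total work over total time.
const t0 = Date.now();
const primes = countPrimes(50000);
const totalMs = Date.now() - t0;

// Latency metric: the worst single stall across many small work units
// (think per-frame work while scrolling). An average hides this.
let worstPauseMs = 0;
let last = Date.now();
for (let i = 0; i < 200; i++) {
  countPrimes(2000);
  const now = Date.now();
  worstPauseMs = Math.max(worstPauseMs, now - last);
  last = now;
}

console.log({ primes, totalMs, worstPauseMs });
```

Two VMs can report similar `totalMs` while having very different `worstPauseMs`, and it is the latter that users feel as jank.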