This needs to be shouted from the rooftops. This is the single most important constraint of the project and also its greatest strength.
So asm.js isn't the magical perfect browser bytecode that everyone on HN wants -- and which would have all manner of flaws if it actually existed in concrete form, rather than a platonic ideal in people's heads -- but that's a completely unrealistic goal anyway.
But asm.js is usable in all browsers immediately. asm.js is just JS.
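For anyone who hasn't looked at it, here's a minimal sketch of what "just JS" means in practice (module and function names are only illustrative). The |0 coercions are how asm.js spells "this is a 32-bit int", and the whole thing is still ordinary JS that any engine can run today:

    function AddModule() {
      "use asm";
      function add(x, y) {
        x = x | 0;           // the |0 coercion declares x as a 32-bit int
        y = y | 0;
        return (x + y) | 0;  // and types the result as well
      }
      return { add: add };
    }

    var add = AddModule().add;
    add(2, 3); // 5 in every browser; asm.js-aware engines can also compile it ahead of time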
> But asm.js is usable in all browsers immediately. asm.js is just JS.
A magic JS-based bytecode that's usable in all browsers immediately isn't useful if it isn't fast. Which it isn't, because it's a JS-based bytecode executing under existing JS engines.
So now we have apps that perform incredibly poorly when run on a browser without "asm.js" support, and a rather ridiculous bytecode format that will have to be parsed natively to run reasonably quickly, with a fair bit more complexity for every layer in the development and runtime stack because they insist on keeping it as valid JS syntax.
Our numbers show asm.js can be 2x slower than native or better. That's not "not fast". And even without asm.js optimizations, the same code is 4x slower than native, which is as good as or better than handwritten JS anyhow - hardly "incredibly poorly".
If you have other numbers or results, please share.
For a desktop or mobile app, where the consumer is waiting and you're burning battery (laptop/phone) or simply CPU cycles, 2x-4x slower is 'not fast'. You're wasting the end-user's time and resources for what amounts to ideological reasons.
We're always making a trade-off between performance and ease of programming, but when your competition is coming in at 2x faster than your optimal case, and 4x in the standard case, you're going to lose for all but the simplest apps.
How does PNaCl compare in terms of performance to "native" code? It still has the compilation overhead, it still has a lot of the bounds checking… It's not clear to me that PNaCl will actually be much quicker than asm.js.
I believe the ideal (for users) would be to target NaCl natively, with a fallback to server-side PNaCl compilation, and an absolute fallback to in-browser PNaCl compilation/execution.
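A rough sketch of what that tiering could look like on the client, purely illustrative: the MIME-type checks are the usual (Chrome-only) way to detect NaCl/PNaCl support, and the server-side translation step is a hypothetical endpoint, not a real API.

    // Hypothetical loader for the fallback chain described above.
    function serverCanTranslate() {
      // stub: in practice this would ask a build server whether it can
      // translate the portable .pexe for this CPU architecture
      return false;
    }

    function pickRuntime() {
      var mimes = navigator.mimeTypes;
      if (mimes['application/x-nacl'])  return 'nacl';         // ship an arch-specific .nexe
      if (serverCanTranslate())         return 'server-pnacl'; // server compiles the .pexe for this CPU
      if (mimes['application/x-pnacl']) return 'client-pnacl'; // browser translates the .pexe itself
      return 'asmjs';                                          // everyone else runs the JS build
    }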
>Python, for one, is plenty usable, and is not fast.
In any environment where I might currently choose to use Python I also have the option to use something else for parts of the project where Python proves to be too slow. Will FirefoxOS provide such an escape hatch?
>JS engines are close to Java/C in speed
'close' is a pretty vague term. For a lot of tasks Ruby is 'close' enough to C that the difference doesn't matter. For a different set of tasks Java is not 'close' enough to C (or C+asm) to be a viable choice and neither is Javascript.
I'd also like to point out that battery life does matter, and using at least twice the CPU cycles for most tasks isn't conducive to good battery life.
>In any environment where I might currently choose to use Python I also have the option to use something else for parts of the project where Python proves to be too slow. Will FirefoxOS provide such an escape hatch?
Seeing that Python is 10-20 times slower than V8 for most comparable Python/JS operations, you shouldn't run into that problem much. Especially considering that the purpose of asm.js is to give you an even greater boost in speed. And seeing that NaCl never got anywhere, not only is this your best bet, it's far better than anything else out there at the moment.
asm.js IS a bytecode format. That it is human readable or that it accepts some tradeoffs because of JS doesn't matter. The end result (after the JIT pass) would not be any slower for it. The only problem with a readable "asm" would be slower load times, but that can be taken care of in the future by providing some pre-compiled format or more control over caching if asm succeeds.
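To make that concrete, here's another small sketch (names illustrative): the +x coercions and the stdlib import carry exactly the type information a bytecode would, just spelled in JS, so a validating engine can compile the function as double -> double while any other engine runs it unchanged.

    function GeomModule(stdlib) {
      "use asm";
      var sqrt = stdlib.Math.sqrt;   // typed import from the standard library
      function hypot(x, y) {
        x = +x;                      // declare x: double
        y = +y;                      // declare y: double
        return +sqrt(x * x + y * y); // call result coerced back to double
      }
      return { hypot: hypot };
    }

    var hypot = GeomModule({ Math: Math }).hypot;
    hypot(3, 4); // 5, whether or not the engine recognizes "use asm"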
Please be careful with the "close to C in speed" claim.
There are only a very small number of languages that can legitimately claim that (C++, Fortran, and sometimes Ada). Java is not one of them. JavaScript is surely not one of them, even with the latest versions of V8.
The only time we see performance remotely close (which still usually means several times slower, at best) to C is for extremely unrealistic micro-benchmarks that have been very heavily optimized to a state where they don't at all resemble real-world code.
We have had word processors and spreadsheets in JavaScript (Google Docs), 3D and 2D games, and even an h264 decoder and a PDF renderer. Heck, they have ported Qt to JavaScript, and the example applications run at a very acceptable speed. None of the above are slow.
So, no, it's not true that V8 is only fast in selected "microbenchmarks".
You might not do scientific applications or NLE video editing with it, but for everything else it should be just fine.
>'Very acceptable' speed isn't what consumers are looking for when comparing battery life and wall-clock performance between competing platforms.
Where does the idea come from that V8 and co's 'very acceptable' speeds come at the expense of battery life and wall-clock performance?
Not to mention that people are already using far less capable web apps on mobile and desktop (i.e. pre-asm.js JavaScript), so the speed increase from the asm.js/optimisation standardisation would only improve battery life and wall-clock performance.