
WebAssembly is still a flavour of assembly. It's only nearly native performance to the real code because the interface to JavaScript has overhead. Every action in JavaScript incurs overhead due to dynamic types and objects, as well as dynamic memory allocation and garbage collection. Wasm can theoretically ignore it all and run as if it were compiled for the host system, except when it needs to interact with the JavaScript environment.
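To make the dynamic-typing overhead concrete, here's a tiny illustration (my own example, not from any benchmark): the same `+` at the same call site in JavaScript must check its operand types at runtime, whereas a WASM `i32.add` is a fixed-width integer add the engine can compile directly.

```javascript
// One function, one call site, two behaviours depending on runtime types.
// The engine has to guard this with type checks; WASM's i32.add never would.
function add(a, b) {
  return a + b;
}

console.log(add(1, 2));     // 3   (numeric addition)
console.log(add("1", "2")); // "12" (string concatenation, same call site)
```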

It's astonishing how fast JavaScript has become. But even if it were fully compiled, it would still be a language with higher overhead.

You can still write bad code, or compile a language with high overhead into WASM. This remains valuable for porting existing libraries into the browser and reducing bandwidth usage. But properly done, with a fast compiled language like C or Rust, WASM can unlock some magical things in the web ecosystem.



> It's only nearly native performance to the real code because the interface to JavaScript has overhead.

That's not at all the only reason WASM is slower than native. WASM is bytecode. It still has to be JIT compiled, just like JavaScript. And WASM does not have a very complex instruction set to begin with, so the code generated by your language's LANG-to-WASM backend can't be optimized as heavily as its native backend.
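For a sense of what "WASM is bytecode" means in practice, here is a minimal hand-encoded module (the standard "add two i32s" example) being compiled and instantiated by the engine at runtime. The engine still has to turn these bytes into machine code before anything runs:

```javascript
// A minimal WASM binary exporting one function: add(i32, i32) -> i32.
const bytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00,                    // magic + version
  0x01, 0x07, 0x01, 0x60, 0x02, 0x7f, 0x7f, 0x01, 0x7f,              // type: (i32, i32) -> i32
  0x03, 0x02, 0x01, 0x00,                                            // function section
  0x07, 0x07, 0x01, 0x03, 0x61, 0x64, 0x64, 0x00, 0x00,              // export "add"
  0x0a, 0x09, 0x01, 0x07, 0x00, 0x20, 0x00, 0x20, 0x01, 0x6a, 0x0b, // body: local.get 0/1, i32.add
]);

const module = new WebAssembly.Module(bytes);              // the compile step
const { add } = new WebAssembly.Instance(module).exports;

console.log(add(2, 3)); // 5
```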

As a rule of thumb (from my experience), you're almost never going to achieve significantly better performance in WASM than the equivalent algorithm written in optimized JS.


> It still has to be JIT compiled

Eeh. Comparing a garbage collected jit language to bytecode jit parsing is... quite possibly the most insane argument you could make.

And what does instruction count have to do with optimization? Most languages optimize in architecture-invariant representations before creating the bytecode. So the WASM binary is already optimized.

From searching the web to make sure: the language barrier between WASM and JS is the biggest performance bottleneck. So it's generally recommended not to bother for simple algorithms until it gets better.
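As an illustration of why that boundary dominates for small workloads, here is a sketch with a plain-JS stand-in for the WASM export so the crossing count is explicit (not a real benchmark; the function names are made up):

```javascript
// Stand-ins for exported WASM functions; each call would cross the JS/WASM boundary.
let crossings = 0;
function wasmSquare(x) { crossings++; return x * x; }                  // per-element call
function wasmSquareAll(xs) { crossings++; return xs.map(x => x * x); } // one batched call

const data = [1, 2, 3, 4];

crossings = 0;
data.map(wasmSquare);
const perElement = crossings; // one crossing per element

crossings = 0;
wasmSquareAll(data);
const batched = crossings;    // one crossing for the whole buffer

console.log(perElement, batched); // 4 1
```

Batching work so the loop runs inside WASM (or skipping WASM entirely for trivial per-element work) is the usual way around this.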


> Eeh. Comparing a garbage collected jit language to bytecode jit parsing is... quite possibly the most insane argument you could make.

Not understanding that WASM still has to be optimized and compiled to machine code, and then calling me insane over it, is certainly an approach to discourse

> And what does instruction count have to do with optimization?

Not going to bother with this one. Do some research into how compilers work, maybe.

> From searching the web to make sure: the language barrier between WASM and JS is the biggest performance bottleneck.

It certainly is. Not sure where I claimed it wasn't. What I'm saying is that there are also other reasons a program will run slower when compiled to WASM compared to when compiled to native.


> Not going to bother with this one. Do some research into how compilers work, maybe.

> so the code generated by your language's LANG-to-WASM backend can't be optimized as heavily as its native backend.

https://cs.lmu.edu/~ray/notes/ir/ Intermediate representations. Most modern compiled languages are optimised independently of the target architecture, so the code has been optimised well before it even becomes WASM text. The LANG-to-WASM backend applies most, if not all, of the optimisations that LANG-to-arm64 would have done. The final parser is nearly trivial in compute and complexity, making its implementation a pretty approachable intermediate programming exercise.
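To sketch what "optimised before it becomes WASM text" means, here is a toy constant-folding pass over a tiny expression IR (hypothetical node shapes, not any real compiler's): the folding happens on the IR itself, and only afterwards would the simplified tree be lowered to WASM or to arm64.

```javascript
// Toy IR: leaves are { num } or { name }; internal nodes are { op: "+", left, right }.
function fold(node) {
  if (!("op" in node)) return node;        // leaf, nothing to fold
  const left = fold(node.left);
  const right = fold(node.right);
  if ("num" in left && "num" in right) {   // both constant: fold now,
    return { num: left.num + right.num };  // before any target-specific codegen
  }
  return { op: node.op, left, right };
}

// (2 + 3) + x folds to 5 + x on the IR; every backend then sees the simpler tree.
const folded = fold({
  op: "+",
  left: { op: "+", left: { num: 2 }, right: { num: 3 } },
  right: { name: "x" },
});
console.log(JSON.stringify(folded)); // {"op":"+","left":{"num":5},"right":{"name":"x"}}
```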

Comparing it to running modern compiler optimisations for a high-level language is apples and oranges. The only optimisation realistically remaining is the processor's speculative execution engine.

> Not sure where I claimed it wasn't

> Not only is it subjective but V8 does so much to optimize JavaScript code that I wouldn't be surprised if the benefits for most applications were negligible anyway.



