I did a double take when I read this too. However, I'd wager that in 99% of cases the bottleneck isn't at the language level for desktop apps, or GUI apps in general.

I have an app that has one pretty frequent CPU-intensive call (it computes the LCS for diffing). I decided to benchmark a Rust wasm-bindgen implementation against the JS one I was using and found that, ironically enough, Node performed better than Rust. To be fair to the Rust version, I think the JS implementation was slightly better optimized. I then looked into parallelizing LCS (which is possible), but wasm-bindgen and almost all Node-to-Rust ports don't support multithreaded calls. I can think of a myriad of ways to overcome this, but not in a browser context. At that point I gave up on porting CPU-intensive work to lower-level languages, because if you're forced into a single-threaded context, the gains over a memory-managed language are going to be trivial at best and add IPC overhead at worst.
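To make it concrete, this is roughly the shape of the hot path, the classic O(n·m) dynamic-programming LCS length (an illustrative sketch, not my actual implementation):

    // Illustrative only: LCS length via the classic DP table.
    // A flat typed array stands in for a nested matrix to keep it cache-friendly.
    function lcsLength(a, b) {
      const cols = b.length + 1;
      const dp = new Uint32Array((a.length + 1) * cols);
      for (let i = 1; i <= a.length; i++) {
        for (let j = 1; j <= b.length; j++) {
          dp[i * cols + j] = a[i - 1] === b[j - 1]
            ? dp[(i - 1) * cols + (j - 1)] + 1
            : Math.max(dp[(i - 1) * cols + j], dp[i * cols + (j - 1)]);
        }
      }
      return dp[dp.length - 1];
    }

Per-cell work that small is exactly the kind of thing a modern JIT tends to handle well.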

The main culprit of poor JS performance in native contexts is almost always bridges and the serialization/deserialization that comes with them. Electron apps don't really suffer from the bridge problem. While I'd never claim that JS is as fast as Go or Java (objectively it isn't), it seems equally naive to suggest that switching languages, without exploiting multicore, is going to lead to significant performance gains in a native app or web app.
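As a rough illustration of that serialization tax (a toy measurement in Node, with made-up sizes; real bridges differ), just round-tripping a payload through JSON often costs more than the work done on the other side:

    // Toy comparison: serializing a payload vs. actually iterating over it.
    const payload = Array.from({ length: 100_000 }, (_, i) => ({ id: i, text: "row " + i }));

    console.time("json round trip");               // what a stringly-typed bridge effectively does
    const copy = JSON.parse(JSON.stringify(payload));
    console.timeEnd("json round trip");

    console.time("plain iteration");               // the "real work" side
    let total = 0;
    for (const row of copy) total += row.id;
    console.timeEnd("plain iteration");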

I think the real power of wasm is virtualization; it won't be performance unless web APIs decide to break the single-threaded paradigm.




Just use the Workers API and spawn another JavaScript thread
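Something like this, a minimal sketch (browser context; the file name and payload are made up):

    // main.js
    const worker = new Worker("worker.js");
    worker.onmessage = (e) => console.log("result:", e.data);
    // the payload is copied via structured clone on its way over
    worker.postMessage(new Float64Array(1_000_000));

    // worker.js
    onmessage = (e) => {
      let sum = 0;
      for (const x of e.data) sum += x;  // stand-in for the real CPU-bound work
      postMessage(sum);
    };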


Well then you’re back to the bridge problem. postMessage is crazy slow. There are some problems where the parallelization gains will outweigh the IPC cost of postMessage, but those problems are few and far between. I haven’t dug into the new WebGPU API, but it does look kind of promising (shared memory is hard, so I’m apprehensive). I’m sure some of these things will get solved in the long run, but for the time being the bottleneck to browser performance is not necessarily JavaScript itself but the web APIs that are crafted around a single-threaded model. This is really my only point. Aside from giving systems engineers familiar syntax/devex, and possibly some portability with LLVM/wasm-compatible libs, there is little point in wasm runtimes at this current time imo.

citation:

https://developer.mozilla.org/en-US/docs/Web/API/Web_Workers...

“Data is sent between workers and the main thread via a system of messages — both sides send their messages using the postMessage() method, and respond to messages via the onmessage event handler (the message is contained within the message event's data attribute). The data is copied rather than shared.”


Transferable objects have been part of the spec for a while now, making it possible to send data back and forth between contexts with no copying.

https://developer.chrome.com/blog/transferable-objects-light...
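For anyone who hasn't used them: the transfer is just an extra argument to postMessage, and the sending side loses access to the buffer instead of paying for a copy. A rough sketch ("worker.js" is a placeholder):

    const worker = new Worker("worker.js");
    const buf = new Float64Array(1_000_000).buffer;  // ~8 MB backing buffer
    worker.postMessage(buf, [buf]);                  // second argument is the transfer list
    console.log(buf.byteLength);                     // 0: detached on this side, not copied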



