Hacker News
How WebAssembly is accelerating new web functionality (chromium.org)
36 points by cpeterso on April 17, 2023 | 4 comments


Now that SharedArrayBuffer is back and WASM can do multithreading, yeah, it’s going to be ridiculously powerful for various processing-heavy activities.


The obvious question is this: if WebAssembly really works well, why are efforts like WebCodecs needed at all? Aren't such APIs now superfluous? The answer in this blog post appears to be that no, WebAssembly doesn't work well enough, and is therefore seen only as a way to ship prototypes whilst waiting for "real" implementations inside the browser engines themselves. So why is that? It boils down to:

1. WebAssembly can't poke holes in the sandbox.

2. Cache partitioning means you can't amortize the cost of a "plugin" written in wasm: every website has to download it from scratch, and in a blocking manner, so you're fairly restricted in how much code you can ship this way.

To which there are some obvious solutions: allow wasm modules to coordinate with native code plugins that can also be downloaded on the fly and managed by the browser, and allow HTTP objects to opt out of cache partitioning. (The threat model still holds if a file opts out, because the goal is to stop sites learning whether you browsed to some other specific site, and site-specific files wouldn't opt out.)

It seems, though, that the direction of travel is not to do this. Instead browsers will continue adding every possible feature to the specs. It's hard not to feel like maybe what we need here is for browsers to be separated out into a sandboxing layer, a hardware/OS abstraction layer, and finally an HTML rendering layer that sits on top as if it were any other app. Whenever these threads pop up I link to a design doc I wrote that proposes something like that, in the hope of getting interesting feedback, so here it is again:

https://docs.google.com/document/d/1oDBw4fWyRNug3_f5mXWdlgDI...


> if WebAssembly really works well, why are efforts like WebCodecs needed at all?

The answer is pretty straightforward.

https://developer.mozilla.org/en-US/docs/Web/API/WebCodecs_A...

> Many Web APIs use media codecs internally. For example, the Web Audio API, and the WebRTC API. However these APIs do not allow developers to work with individual frames of a video stream and unmixed chunks of encoded audio or video.

> Web developers have typically used WebAssembly in order to get around this limitation, and to work with media codecs in the browser. However, this requires additional bandwidth to download codecs that already exist in the browser, reducing performance and power efficiency, and adding additional development overhead.
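The per-frame access the excerpt describes looks roughly like this with WebCodecs (browser-only, so feature-detected here; the `vp8` codec string is illustrative, and demuxing the input into `EncodedVideoChunk`s is out of scope):

```javascript
// Sketch of the WebCodecs decode path the MDN excerpt refers to.
// VideoDecoder only exists in browsers, so feature-detect first.
const supported = typeof VideoDecoder !== "undefined";

if (supported) {
  const decoder = new VideoDecoder({
    // Invoked once per decoded frame -- the individual-frame access that
    // Web Audio / WebRTC don't expose.
    output: (frame) => {
      console.log(`frame at ${frame.timestamp}us`);
      frame.close(); // frames hold codec/GPU resources; release promptly
    },
    error: (e) => console.error(e),
  });
  decoder.configure({ codec: "vp8" }); // codec string is illustrative
  // decoder.decode(chunk) would then be fed demuxed EncodedVideoChunks.
}
```

The point of the API is that the codec doing the work here is the browser's own, rather than a copy compiled to wasm and downloaded per-site.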


The codecs in WebCodecs also require bandwidth to download. So they could have done this by moving codec support out of the existing APIs, with an auto-fallback to a default WebAssembly module if you don't specify one when instantiating the API. That would de-privilege the renderer code and keep things size-equal, whilst still allowing lower-level API access.

That's kind of my point: they don't seem to act in a way that suggests WebAssembly is a way to implement features with competitive performance and overhead.



