Hacker News new | past | comments | ask | show | jobs | submit login

I'm a little fuzzy on the multitenant security promise of WebAssembly. I haven't dug deeply into it. It seems though that it can be asymptotically as secure as the host system wrapper you build around it: that is, the bugs won't be in the WebAssembly but in the bridge between WebAssembly and the host OS. This is approximately the same situation as with v8 isolates, except that we have reasons to believe that WASM has a more trustworthy and coherent design than v8 isolates, so we're not worried about things like memory corruption breaking boundaries between cotenant WASM programs running in the same process.

At the end of the day, if your runtime relies on a whole shared OS kernel, you have to be concerned about the bugs in the kernel. That's true of VMs as well, but to a much more limited extent: KVM is (by definition) much smaller than the whole kernel, and KVM bugs are rare.

I'm writing this mostly as a provocation; I don't have a clear enough understanding of backend WASM multitenant security to have strongly-held opinions about it.




> This is approximately the same situation as with v8 isolates, except that we have reasons to believe that WASM has a more trustworthy and coherent design than v8 isolates, so we're not worried about things like memory corruption breaking boundaries between cotenant WASM programs running in the same process

My understanding is that WASM's spec isn't just more coherent than JavaScript's, but also orders of magnitude smaller, which means less surface area to check for vulnerabilities.


Something that appeals to me about the multitenant security story in WebAssembly is how easy it is to provide alternative implementations for system calls. In most wasm implementations, you begin with a "raw" wasm runtime, and explicitly provide host functions ("system calls") that the wasm code can call. Nowadays the wasm implementation is likely to provide a wasi implementation out of the box, but it's simple to replace the implementation of one or more of the wasi system calls with your own (or define your own syscall interface entirely!). In this way, you can put in extra protections, monitoring, alternative implementations, etc where you see fit.
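A toy sketch of that idea in plain Python (not a real wasm runtime; all names here are made up for illustration): the host builds an explicit table of "system calls" and hands only that table to the guest, so swapping in a monitored or restricted implementation is a one-line change.

```python
def real_path_open(path):
    # Stand-in for the runtime's stock wasi implementation of path_open.
    return f"fd for {path}"

def monitored_path_open(path, audit_log):
    # Extra protection and monitoring layered on top of the stock syscall.
    if path.startswith("/etc"):
        raise PermissionError(f"denied: {path}")
    audit_log.append(path)
    return real_path_open(path)

def run_guest(syscalls):
    # The guest can only reach the host through the table it was given.
    return syscalls["path_open"]("/tmp/data.txt")

audit_log = []
imports = {"path_open": lambda p: monitored_path_open(p, audit_log)}
result = run_guest(imports)
```

The point is that the interposition layer is just an ordinary function you write, not a kernel-level interception mechanism.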

It's kind of like a built-in mechanism to adopt gVisor's approach to container security. Implementing gVisor is a gargantuan task that few companies would embark on; comparatively, doing the same in wasm is absolutely trivial.


The stuff running inside your WASM sandbox is also an easier target than software running on a native host, since WASM lacks a lot of the security mitigations we now take for granted, like ASLR. Corrupting function pointers is also easy to leverage for attacks in wasm: functions are sequentially numbered starting from 0 instead of sitting at semi-random offsets in memory, so an indirect call through a corrupted pointer will either work or produce a signature mismatch trap.

So in practice it is probably easier to actually break the stuff inside the sandbox, and then you get to have fun trying to compromise the host system wrapper.


Sure, but there are big protections too: an indirect call can only target the start of a function (with a compatible signature), which removes the majority of ROP gadgets, and the return addresses on the stack cannot be overwritten, so you can't build a ROP chain.
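A toy model of wasm's call_indirect in plain Python (illustration only, not the actual runtime mechanism): function "pointers" are small integers indexing a table, and every indirect call checks the expected signature first, so a corrupted index either lands on a valid function start or traps, never on mid-function code.

```python
# Each table entry is (signature, function); signatures use a made-up
# shorthand, e.g. "ii_i" for (i32, i32) -> i32.
FUNC_TABLE = [
    ("ii_i", lambda a, b: a + b),   # index 0
    ("ii_i", lambda a, b: a * b),   # index 1, same signature
    ("i_i",  lambda a: -a),         # index 2, different signature
]

def call_indirect(index, expected_sig, *args):
    if index >= len(FUNC_TABLE):
        raise RuntimeError("trap: undefined table element")
    sig, func = FUNC_TABLE[index]
    if sig != expected_sig:
        raise RuntimeError("trap: indirect call signature mismatch")
    return func(*args)

call_indirect(0, "ii_i", 2, 3)  # → 5
# call_indirect(2, "ii_i", 2, 3) would trap: signature mismatch,
# even though index 2 is a perfectly valid function.
```

Compare this with native code, where an overwritten function pointer can jump to any executable byte in the address space.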


Absolutely, but the flipside is that eval, 'new Function' or 'new WebAssembly.Module' are one import away :( At least you can use Content Security Policy to disable those, and modern linters strongly discourage their use from JS, but many modern frameworks tuck them away in their implementation to solve interop problems.
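For reference, a CSP along these lines covers that case: omitting 'unsafe-eval' from script-src blocks eval and new Function, while the 'wasm-unsafe-eval' keyword (supported in modern browsers) re-enables WebAssembly compilation without re-enabling JS eval.

```
Content-Security-Policy: script-src 'self' 'wasm-unsafe-eval'
```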


While I mostly agree with you, let me play devil's advocate for a bit (just for fun!)

Let's start from the premise that (probably) most of the bugs will happen within the host-defined imports (i.e., bugs in the system calls/kernel) rather than in the Wasm runtime itself. On that front, Wasm host system-call implementations don't offer much advantage compared to other virtualization approaches such as Firecracker.

However, those advantages only apply when you run software that's already fully containerized (in an OCI-like container). But here's the nit: there's a lot of software that is not containerized (for example, 3rd party libraries).

Running these libraries in Wasm can actually be more secure than running the native ones, since we bring Firecracker-like security to the system calls the library may use. Especially when running this software locally.

So, in my view, Wasm pushes most software toward being secure, regardless of its kind (library or application/OCI container).

It's a small thing that might make a big difference!




