Hacker News

But we already have containers and VMs are cheap.

I find WASM interesting from a technical perspective, but not from a practical one.




When Cloudflare Workers launched, they said V8 isolates had some great properties for serverless-style compute:

5 ms cold starts vs 500+ ms for containers

3 MB memory vs 35 MB for a similar container

No context switch between different tenants' code

No virtualization overhead

I'm sure these numbers would be different today, for instance with Firecracker, but there's probably still a memory and/or cold start advantage to V8 isolates.

https://blog.cloudflare.com/cloud-computing-without-containe...


But in this case it's running PHP, which doesn't have a long-running model: it always cold starts, and it does so really fast natively. I can't see how it could be faster in WASM.


100% agree.

In almost all cloud deployments, whether transparently or not, you'll have a hypervisor/VM underneath for hardware-level/strong isolation reasons. Using wasm on top of that stack purely for isolation might not be the best use of it. Having said that, if wasm is useful for other reasons (e.g., you need to run wasm blobs on behalf of your users/customers), then my (admittedly biased) view is that you should run these in an extremely specialized VM that can run the blob and little else.

If you do this, it is entirely possible to have a VM that can run wasm and still consume only a few MBs and cold start/scale to 0 in milliseconds. On kraft.cloud we do this (e.g. wazero with a 20 ms cold start: https://docs.kraft.cloud/guides/wazero/).


“Containers are not a sandboxing mechanism”, I hear reasonably often (although that seems surmountable at least in theory?).

VMs are cheap, but not “let’s run thousands of them on ‘the edge’ in case we get a request for any of them!” cheap.


On kraft.cloud we can (we've done internal stress tests for this) run thousands of specialized VMs (aka unikernels) scaled to zero, meaning that when a request for one of them arrives we can wake it up and respond within the timescale of an RTT. You can take it for a spin — just use the -0 flag when deploying to enable scale to zero (https://docs.kraft.cloud/guides/features/scaletozero/).
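For reference, a deployment with scale to zero enabled looks roughly like this — only the -0 flag comes from the docs linked above; the port and image arguments are illustrative placeholders, so check the docs for the exact syntax:

```shell
# Deploy an app to kraft.cloud with scale-to-zero enabled (-0).
# The port mapping and image name below are placeholders, not prescriptive.
kraft cloud deploy -0 -p 443:8080 my-app:latest
```

With -0 set, the instance is suspended when idle and woken on the first incoming request, which is where the RTT-scale wake-up mentioned above applies.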


Interesting – are we talking actual Linux VMs here, with binary-compatible syscalls etc., or something that applications need to be specifically built or packaged for in some way?



