I am sure this has been answered before, but a quick Google search didn't really clear it up for me. I don't quite understand the usage of WASM on the server side (not denigrating, just looking for an explanation). If you are using Rust, can't you just compile it down to a binary and run that on your server? Or is the main advantage the sandboxing? Or is the idea that you have a bunch of WASM-compatible servers and can then schedule a bunch of different WASM programs on them, no matter what language they were written in?
Is the workflow: Build app in rust, compile it down to WASM, and then run it on a server with wasmtime? I think I am missing some step.
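For concreteness, here is the workflow I have in mind (crate layout, names, and paths are illustrative; the program itself needs nothing Wasm-specific):

```rust
// An ordinary Rust program; the source contains nothing Wasm-specific.
fn greeting() -> &'static str {
    "hello from wasm"
}

fn main() {
    println!("{}", greeting());
}

// Then, roughly:
//   rustup target add wasm32-wasi
//   cargo build --release --target wasm32-wasi
//   wasmtime run target/wasm32-wasi/release/app.wasm
```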
I'm the author of a server-side Wasm runtime[0] that focuses primarily on Rust compiled to Wasm, and this question comes up quite often. Why compile to Wasm if you can compile to native code directly? To add to Radu's answer, here are some of my favourite reasons.
Having an in-between bytecode layer allows you to build application architectures in Rust that would not be possible when compiling directly to machine code. Hot reloading is a good example: having the VM hold onto resources like TCP streams and file descriptors lets you swap out the business logic without even breaking a TCP connection. Fine-grained sandboxing is another good example. Revoking filesystem access from just parts of your application, or limiting memory/CPU usage for each individual request[1], is something that is very hard to do correctly without a VM managing it.
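A rough sketch of the hot-reload idea in plain Rust (everything here is illustrative, not our runtime's actual API; a `String` stands in for a held-open TcpStream):

```rust
// The host owns long-lived resources; the business logic is just a value
// that can be swapped out, the way a Wasm VM can swap a module while it
// keeps holding sockets and file descriptors open.
type Handler = Box<dyn Fn(&str) -> String>;

struct Host {
    connection: String, // stands in for a TcpStream the host keeps open
    handler: Handler,
}

impl Host {
    fn new(handler: Handler) -> Self {
        Host { connection: String::from("conn-1"), handler }
    }

    fn handle(&self, request: &str) -> String {
        (self.handler)(request)
    }

    // "Hot reload": replace the logic; `connection` is untouched.
    fn reload(&mut self, new_handler: Handler) {
        self.handler = new_handler;
    }
}
```

The point of the sketch: because the host (here a struct, in reality the VM) owns the connection, the logic can change under it without the peer noticing.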
A less obvious benefit is the improvement to the developer experience, like compile times. Most of the dependencies (async executor, TCP stack, message passing, ...) are usually already part of the Wasm runtime and don't need to be compiled/linked again. The Rust compiler also seems to have an easier time generating .wasm executables than native ones; most Rust Wasm apps I write compile much faster than equivalent native ones, simply because there is so much less for the compiler to do.
Many Wasm runtimes, like lunatic, include an async scheduler and green threads/processes. This means you get most of the benefits of async Rust without actually having to use async or worry about all the issues that come with it[2].
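The programming model looks something like this sketch (plain std threads and channels standing in for the runtime's lightweight processes; this is not lunatic's actual API, just the shape of the code):

```rust
use std::sync::mpsc;
use std::thread;

// An isolated "process" that only communicates over channels, written as
// ordinary blocking Rust. A green-thread scheduler would run many of these
// cheaply; here an OS thread plays that role purely for illustration.
fn spawn_worker(rx: mpsc::Receiver<u64>, tx: mpsc::Sender<u64>) {
    thread::spawn(move || {
        // Ordinary blocking recv(); no async/await anywhere.
        while let Ok(n) = rx.recv() {
            tx.send(n * 2).unwrap();
        }
    });
}
```

You write straight-line blocking code; the runtime handles the scheduling that async would otherwise push onto you.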
Thanks for the response. Lunatic seems like a really interesting piece of software. Allowing you to run anything that compiles down to Wasm seems like an awesome opportunity for a platform to solve some of the headaches of current serverless setups.
But you can already do this with DLLs or .so files. You can keep your network logic, for example, inside another module (perhaps the host app) and then load the logic that acts on the received data as a DLL.
Process separation and IPC natively using a library such as Qt is literally a few hundred lines of code.
But that's sort of moving the goalposts now. The original claim for WASM was that it allows hot reloading, which is a thing you can do with DLLs just fine.
No, the original claim of Wasm definitely isn't only one side point about development. The original claim is that all of these features are packaged in a well-thought-out whole, and the success of this platform (thanks mostly to browsers) is what's so interesting about it.
It is also hard to contain a DLL/SO; every system has different mechanisms to ensure it cannot access 'everything'. Wasm, as built for the browser, has sandboxing in its DNA.
Could you explain more about how hot reload works? I don't see how you can continue from some arbitrary point in an HTTP request, for example, without having detailed knowledge about how a language stores its state in memory and how the code changed between versions.
In short, when compiling to Wasm, you are building a binary that is agnostic of the operating system and CPU architecture it is going to run on, so it can be executed in lots of very different places (microcontrollers, Raspberry Pis, the cloud).
You can also use several VERY different programming languages and interoperate between them (write a component in Rust, import it in a new component in JavaScript, and use those two from C#; this is an example of how the component model will, hopefully soon, enable cross-language interop in WebAssembly).
Among other benefits: the compact binary format (which makes it easy to distribute modules), the isolation sandbox, and a common compilation target.
Sure, I suppose that's one way to put it. It has a lot of real, practical benefits over the JVM though (locked down by default, small surface area, simpler, buy-in from major platforms, compilation target for multiple major languages), and this feels like a dismissal you could put on anything that follows in the footsteps of something else. One could dismiss Rust as hipster C++, but that wouldn't really engage with the issues of compile-time memory safety or concurrency or ADTs. Or JSON as the hipster XML! These are all true in a sense, but they don't address the actual advantages and disadvantages (both of which exist for all of these examples).
I'd think of it as plugins everywhere, like in L7 proxies [1]. The code for a network filter could be reloaded in the server upon detection of a configuration change. I believe the startup mad dash is on to capture who is going to run the container runtimes and host the plugin repositories.
So my guess is that Fermyon Technologies with Spin could be looking to follow the Vercel-with-NextJS model, where everything is open source and runnable from the development runtime, but in five years there will be features like Edge Functions [2] that are only available when you deploy to their hosted service. They'll likely work towards that with web services with Spin [3] and a CMS with Bartholomew [4] for starters. Instead of everything linked together directly in a NodeJS app, your code will run from a WASM library in a sandbox.
But my guess about the Vercel model could be slightly wrong after checking out their solid founding team [5] -- the bios emphasize the WASM and Kubernetes worlds. The problem, if it goes something like the Docker model, is that the standard just gets set with the big players like OpenContainer [6], the enterprise business gets sold off [7], and/or the company gets folded into one of the cloud infra players (with the data centers).