I really want to play with HaLVM at some point. Unikernels are fascinating to me. Think of how many server hours we can save by just booting up a unikernel on each incoming request.
The problem with HTTP, for me, is that I need to have some program running, listening on a port, in order to access the functionality that I've chosen to expose behind a REST API.
With unikernels, RESTful web servers go from being a special kind of function that requires an always-running executable to the executable being "applied to" incoming requests, as needed. So a DNS server can basically take an incoming request, boot up a unikernel in 20 ms, reply with the unikernel's IP, and the newly started unikernel will receive the actual request data shortly thereafter. This feels more like "applying a web function" to an incoming request, and throwing away that function when the connection is closed - like a garbage collector, of sorts. Rather than having some executable sit and wait for data on a TCP socket, it would be nicer to distribute web apps as shared objects: loaded into memory when needed and applied to a request.
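Roughly the shape I have in mind, sketched in C - everything here is hypothetical (the app.cfg file, the idea that the guest's IP is preassigned in it); the only real piece is xl create, the standard Xen toolstack command for booting a guest:

    /* Hypothetical dispatcher core: boot one unikernel guest per
     * incoming request via the Xen toolstack, then let the DNS front
     * end answer with the IP preassigned in the guest config. */
    #include <stdio.h>
    #include <stdlib.h>

    static int boot_unikernel(const char *cfg)
    {
        char cmd[256];
        snprintf(cmd, sizeof cmd, "xl create %s", cfg);
        return system(cmd);   /* non-zero means the boot failed */
    }

    int main(void)
    {
        /* A DNS front end would call this when a query for the app's
         * name arrives, then reply with the IP from app.cfg. */
        if (boot_unikernel("app.cfg") != 0) {
            fprintf(stderr, "xl create failed\n");
            return 1;
        }
        puts("unikernel booting; answer the query with its IP");
        return 0;
    }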
> So a DNS server can basically take an incoming request, boot up a unikernel in 20 ms, reply with the unikernel IP, and the newly started unikernel will receive the actual request data shortly thereafter.
This actually sounds like how you'd do it in Erlang - and it's really elegant, because WebSockets/Server-Sent Events are simply encapsulated in processes spawned when the request is received. I don't know if this is how Erlang webservers actually work, but it's how I'd do it.
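Sketching the shape in C (since I can't vouch for the Erlang internals): one forked POSIX process per accepted connection stands in for the per-request Erlang process; the port and the echo body are arbitrary placeholders.

    /* Spawn-per-connection server: each accepted connection gets a
     * fresh process that dies when the connection closes, roughly the
     * per-request Erlang process described above. */
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <signal.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        signal(SIGCHLD, SIG_IGN);             /* auto-reap children */
        int srv = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in addr = {0};
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port = htons(8080);          /* arbitrary port */
        if (bind(srv, (struct sockaddr *)&addr, sizeof addr) < 0)
            return 1;
        listen(srv, 16);

        for (;;) {
            int conn = accept(srv, NULL, NULL);
            if (conn < 0) continue;
            if (fork() == 0) {                /* one process per request */
                close(srv);
                char buf[512];
                ssize_t n = read(conn, buf, sizeof buf);
                if (n > 0) write(conn, buf, n);   /* placeholder: echo */
                close(conn);
                _exit(0);                     /* throw the "function" away */
            }
            close(conn);
        }
    }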
I'm pretty sure this has been abandoned. Hasn't had any updates in a really long time. "Latest commit bc97a26 on 12 Oct 2015".
One of the contributors, maximk, commented on 25 May: "All recent updates to the code were customer-driven. New users mean new updates. The likely application area for this is NFV, modular software for telecoms. No definite plans."
It's not abandoned AFAIK; it's a small team that does consulting, and they have to prioritise work for customers. When that work coincides with LING, they can make progress. I've heard this from more than one unikernel/LibOS project.
I have seen this, and some of their blogposts have been quite intriguing, but I haven't seen much recently. Additionally, I have yet to see a guide to making and deploying a web application on it.
I'm not sure the second part of your question makes sense in the context of LING. It's a complete reimplementation of an Erlang emulator, not a derivative of BEAM.
That said, I've been working recently with Erlang on Rumprun precisely because I didn't want to rely on a totally unknown commodity (LING) in terms of implementation and behavioral semantics at the same time as drastically changing my operational semantics (unikernel deploy).
I think what you propose is pretty close to the original model of dynamic web requests: CGI.
The webserver receives a request, checks the destination, and then applies a function to the request by spawning an executable that handles it.
The difference is that you want to spawn a unikernel instead of an executable, and your acceptor is not a webserver but something else. I think those are performance, security, and deployment considerations rather than huge architectural differences.
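To make the parallel concrete: CGI dispatch really is just fork/exec with the request encoded in environment variables. A stripped-down spawner might look like this in C (the handler path is made up; REQUEST_METHOD and PATH_INFO are standard CGI variable names):

    /* Bare-bones CGI-style dispatch: encode the request in standard
     * CGI environment variables, fork, and exec the handler, whose
     * stdout becomes the response. "/srv/cgi/myapp" is illustrative. */
    #include <stdlib.h>
    #include <sys/wait.h>
    #include <unistd.h>

    static int run_cgi(const char *handler, const char *method, const char *path)
    {
        pid_t pid = fork();
        if (pid == 0) {
            setenv("REQUEST_METHOD", method, 1);
            setenv("PATH_INFO", path, 1);
            execl(handler, handler, (char *)NULL);
            _exit(127);                       /* exec failed */
        }
        int status = 0;
        waitpid(pid, &status, 0);
        return status;
    }

    int main(void)
    {
        /* One process per request, thrown away afterwards: "applying a
         * function" to the request, minus the unikernel. */
        return run_cgi("/srv/cgi/myapp", "GET", "/index");
    }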
> "Rather than having some executable sit and wait for data on a TCP socket, it would be nicer to be able to distribute web apps as shared objects: loaded into memory when needed and applied to a request."
How would you know which web app to apply to a request without associating apps with TCP (or UDP) sockets? I'm not saying it can't be done, I'd just like to understand what you're proposing.
I guess one way would be to name your executable/object code appropriately, e.g. mywebapp.io.exe, and have what would be a kind of reverse proxy (listening on a socket) open this executable/dll and apply it to incoming requests for mywebapp.io.
It's really just associating executables with a host/domain name, isn't it? Of course, it would be nice to serve different resources on the same domain using different executables, but I guess this could work too, if we just define domain->resource->executable routes.
It's really just changing the interface from a TCP socket that a running executable listens on to something saner, like an .so/.dll file that exposes an httpmain() function accepting an HTTP request as an argument. Then the reverse proxy would be a caller of a function, rather than a dispatcher of data.
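A minimal sketch of that caller, in C, assuming the shared object exports the httpmain() symbol proposed above; the signature (raw request text in, response text out) is my guess, and the file name echoes the earlier example. Link with -ldl on Linux.

    /* Hypothetical "web function proxy" core: load the app as a shared
     * object and call its httpmain() instead of forwarding bytes to a
     * listening process. The httpmain signature is an assumption. */
    #include <dlfcn.h>
    #include <stdio.h>

    typedef const char *(*httpmain_fn)(const char *request);

    int main(void)
    {
        void *app = dlopen("./mywebapp.io.so", RTLD_NOW);
        if (!app) { fprintf(stderr, "%s\n", dlerror()); return 1; }

        httpmain_fn httpmain = (httpmain_fn)dlsym(app, "httpmain");
        if (!httpmain) { fprintf(stderr, "%s\n", dlerror()); return 1; }

        /* Apply the web app to a request, then unload it. */
        const char *response = httpmain("GET / HTTP/1.1\r\nHost: mywebapp.io\r\n\r\n");
        puts(response);
        dlclose(app);
        return 0;
    }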
Then Amazon/Google/MSAzure would have this reverse proxy, which wouldn't be a reverse proxy at all anymore, running on their infrastructure. You would distribute your web app as a Xen image, and the reverse proxy/"web function proxy" would boot up your VM instance on incoming requests, charging for the number of milliseconds the instance runs.
How are most of you able to do unikernel deployment and development? I've found that my primary deployment target (AWS EC2) is disqualified, since most of these runtimes need hardware virtualization support.
I'm not sure what you mean. I haven't had problems getting LING, Rumprun, and MirageOS unikernels deployed on EC2 using AWS's user-provided kernel support. It requires a kind of chain-booting process, but it works fine.
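For the curious: the chain-boot works because AWS's user-provided kernel support is a PV-GRUB image that reads a GRUB menu.lst from your volume and boots whatever kernel it names; with a unikernel, you point it straight at the unikernel image. Something along these lines (paths and title are illustrative), in /boot/grub/menu.lst:

    default 0
    timeout 0
    title My unikernel
     root (hd0)
     kernel /boot/unikernel.gz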