
>If it were 5-10x faster than that you could probably fit that into a lot of interesting use-cases for on-demand workloads

That's basically Lambda, although you lose control of the kernel and some of the userspace (you can use Docker container images and the HTTP interface on Lambda to get some flexibility back)

Under the hood, Lambda uses optimized Firecracker VMs for < 1 sec boot

>I think for the majority of use cases boot performance is not particularly important

Anything with autoscaling. I think CI is probably a big use case (those environments get spun up and torn down pretty quickly), and otherwise you run into hairy things like unprivileged Docker-in-Docker builds trying to run nested in a container




Yeah, CI is a good use case. Even for autoscaling, though, I kinda feel like you'd need to be a lot faster to make a huge difference tbh.

And yeah, Firecracker is pretty sick, but it's also something you can just run yourself on EC2 metal instances, and then you get full control over the kernel and networking too, which is neat.
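
Getting a microVM up really is just pointing Firecracker at a kernel image and a rootfs. Rough Python sketch of the sort of thing you end up scripting (every path and size here is made up, and you'd still need to wire up a tap device for networking):

    # Minimal sketch of booting a Firecracker microVM by writing a config file
    # and exec'ing the firecracker binary. Assumes you already have the binary,
    # an uncompressed kernel (vmlinux) and an ext4 rootfs on the host; every
    # path and size below is illustrative.
    import json
    import subprocess
    import tempfile

    vm_config = {
        "boot-source": {
            "kernel_image_path": "/images/vmlinux",  # your own kernel build
            "boot_args": "console=ttyS0 reboot=k panic=1 pci=off",
        },
        "drives": [
            {
                "drive_id": "rootfs",
                "path_on_host": "/images/rootfs.ext4",  # your own root filesystem
                "is_root_device": True,
                "is_read_only": False,
            }
        ],
        "machine-config": {"vcpu_count": 1, "mem_size_mib": 256},
    }

    with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as f:
        json.dump(vm_config, f)
        config_path = f.name

    # With --config-file, Firecracker boots the microVM immediately; this call
    # blocks until the guest shuts down. The API socket is still there for
    # runtime actions if you need them.
    subprocess.run(
        ["firecracker", "--api-sock", "/tmp/fc.sock", "--config-file", config_path],
        check=True,
    )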


There's no fundamental reason Lambda needs to make you lose control of the kernel; I'd love to have Lambda with a custom kernel, and that doesn't seem like it'd be too difficult to support.


You can do Lambda with containers, which should get you close, I think.


A container does not include the kernel, so it doesn't get any closer. I just want a single static binary and a kernel, not a full container.


I wonder why they don't expose a kernel instead of just the rootfs. It's hard to imagine a great reason. Maybe they harden their guest kernel?


At one point, Lambda didn't expose the ability to write custom runtimes, and only provided runtimes for specific languages. People reverse-engineered those runtimes and figured out how to build custom ones. Eventually, Amazon provided a documented, well-supported way to build and run custom runtimes, but that required documenting the interfaces provided to those runtimes (e.g. environment variables and HTTP APIs instead of language-runtime-specific APIs).
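
Once that interface was documented, a custom runtime boiled down to a long-poll loop over a couple of HTTP endpoints. Rough Python sketch (error-reporting endpoints omitted, and the handler is just a placeholder):

    # Rough sketch of what a custom runtime boils down to now that the HTTP
    # interface is documented: a long-poll loop against the Runtime API, whose
    # address Lambda passes in via the AWS_LAMBDA_RUNTIME_API env var.
    import json
    import os
    import urllib.request

    api = os.environ["AWS_LAMBDA_RUNTIME_API"]
    base = f"http://{api}/2018-06-01/runtime"

    while True:
        # Block until the next invocation; the request id arrives in a header.
        with urllib.request.urlopen(f"{base}/invocation/next") as resp:
            request_id = resp.headers["Lambda-Runtime-Aws-Request-Id"]
            event = json.loads(resp.read())

        result = {"echo": event}  # placeholder for real handler logic

        # Report the result for this specific invocation.
        post = urllib.request.Request(
            f"{base}/invocation/{request_id}/response",
            data=json.dumps(result).encode(),
            method="POST",
        )
        with urllib.request.urlopen(post):
            pass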

I'd love to see Lambda support custom kernels. That would require a similar approach: document the actual VM interface provided to the kernel, including the virtual hardware and the mechanism used to expose the Lambda API. I'd guess that they haven't yet due to a combination of 1) not enough demand for Lambda with custom kernels and 2) the freedom to modify the kernel/hardware interface arbitrarily because they control both sides of it.


Yeah I'd bet that it's just a "haven't justified this work yet" kinda thing. We just run Firecracker VMs ourselves.


Interesting.

How does that work?

How do Firecracker VMs differ from containers on lambda or fargate?


Lambda uses a stripped-down Linux kernel (afaik it has some syscalls removed).

The kernel surface is part of their security model. There are some details here: https://www.bschaatsbergen.com/behind-the-scenes-lambda

E.g., exposing the kernel would undo some of that intentional isolation.


I've seen that, but I wonder to what extent they've done custom work and to what extent they've just used established Kconfig options to compile out surface area they're not using.

In any case, Firecracker+Nitro seem like they'd be a sufficient security boundary.



