
So basically, the latency from "CreateInstance" to "I can SSH in" for the fastest AMI is roughly 3 seconds, with max being 8 seconds.

That's actually pretty solid. If it were 5-10x faster, you could probably fit it into a lot of interesting use cases for on-demand workloads. The bottleneck is the EC2 allocation itself, and I'd be interested in seeing what warm EC2 instances can do for you there.
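
Not from the post, but a rough sketch of how you could time this yourself with boto3 (the AMI ID, instance type, and key name below are placeholders):

    import socket
    import time

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    t0 = time.monotonic()
    resp = ec2.run_instances(
        ImageId="ami-xxxxxxxxxxxxxxxxx",   # placeholder AMI
        InstanceType="c5.xlarge",          # placeholder instance type
        KeyName="my-key",                  # placeholder key pair
        MinCount=1,
        MaxCount=1,
    )
    instance_id = resp["Instances"][0]["InstanceId"]
    print(f"RunInstances returned after {time.monotonic() - t0:.1f}s")

    # Poll until DescribeInstances reports the instance as "running".
    ec2.get_waiter("instance_running").wait(
        InstanceIds=[instance_id], WaiterConfig={"Delay": 1}
    )
    print(f"running after {time.monotonic() - t0:.1f}s")

    ip = ec2.describe_instances(InstanceIds=[instance_id])[
        "Reservations"][0]["Instances"][0]["PublicIpAddress"]

    # Retry TCP port 22 until sshd answers.
    while True:
        try:
            with socket.create_connection((ip, 22), timeout=1):
                break
        except OSError:
            time.sleep(0.2)
    print(f"SSH reachable after {time.monotonic() - t0:.1f}s")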

That said, I think for the majority of use cases boot performance is not particularly important. If you want really fast 'boot' you might just want VMs and containers - that would cut out the 3-8 seconds of instance allocation time, as well as most of the rest of the work.

Curious to see a follow up on what's going on with FreeBSD - seems like it takes ages to get the network up.




>If it were 5-10x faster than that you could probably fit that into a lot of interesting use-cases for on-demand workloads

That's basically Lambda, although you lose control of the kernel and some of the userspace (you can use Docker container images and the HTTP interface on Lambda to get some flexibility back).

Under the hood, Lambda uses optimized Firecracker VMs for < 1 sec boot

>I think for the majority of use cases boot performance is not particularly important

Anything with auto scaling. I think CI is probably a big use case (those environments get spun up and torn down pretty quickly), and otherwise you end up with hairy things like unprivileged Docker-in-Docker builds trying to run nested inside a container.


Yeah, CI is a good use case. Even for autoscaling, though, I kinda feel like you'd need to be a lot faster to make a huge difference.

And yeah, Firecracker is pretty sick, but it's also something you can just use yourself on EC2 metal instances, and then you get full control over the kernel and networking too, which is neat.
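
For anyone curious what "just use it yourself" looks like, here's a rough sketch of driving Firecracker's API socket from Python (using the third-party requests-unixsocket package); the kernel/rootfs paths are placeholders, and firecracker is assumed to already be running with --api-sock /tmp/firecracker.socket:

    import requests_unixsocket  # pip install requests-unixsocket

    sock = "http+unix://%2Ftmp%2Ffirecracker.socket"
    s = requests_unixsocket.Session()

    # Pick your own guest kernel -- the "full control over the kernel" part.
    s.put(f"{sock}/boot-source", json={
        "kernel_image_path": "/srv/vmlinux",
        "boot_args": "console=ttyS0 reboot=k panic=1 pci=off",
    }).raise_for_status()

    # Attach a root filesystem image.
    s.put(f"{sock}/drives/rootfs", json={
        "drive_id": "rootfs",
        "path_on_host": "/srv/rootfs.ext4",
        "is_root_device": True,
        "is_read_only": False,
    }).raise_for_status()

    # Boot the microVM.
    s.put(f"{sock}/actions", json={"action_type": "InstanceStart"}).raise_for_status()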


There's no fundamental reason Lambda needs to make you lose control of the kernel; I'd love to have Lambda with a custom kernel, and that doesn't seem like it'd be too difficult to support.


You can do lambda with containers which should get you close, I think.


A container does not include the kernel, so it doesn't get any closer. I just want a single static binary and a kernel, not a full container.


I wonder why they don't expose a kernel instead of just the rootfs. It's hard to imagine a great reason. Maybe they harden their guest kernel?


At one point, Lambda didn't expose the ability to write custom runtimes, and only provided runtimes for specific languages. People reverse-engineered those runtimes and figured out how to build custom ones. Eventually, Amazon provided a documented, well-supported way to build and run custom runtimes, but that required documenting the interfaces provided to those runtimes (e.g. environment variables and HTTP APIs instead of language-runtime-specific APIs).

I'd love to see Lambda support custom kernels. That would require a similar approach: document the actual VM interface provided to the kernel, including the virtual hardware and the mechanism used to expose the Lambda API. I'd guess that they haven't yet due to a combination of 1) not enough demand for Lambda with custom kernels and 2) the freedom to modify the kernel/hardware interface arbitrarily because they control both sides of it.
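
For reference, the documented custom-runtime interface is just a small HTTP loop; here's a rough standard-library-only sketch (error handling omitted, and the "echo" handler is a stand-in for real user code):

    import json
    import os
    import urllib.request

    # Base URL comes from the documented AWS_LAMBDA_RUNTIME_API environment variable.
    api = os.environ["AWS_LAMBDA_RUNTIME_API"]
    base = f"http://{api}/2018-06-01/runtime/invocation"

    while True:
        # Long-poll for the next invocation event.
        with urllib.request.urlopen(f"{base}/next") as resp:
            request_id = resp.headers["Lambda-Runtime-Aws-Request-Id"]
            event = json.loads(resp.read())

        # A real runtime would dispatch to user code here; this just echoes.
        result = json.dumps({"echo": event}).encode()

        # POST the result for this invocation back to the Runtime API.
        urllib.request.urlopen(
            urllib.request.Request(f"{base}/{request_id}/response", data=result)
        )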


Yeah I'd bet that it's just a "haven't justified this work yet" kinda thing. We just run Firecracker VMs ourselves.


Interesting.

How does that work?

How do Firecracker VMs differ from containers on lambda or fargate?


Lambda uses a stripped down Linux kernel (afaik it has some syscalls removed)

The kernel surface is part of their security model. There's some details here https://www.bschaatsbergen.com/behind-the-scenes-lambda

E.g. exposing the kernel would undo some intentional isolation


I've seen that, but I wonder to what extent they've done custom work and to what extent they've just used established Kconfig options to compile out surface area they're not using.

In any case, Firecracker+Nitro seem like they'd be a sufficient security boundary.


Curious to see a follow up on what's going on with FreeBSD - seems like it takes ages to get the network up.

Our BIOS boot loader is very slow. I'll be writing about FreeBSD boot performance in a later post.


Hi Colin

Do you know if there are any plans for FreeBSD to create a super minimal server version that can be used as a Docker host ... similar in size to Alpine Linux?


I know lots of people have made stripped down versions of FreeBSD, e.g. nanobsd. Beyond that, I'm not aware of anything specific... but I probably wouldn't be since I don't pay much attention to the container space. Try asking on one of the mailing lists maybe?


Where did you get 3? The fastest numbers I could see in the post add up to 7-8s


Per the post it's 8.4s and independent of the OS:

> The first two values — the time taken for a RunInstances API call to successfully return, and the time taken after RunInstances returns before a DescribeInstances call says that the instance is "running" — are consistent across all the AMIs I tested, at roughly 1.5 and 6.9 seconds respectively

“Running to available” is what's in the table, ranging from 1.23s to 70s or so, so even the fastest AMI is roughly 1.5 + 6.9 + 1.23 ≈ 9.6 seconds end to end.


> use-cases for on-demand workloads

Yeah, true! Maybe, depending on availability, they could spin up some number of spare servers to get the median time even lower.


> Curious to see a follow up on what's going on with FreeBSD - seems like it takes ages to get the network up.

FreeBSD rc executes all rc.d scripts sequentially, in one thread. OpenRC, AFAIK, can start daemons in parallel, but unfortunately the switch to OpenRC was abandoned: https://reviews.freebsd.org/D18578



