
It depends on which performance metrics you're interested in, where you draw the boundaries for individual workloads, and how you then schedule those workloads. Hyperlight can start new Wasm workloads so quickly that you might not need to keep any idling instances around ("scale to zero"). That's new, and it makes comparisons a little more complicated. For example:
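
To make that concrete, here's a rough sketch (in Rust) of the pattern fast cold starts enable: spin up a fresh sandbox per request and throw it away afterwards. `spawn_and_run` is a hypothetical stand-in for whatever sandbox-creation API you're using, not the actual Hyperlight interface:

    use std::time::Instant;

    // Hypothetical stand-in for the real sandbox API: create a fresh
    // micro-VM, instantiate the Wasm module, call the handler, then tear
    // it all down again. Replace with the actual Hyperlight Wasm calls.
    fn spawn_and_run(_wasm_module: &[u8], _request: &[u8]) -> Vec<u8> {
        unimplemented!("wire up your sandbox API here")
    }

    fn handle(wasm_module: &[u8], request: &[u8]) -> Vec<u8> {
        let start = Instant::now();
        let response = spawn_and_run(wasm_module, request);
        // If this stays in the low single-digit milliseconds, a fresh
        // sandbox per request is viable and the warm pool disappears.
        eprintln!("cold start + call took {:?}", start.elapsed());
        response
    }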

- If we take VMs as our boundary and compare cold start times, Hyperlight comes out comfortably on top: 1-2 ms vs. 125 ms+.

- If we take warm instances and measure network latency for requests, Hyperlight will come out on top if deployed to a CDN node (physics! see the back-of-envelope numbers after this list). But if both workloads run in the same data center, performance will be a lot closer.

- Say we have a workload where we need to transmux a terabyte of video, and we care about doing that as quickly as possible. A native binary has access to native instructions that will almost certainly outperform pure-Wasm workloads.
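
On the physics point: the speed of light in fiber (roughly 200 km per millisecond) puts a hard floor under round-trip time, so once both sides are warm, distance dominates. A back-of-envelope sketch with made-up distances:

    // RTT lower bound from distance alone, ignoring processing and queuing:
    // light in fiber covers roughly 200 km per millisecond.
    fn rtt_floor_ms(one_way_km: f64) -> f64 {
        2.0 * one_way_km / 200.0
    }

    fn main() {
        // Illustrative distances, not measurements.
        println!("CDN node ~50 km away:      >= {:.1} ms", rtt_floor_ms(50.0));   // 0.5 ms
        println!("data center ~2000 km away: >= {:.0} ms", rtt_floor_ms(2000.0)); // 20 ms
    }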

I think of Hyperlight Wasm as yet another tool in the toolbox. There are some things it's great at (cold starts, portability, security) and some things it isn't. At least, not yet. Whether it's a good fit depends on what you're doing.


