
> Which vCPU is faster, a ten year old Xeon E5-2643 v2 at 3.5GHz or a two year old Xeon Platinum 8352V at 2.1GHz? It depends on the workload.

It really does not depend on the workload when the workloads we're talking about are, by and large, bound to 1 vCPU or less (CI jobs, serverless functions, etc). Ice Lake cores are substantially faster than Ivy Bridge cores; the 8352V will be faster in practically any workload we're talking about.

However, I do agree with this take if we're talking about, say, Lambda functions, because the vast majority of workloads built on them are bound by IO, not compute; newer core designs won't yield a meaningful improvement in function execution time. Put another way: is a function executing in 75ms instead of 80ms worth paying 30% more? (I made these numbers up, but it's the illustration that matters.)

CI is a different story. CI runs are only bound by IO for the smallest of projects; downloading that 800MB node:18 base Docker image takes some time, but it can very easily and quickly be dwarfed by everything that happens afterward. This is not an uncontroversial opinion: "the CI is slow" is such a meme of a problem at engineering companies nowadays that you'd think more people would have the sense to look at the common denominator (the CI hosts suck) instead of blaming themselves (though often there's blame to go around). We've got a project that builds locally on an M2 Pro, docker pull and push included, in something like 40 seconds; CI takes 4 minutes. It's the crusty CPUs; it's the slow networking; it's the "step 1 is finished, wait 10 seconds for the orchestrator to realize it and start step 2".

And I think we, the community, need to be more vocal about this when talking about platforms that charge by the minute. They are clearly incentivized to leave it shitty. It should even surface in discussions about, for example, the markup of Lambda versus EC2. A 4096MB Lambda function would cost $172/mo if run 24/7, back-to-back. A comparable c6i.large: $62/mo, a third of the price. That's bad enough on the surface, and we need to be cognizant that it's even worse than it initially appears, because Amazon runs Lambda on whatever they have collecting dust in the closet; people still report getting Ivy Bridge and Haswell cores sometimes, in 2023. The better comparison is probably a t2.medium at $33/mo: a 5-6x markup.
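To make the markup concrete, here's the arithmetic as a quick Python sketch. The rates are assumptions based on published us-east-1 on-demand pricing and may have drifted:

    # Back-of-the-envelope monthly costs for the comparison above.
    # Assumed rates: Lambda $0.0000166667/GB-second, c6i.large $0.085/hr,
    # t2.medium $0.0464/hr (us-east-1 on-demand).
    HOURS_PER_MONTH = 730

    lambda_mo = 4 * 0.0000166667 * HOURS_PER_MONTH * 3600  # 4096MB, running 24/7
    c6i_large_mo = 0.085 * HOURS_PER_MONTH
    t2_medium_mo = 0.0464 * HOURS_PER_MONTH

    print(f"lambda:    ${lambda_mo:.0f}/mo")     # ~$175 ($172 above assumes a 720h month)
    print(f"c6i.large: ${c6i_large_mo:.0f}/mo")  # ~$62
    print(f"t2.medium: ${t2_medium_mo:.0f}/mo")  # ~$34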

This isn't new information; Lambda is crazy expensive, blah blah blah; but I don't hear that dimension brought up enough. Calling back to my earlier point: is a function executing in 75ms instead of 80ms worth paying 30% more? Well, we're already paying 550% more; the fact that it doesn't execute in 75ms by default is abhorrent. Put another way: if Lambda, and other serverless systems like it such as hosted CI runners, let cloud providers keep old hardware around far longer than performance improvements say they should, then the markup should not be 500%. We're doing Amazon a favor by using Lambda.




> It really does not depend on the workload when the workloads we're talking about are, by and large, bound to 1 vCPU or less (CI jobs, serverless functions, etc). Ice Lake cores are substantially faster than Ivy Bridge cores; the 8352V will be faster in practically any workload we're talking about.

If you were comparing, e.g., the E5-2667 v2 to the Xeon Gold 6334, you would be right, because they have the same number of cores and the 6334 has a higher rather than lower clock speed.

But the newer CPUs support more cores per socket: the E5-2643 v2 has 6, the Xeon Platinum 8352V has 36.

To make that fit in the power budget, it has a lower base clock, which eats a huge chunk out of Ice Lake's IPC advantage. The newer CPU does have around twice as much L3 cache (54MB vs. 25MB), but that's shared by six times as many cores, so you get 1.5MB/core instead of >4MB/core. It has just over three times the memory bandwidth (8x DDR4-2933 vs. 4x DDR3-1866), but again six times as many cores, so around half as much per core. It can easily be slower despite being newer, even when you're compute bound.
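The per-core arithmetic, as a quick sketch (core counts and cache sizes from the spec sheets cited above; bandwidth assumes the usual 8 bytes per transfer per channel):

    # Per-core L3 and memory bandwidth for the two parts discussed above.
    cpus = {
        "E5-2643 v2":     {"cores": 6,  "l3_mb": 25, "channels": 4, "mt_s": 1866},
        "Platinum 8352V": {"cores": 36, "l3_mb": 54, "channels": 8, "mt_s": 2933},
    }
    for name, c in cpus.items():
        bw = c["channels"] * c["mt_s"] * 8 / 1000  # GB/s, 8 bytes per transfer
        print(f"{name}: {c['l3_mb']/c['cores']:.2f} MB L3/core, "
              f"{bw/c['cores']:.1f} GB/s bandwidth/core")
    # E5-2643 v2:     4.17 MB L3/core, 10.0 GB/s bandwidth/core
    # Platinum 8352V: 1.50 MB L3/core,  5.2 GB/s bandwidth/core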

> We've got a project that builds locally on an M2 Pro, docker pull and push included, in something like 40 seconds; CI takes 4 minutes. It's the crusty CPUs; it's the slow networking; it's the "step 1 is finished, wait 10 seconds for the orchestrator to realize it and start step 2".

Inefficient code and slow hardware are two different things. You can have the fastest machine in the world that finishes step 1 in 4ms and still be waiting 10 full seconds if the orchestrator polls on a timer.
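A minimal illustration of that failure mode (hypothetical orchestrator; the 10-second interval is made up to match the example above):

    import time

    POLL_INTERVAL = 10  # seconds between orchestrator checks (hypothetical)

    def wait_for_step(step_is_done):
        # However fast the step itself finishes, the caller doesn't find out
        # until the next poll tick -- up to a full POLL_INTERVAL of dead time.
        while not step_is_done():
            time.sleep(POLL_INTERVAL)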

But they're operating in a competitive market. If you want a faster system, patronize a company that provides one. Just don't be surprised if it costs more.


Lambda is good for bursty, typically low-activity applications, where it just wouldn't make sense to have EC2 instances running 24x7. Think of some line-of-business app that gets a couple of requests every minute or so. Maybe once a quarter there will be a spike in usage; Lambda scales up and just handles it. Whether requests execute in 50ms (unlikely!) or 500ms, it just doesn't matter.
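Rough math shows why. Sketch assumes a hypothetical 512MB function at 100ms per request, and Lambda rates of $0.0000166667/GB-second plus $0.20 per 1M requests (both assumptions that may have changed):

    # A couple of requests per minute, all month, on Lambda.
    reqs = 2 * 60 * 24 * 30                              # ~2 req/min for 30 days
    per_req = 0.5 * 0.1 * 0.0000166667 + 0.20 / 1_000_000
    print(f"${reqs * per_req:.2f}/mo")                   # ~$0.09/mo, vs. roughly
                                                         # $12/mo for an always-on
                                                         # t4g.small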


Lambda is not crazy expensive. It’s expensive if you’re running something 24/7 in place of a physical server or VM.


It would be nice if I could keep all of Lambda's functionality and tooling but have it run on my own VMs with a long-term commitment.


Not quite sure I follow. But I built an ASP.NET API and deployed it to Lambda, and it cost $2/mo; when it started to get more traffic and the cost got to $20/mo, I moved it to a t4g instance.

When I moved it, I didn’t need to make any code changes :) I just made a systemd file and deployed it.
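For anyone curious, the unit file can be as small as this. A hypothetical sketch only; the paths, names, port, and user are made up, not the commenter's actual setup:

    [Unit]
    Description=Example ASP.NET API (hypothetical unit file)
    After=network.target

    [Service]
    WorkingDirectory=/var/www/myapi
    ExecStart=/usr/bin/dotnet /var/www/myapi/MyApi.dll
    Environment=ASPNETCORE_URLS=http://0.0.0.0:5000
    Restart=always
    User=www-data

    [Install]
    WantedBy=multi-user.target

Then systemctl enable --now myapi.service and the app comes back up with the box.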



