
> After all, the 5090 alone is multiple times the wattage of this SoC.

FWIW, normalizing the wattages (or even underclocking the GPU) will still give you an Nvidia advantage most days. Apple's GPU designs are closer to AMD's than Nvidia's, which means they omit a lot of AI accelerators and focus on raster performance, which is less relevant for LLM work.

Yes, the GPU is faster than the NPU. But Apple's GPU designs haven't traditionally put their competitors out of a job.



M2 Ultra is ~250W (averaging various reports, since Apple doesn’t publish official figures) for the entire SoC.

5090 is 575W without the CPU.

You’d have to cut the Nvidia’s power to roughly a quarter and then pair it with a comparable CPU to normalize the wattage for an actual comparison.
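Back-of-the-envelope, that works out to something like this (the M2 Ultra draw is an estimate, as noted above):

    # Rough power-budget arithmetic behind the comparison above.
    RTX_5090_TDP_W = 575      # Nvidia's rated board power, GPU only
    M2_ULTRA_SOC_W = 250      # estimated whole-SoC draw (CPU + GPU + NPU)

    gpu_budget = RTX_5090_TDP_W / 4          # ~144 W if you cut the 5090 to a quarter
    cpu_budget = M2_ULTRA_SOC_W - gpu_budget # ~106 W left over for a comparable CPU

    print(f"GPU budget: {gpu_budget:.0f} W, CPU budget: {cpu_budget:.0f} W")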

I agree that Apple GPUs aren’t putting the dedicated GPU companies in danger on the benchmarks, but they’re also not really targeting it? They’re in completely different zones on too many fronts to really compare.


Well, select your hardware of choice and see for yourself then: https://browser.geekbench.com/opencl-benchmarks

> but they’re also not really targeting it?

That's fine, but it's not an excuse to ignore the power/performance ratio.


But I’m not ignoring the power/performance ratio? If anything, you are doing that by handwaving away the difference.

Give me a comparable system build where the NVIDIA GPU + any CPU of your choice is running at the same wattage as an M2 Ultra, and outperforms it on average. You’d get 150W for the GPU and 150W for the CPU.
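If you want to make that concrete, divide a Geekbench OpenCL score (from the listing linked above) by the system's power draw. The scores below are placeholders, not real results; swap in actual numbers:

    # Hypothetical perf-per-watt comparison -- the scores are placeholders,
    # plug in real numbers from the Geekbench OpenCL listing linked above.
    def perf_per_watt(score: float, watts: float) -> float:
        return score / watts

    m2_ultra = perf_per_watt(score=200_000, watts=250)  # placeholder score, ~250 W SoC
    dgpu_rig = perf_per_watt(score=300_000, watts=300)  # placeholder score, 150 W GPU + 150 W CPU

    print(f"M2 Ultra: {m2_ultra:.0f} points/W")
    print(f"dGPU rig: {dgpu_rig:.0f} points/W")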

Again, you can’t really compare the two. They’re inherently different systems unless you only care about singular metrics.



