
What's the difference between Apple's unified memory and the shared memory pool Intel and AMD integrated GPUs have had for years?

In theory you could already assign a powerful enough iGPU a few hundred gigabytes of memory, but just like Apple Silicon, integrated GPUs aren't exactly powerful. The performance gap between the M1's iGPU and the AMD 5700G's is less than 10%, and a maxed-out system should theoretically be tweakable to dedicate hundreds of gigabytes of system RAM to the GPU as VRAM.
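
For what it's worth, the "shared pool" behaviour is the same idea CUDA has long exposed as zero-copy mapped memory: one allocation in system DRAM that both the CPU and the GPU address directly, with no copy over the bus. A rough sketch using the standard CUDA runtime API (nothing Apple-specific, just to make the concept concrete):

    #include <cstdio>
    #include <cuda_runtime.h>

    // The GPU reads and writes the same host DRAM the CPU uses; on an
    // iGPU or a unified-memory SoC there is no PCIe copy at all.
    __global__ void scale(float *data, int n, float f) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) data[i] *= f;
    }

    int main() {
        cudaSetDeviceFlags(cudaDeviceMapHost);  // harmless no-op on UVA systems

        const int n = 1 << 20;
        float *host;
        // Mapped ("zero-copy") allocation: one buffer, visible to CPU and GPU.
        cudaHostAlloc((void **)&host, n * sizeof(float), cudaHostAllocMapped);
        for (int i = 0; i < n; i++) host[i] = 1.0f;

        float *dev;
        cudaHostGetDevicePointer((void **)&dev, host, 0);  // GPU-side alias, no memcpy
        scale<<<(n + 255) / 256, 256>>>(dev, n, 2.0f);
        cudaDeviceSynchronize();

        printf("host[0] = %f\n", host[0]);  // 2.0, written directly by the GPU
        cudaFreeHost(host);
        return 0;
    }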

It's just a waste of space. An RTX 3090 is 6 to 7 times faster than even the M1, and the promised ~35% performance increase for the M2 will mean nothing once the 4090 is released later this year.

I think there are better solutions for this. The high throughput of PCIe 5 combined with resizable BAR support could be used to quickly swap banks of GPU memory in and out of system RAM, for example, at some performance cost.
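
Something close to this already exists: CUDA managed memory will oversubscribe VRAM and page data over PCIe on demand (on Linux with Pascal or newer, as far as I know), which is exactly the "swap banks at a performance cost" trade. A rough sketch:

    #include <cstdio>
    #include <cuda_runtime.h>

    // Grid-stride loop so the whole allocation is touched; each faulting
    // access pages data into VRAM on demand, evicting older pages.
    __global__ void touch(char *p, size_t n) {
        size_t stride = (size_t)gridDim.x * blockDim.x;
        for (size_t i = (size_t)blockIdx.x * blockDim.x + threadIdx.x;
             i < n; i += stride)
            p[i] = 1;
    }

    int main() {
        size_t free_b, total_b;
        cudaMemGetInfo(&free_b, &total_b);

        // Ask for 1.5x the card's VRAM. Managed memory lets this succeed
        // and migrates pages over PCIe as the kernel touches them.
        // Assumes enough system RAM to back it; on Windows or pre-Pascal
        // GPUs this may simply fail, hence the error check.
        size_t n = total_b + total_b / 2;
        char *p;
        if (cudaMallocManaged((void **)&p, n) != cudaSuccess) {
            printf("oversubscription not supported here\n");
            return 1;
        }
        touch<<<4096, 256>>>(p, n);
        cudaDeviceSynchronize();
        cudaFree(p);
        return 0;
    }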

One big problem with this is that GPU manufacturers have an incentive not to give consumer GPUs ways to compete with their datacenter products. If a 3080 with some memory tricks can get close enough to an A800, Nvidia would let a lot of profit slip through its hands, and it can't have that.

Maybe Apple's Neural Engine will be able to provide a performance boost here, but it only works under macOS, and the implementations all seem proprietary, so I don't think cross-platform researchers will care much about using it. You're restricted by Apple's memory limits anyway; it's not like you can upgrade their hardware.




Apple gets significant latency and frequency benefits from placing their LPDDR4X on the same package as the SoC.



