
Yes, but how long until integrated beats a separate card in $/performance?

I once bought a separate i387 chip for floating point. Later it was integrated into the main CPU.



Both CPU and GPU are in large part limited by power delivery and heat. Putting them both in one chip tends to make both of them worse. I wouldn't count on that reversing any time soon.

It is, however, curious how CPUs and GPUs have become more similar over time, with CPUs getting ever wider SIMD instructions and GPUs getting better at branching code, integer performance, etc.
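
A toy sketch of that convergence (hypothetical axpy functions, not anyone's actual code): the same data-parallel loop written once for the CPU, where the compiler can auto-vectorize it into SIMD instructions, and once as a CUDA kernel, where the "vector width" is the thread grid and the bounds check is a branch handled per warp.

    // Toy illustration of the convergence (hypothetical axpy functions):
    // the same data-parallel loop, once for CPU SIMD, once as a GPU kernel.
    #include <cstddef>

    // CPU side: the "width" comes from the compiler auto-vectorizing this
    // loop into SSE/AVX-style SIMD instructions.
    void axpy_cpu(float a, const float *x, float *y, std::size_t n) {
        for (std::size_t i = 0; i < n; ++i)
            y[i] = a * x[i] + y[i];
    }

    // GPU side: the "width" is the grid of threads; the bounds check is a
    // branch that modern GPUs handle per warp reasonably cheaply.
    __global__ void axpy_gpu(float a, const float *x, float *y, std::size_t n) {
        std::size_t i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)
            y[i] = a * x[i] + y[i];
    }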


There is some convergence, but CPUs are still latency optimized and GPUs are throughput optimized which makes for vastly different architectures.


> Putting them both in one chip tends to make both of them worse.

You don’t need to do that for them to share a single memory address space. Putting them together would be useful if sharing caches between them made sense, but it doesn’t.
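
For example, CUDA's managed memory already gives the CPU and a discrete card a single shared address space over the PCIe bus. A minimal sketch, assuming a CUDA-capable GPU (the scale kernel is just a made-up example):

    // Minimal sketch (CUDA managed memory; the kernel is a made-up example):
    // the CPU and a discrete GPU share one pointer without sharing a die.
    #include <cstdio>
    #include <cuda_runtime.h>

    __global__ void scale(float *x, int n, float s) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) x[i] *= s;                        // GPU writes through the shared pointer
    }

    int main() {
        const int n = 1 << 20;
        float *x = nullptr;
        cudaMallocManaged(&x, n * sizeof(float));    // one allocation, visible to both sides
        for (int i = 0; i < n; ++i) x[i] = 1.0f;     // CPU initializes it...
        scale<<<(n + 255) / 256, 256>>>(x, n, 2.0f); // ...GPU updates the same addresses
        cudaDeviceSynchronize();
        printf("x[0] = %f\n", x[0]);                 // CPU reads the result back, same pointer
        cudaFree(x);
        return 0;
    }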


Interestingly, the later separate i487 chip was in fact a full 486 CPU and 487 FPU on one die, and it disabled the 486 sitting in the main CPU socket. Integration really means something, even when the product SKUs aren't there yet.


They integrated it, but you were still buying it. You just saved a couple dollars on packaging.

Similarly, an integrated GPU has a meaningful price advantage at low enough price tiers.

But GPU price and power budgets have been pretty steady for a long time, so I wouldn't expect a major shift any time soon.


There are lots of jobs where a GPU is a factor of 1000x faster, so...



