
Do you realize the expertise and money needed to design a processor? That's not even to mention the loads of money you need to bring to TSMC to get capacity. For cutting-edge fabrication, you're still going to be behind Apple and Nvidia in the queue for capacity.



OpenAI doesn't need a general-purpose GPU like Nvidia's. It has two major tasks that could benefit from custom ASICs that we know of: training and inference, and the efficient designs for each are different.

Designing a specialised ASIC is much easier and cheaper than designing a general-purpose processor for a wide range of public applications. You don't need most of the programmability, or even half the compute and fancy memory and scheduling units. You also don't need to develop a rich API (like CUDA or Vulkan) for third parties to use.

This holds even for state-of-the-art compute engines designed for minimal energy consumption. Think of all the crypto-mining ASICs built a few years ago: they were relatively cheap to design and were optimised for calculations per unit of energy.


Your comment is straight to the point: just look at how fast ASICs destroyed the idea of using graphics cards to mine crypto.

I find it very likely that in a year or two we'll have hardware for training and inference that is not only cheaper but also faster.





