Do you realize the kind of expertise and money needed to design a processor? That's not even to mention the loads of money you need to bring to TSMC just to get capacity. For cutting-edge fabrication, you're still going to be behind Apple and Nvidia in the queue for capacity.
OpenAI doesn't need a general-purpose GPU like Nvidia's. It has two major tasks that could benefit from custom ASICs that we know of: training and inference, and the efficient designs for each are different.
Designing a specialised ASIC is much easier and cheaper than designing a general-purpose processor for a wide range of public applications. You don't need most of the programmability, or even half the compute and the fancy memory and scheduling units. You also don't need to develop a rich API (like CUDA or Vulkan) for third parties to use.
This holds even for state-of-the-art compute engines designed to minimise energy consumption. Think of all the crypto-mining ASICs that were built a few years ago: they were relatively cheap to design and were optimised for calculations per unit of energy.