Oh, a lot of reasons. For one, I'm a data scientist, and I'm intimately familiar with the machinery under the hood. The hype is pushing expectations far beyond the capabilities of the algorithms at work, and OpenAI is heavily incentivized to pump up this hype cycle after the last one flopped, when Bing/Sydney started confidently providing worthless information (i.e., "hallucinating"), returning hostile or manipulative responses, and doing that weird stuff Kevin Roose observed. As a data scientist, I've developed a very keen detector for unsubstantiated hype over the past decade.
I've tried to find examples of ChatGPT doing impressive things that I could use in my own workflows, but everything I've found would, at best, cut an hour of googling down to 15 minutes of prompt writing plus 40 minutes of validating the output.
And my biggest concern is copyright- and license-related. If I use code that comes out of AI assistants, am I going to have to rip up codebases when we discover that GPT-4 or other LLMs are spitting out implementations lifted from codebases with incompatible licenses? How will this shake out when a case inevitably reaches the Supreme Court?
Why do you think that? Competition? Can you elaborate?