Given the amounts being invested in AI, and even just the cost of running an AI service (see how OpenAI loses money on its $200/month subscriber accounts), the "for funsies" services like switching an HTML form over to a chatbot are clearly not a realistic return on this technology. I'd argue that even for code generation the tools can be useful for green-field prototyping, but as long as a developer needs to verify a model's output, they will never produce more than marginal economies in that sector.
The outcome the large companies are banking on is replacing workers: even employees with rather modest compensation end up costing a significant amount once you account for overhead and training. There is (mostly) no AI feature that Wall Street or investors care about except replacing labor; everything else seems to exist as a form of marketing.
One prescient comment came from Eric Yuan (Zoom CEO), who claimed that the reliability and general efficacy issues of AI products would be solved "down the stack", and I think this is more or less the prevailing attitude: firms believe that if they build their LLM/AI products now, the underlying models will catch up over time to meet their requirements.
I've also talked to a number of CTOs and CEOs who tell me they're building their own AI products, nominally to replace human workers, even though they're not confident this will succeed in the foreseeable future. They simply want to be well positioned to capitalize on AI's success if it does happen.