Costs could be cut immediately and drastically if OpenAI or Anthropic chose to do so.
Simply by stopping the training of new models, they could be profitable the same day.
The existing models already serve substantial use cases, and there are numerous unexplored improvements beyond the LLM itself, tailored to specific use cases.
This only works if all the AI companies collude to stop training at the same time, since the company that trains the last model will have a massive market advantage. That not only seems extremely unlikely but is almost certainly illegal.
Current frontier models are not good enough: they still suffer from major hallucinations, sycophancy, and context drift. So there has to be at least one more training cycle, and I have no reason to believe it will be the last; GPT-5 demonstrates that transformer architectures are hitting diminishing returns.