Even with a junior, there is generally a logic to the mistake and a fairly direct path to improving in the future. I just don't know if "the next token was statistically chosen to be x" is ever going to get to that level.
Good thing LLMs aren't glorified statistical models, then, eh?
Anyway, why wouldn't there be? You reach out to the parent company with an issue and a request for improvement. If you're a big enough client, your request gets prioritized higher.
Same as with any other product that's part of your product today.
Applying an LLM today isn't straight-up text in, text out; it has become more complex than that. Enough that the product can be improved without improving the underlying model itself.
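For instance, here's a minimal sketch of what I mean (every name in it is a made-up stand-in, not any real API): the retrieval, the prompting, and the output validation around the model can each be tuned without retraining anything.

```python
# Sketch of the layers a typical LLM product wraps around the model itself.
# call_model and search_docs are hypothetical stand-ins, not real APIs.

def call_model(prompt: str) -> str:
    """Stand-in for the actual LLM call."""
    return f"[model output for: {prompt[:40]}...]"

def search_docs(question: str) -> list[str]:
    """Stand-in for a retrieval step over the customer's own data."""
    return ["relevant snippet 1", "relevant snippet 2"]

def answer(question: str) -> str:
    # 1. Retrieval: ground the model in internal docs, tickets, code, etc.
    context = "\n".join(search_docs(question))
    # 2. Prompting: system instructions and formatting live outside the model.
    prompt = (
        "Answer using only the context below. Say 'unknown' if unsure.\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )
    draft = call_model(prompt)
    # 3. Validation / retry: rework weak outputs before the user ever sees them.
    if "unknown" in draft.lower():
        draft = call_model(prompt + "\n\nBe more specific if the context allows it.")
    return draft

if __name__ == "__main__":
    print(answer("How do we rotate the staging API keys?"))
```

Each of those three steps is where a vendor can act on a client's complaint: better retrieval, a better prompt, a stricter validation pass, with the model weights untouched.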