Because it objectively is. Do you use AI tools? The output is clearly and immediately useful in a variety of ways. I almost don't even know how to respond to this comment. It's like saying, “Computers can be wrong; why would you ever trust the output of a computer?”
But you needed to tell it that it was wrong 3–4 times. Why do you trust it to be “correct” now? Should it need more flogging? Or was it you who were wrong in the first place?