OK, so what if I talk to you and say, "So, I saw Mike the other day in an accident." What do you reply? "Oh, how is Mike? Did Mike hurt Mike?" Or do you reply, "Oh, how is he? Did he hurt himself?"
And adding to this, isn't the whole issue assuming that Mike is male? Imagine training a medical bot that gets stuck in a loop asking about pronouns during an automated 911 call, for instance.
Modern society isn't easy to program for if we're trying to make robots that conform to (Western, minority) "norms".
Going back to my original post: "I kind of think that we need to steer AIs away from trying to talk like humans at all."
The question is: why would I be talking to an AI about an accident involving another person?
Is it a medical AI? Then perhaps it would be better to refer to Mike as "the patient", etc., which is how it would be handled in code.
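Something like this, say. A minimal sketch, purely hypothetical (the names `Person` and `format_followup` are made up for illustration, not from any real bot framework): the bot sidesteps pronouns entirely by repeating the person's role.

```python
from dataclasses import dataclass


@dataclass
class Person:
    name: str
    role: str  # e.g. "the patient", "the caller" - hypothetical roles


def format_followup(person: Person) -> str:
    """Build a follow-up question without guessing gender:
    substitute the person's role wherever a pronoun would go."""
    return f"How is {person.role}? Is {person.role} injured?"


if __name__ == "__main__":
    mike = Person(name="Mike", role="the patient")
    print(format_followup(mike))
    # -> How is the patient? Is the patient injured?
```

Slightly stilted, sure, but it never loops on pronouns and it never misgenders anyone.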
Are you looking to make friends with your AI? Then no, I think you need to go find real humans to talk to, who can navigate complex human social interactions better.
But then you're not speaking to Mike, right? You're speaking about Mike. So unless the person you're speaking with gets offended on Mike's behalf, I don't see what the problem is.