When a toddler can pull the trigger and kill someone, you may argue guns are pretty good at killing. The key point being, people don't have to be good with guns to be good at killing with guns. Pulling the trigger is accessible to anyone.
How often does that actually happen? Only when a gun owner was irresponsible and left a loaded gun somewhere a toddler could reach it.
Similarly, AI can easily sound smart when directed to do so. It typically doesn't take action unless authorized by a person. We're entering a time where people may soon be willing to grant that permission on a more permanent basis, which I would argue is still the fault of the person making that decision.
Whether you choose to have AI identify illegal immigrants, or you simply decide all immigrants are illegal, the decision is made by you, the human, not by a machine.
Not the OP, but my best guess is it’s an alignment problem, just like a gun killing someone the owner didn’t intend to. So the power of AI to make decisions that are out of alignment with society’s needs is the “something, something.” As in the healthcare examples above, it can be very efficient at denying healthcare claims, and the lack of good validation can obscure alignment with bad incentives.
I guess it depends on what you see as the purpose of AI. If the purpose is to be smart, it’s not doing very well. (Yet?) If the purpose is to deflect responsibility, it’s working great.