I agree: humans generally don't tolerate mistakes from machines unless the mistakes resemble those a human would make. And people generally don't recognize their own intellectual shortcomings relative to others (other human cultures, non-human animals, machines) as long as those shortcomings are common within their peer group. It's unremarkable to be unable to memorize long passages in a literate culture, but it's an intellectual impairment to have a hard time distinguishing similar symbols like U and V. In an oral culture the relative importance of memory and visual symbol disambiguation is reversed.
It's not totally illogical, either; machine-specific failure modes present real problems. Say you wanted to offload content filtering to an AI and have it weed out sexually explicit or graphically violent images. An AI-based filter can be fooled by adversarial input far more easily than a human moderator: a perturbation invisible to people can flip the model's classification entirely.
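To make that concrete, here's a minimal sketch of the classic attack, the fast gradient sign method (FGSM, Goodfellow et al. 2014), assuming a differentiable PyTorch classifier; the function name and epsilon value are illustrative, not anyone's production filter:

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, label, epsilon=0.03):
    """Return an adversarially perturbed copy of `image` (FGSM sketch).

    image: tensor of shape (1, C, H, W), values in [0, 1]
    label: tensor of shape (1,) holding the true class index
    """
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Nudge every pixel by epsilon in whichever direction most increases
    # the loss. The change is imperceptible to a person, but it's often
    # enough to push the classifier across a decision boundary.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()
```

The asymmetry is the point: no human moderator fails this way, so a single leaked perturbation recipe could let arbitrary banned images sail past the filter.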