Please don't talk about murdering the person you're talking to. It's intended to provoke a painful emotional reaction.
I'll clarify my comment, though. The object is really a model shaped like a turtle but with pictures of rifle parts on it. It's neither a rifle nor a turtle. Both the human and the computer are too confident in their classification, and both are just as wrong.
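To make the "textured" part concrete, here is a minimal sketch of how a targeted adversarial perturbation is usually produced (assuming PyTorch/torchvision and a pretrained ImageNet classifier; the actual turtle work used a more robust expectation-over-transformation attack so the texture survives 3D printing and viewpoint changes, this only shows the basic gradient idea):

    import torch
    import torch.nn.functional as F
    import torchvision.models as models

    # Pretrained ImageNet classifier (a stand-in for whatever model was attacked).
    model = models.resnet50(pretrained=True).eval()

    def targeted_attack(image, target_class, eps=0.03, steps=40, lr=0.005):
        """image: (1, 3, H, W) tensor in [0, 1]; target_class: ImageNet class index.
        Input normalization is omitted for brevity."""
        delta = torch.zeros_like(image, requires_grad=True)
        target = torch.tensor([target_class])
        for _ in range(steps):
            logits = model(image + delta)
            # Descend on the loss of the *target* class so the prediction drifts toward it.
            loss = F.cross_entropy(logits, target)
            loss.backward()
            with torch.no_grad():
                delta -= lr * delta.grad.sign()
                delta.clamp_(-eps, eps)  # keep the perturbation small enough that it still looks like a turtle
                delta.grad.zero_()
        return (image + delta).clamp(0, 1)

Point this at a photo of a turtle with the "rifle" class as the target and you get the 2-D version of the same confusion: the shape stays a turtle, the texture is nudged just enough to flip the classifier.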
Using your analogy: you could actually hide a gun inside a lunchbox and fool humans.
That doesn't follow, because the pistol painted so that it classifies as a lunchbox is still a pistol.
The "Turtle" is a plastic replica of a turtle not a real turtle. It's the treachery of images idea - "Ceci n'est pas une pipe."
Humans see its form and recognize the plastic replica as a representation of a turtle because we prioritize its shape over its surface texture. That seems more correct to us, but I'm not sure it really is more correct in some objective way. In this case I suppose you could say it is, because a turtle is what we mean for it to represent, but the test seems rigged in favor of human-style visual classification.
I think an interesting question is what adversarial attacks exist on human vision that may not affect a machine (certain optical illusions?). If we're also vulnerable to this kind of manipulation, then it may not be something unique to computer vision; we may just be picking test cases that we're better at. Then it's just a matter of trade-offs and deciding when we want human-style classification.
I agree, but it's an unfair test - it was designed to confuse the computer and not the human.
For a counterexample: imagine that you make a toaster that looks exactly like a pistol, but it actually just toasts bread.
A human would think it's a pistol when looking at it (so would the machine in this case). There may be adversarial examples where the human classification is worse than the machine's, if you specifically try to make examples that are bad for the human visual system.