It may or may not have happened with tanks; it sure happened with horses:
To understand how their AI reached decisions, Müller and his team developed an inspection program known as Layerwise Relevance Propagation, or LRP. It can take an AI’s decision and work backwards through the program’s neural network to reveal how a decision was made.
In a simple test, Müller’s team used LRP to work out how two top-performing AIs recognised horses in a vast library of images used by computer vision scientists. While one AI focused rightly on the animal’s features, the other based its decision wholly on a bunch of pixels at the bottom left corner of each horse image. The pixels turned out to contain a copyright tag for the horse pictures. The AI worked perfectly for entirely spurious reasons. “This is why opening the black box is important,” says Müller. “We have to make sure we get the right answers for the right reasons.”

https://www.theguardian.com/science/2017/nov/05/computer-say...
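In case it helps to make the quoted description concrete, here is a rough sketch of what the epsilon-rule variant of LRP looks like for a plain fully connected ReLU network. It is NumPy only, the layer sizes and weights are made-up placeholders rather than a trained model, and it leaves out the refinements (per-layer rules, convolutions, etc.) that a real implementation like the authors' would need:

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy stand-in for a trained model: three dense layers with ReLU in between.
    # (Sizes and weights are made up; real use would load trained weights.)
    layer_sizes = [64, 32, 16, 10]
    weights = [rng.normal(scale=0.1, size=(m, n))
               for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]
    biases = [np.zeros(n) for n in layer_sizes[1:]]

    def forward(x):
        """Forward pass that keeps every layer's activations for LRP."""
        activations = [x]
        for i, (W, b) in enumerate(zip(weights, biases)):
            z = activations[-1] @ W + b
            # ReLU on hidden layers, raw scores at the output.
            activations.append(np.maximum(z, 0.0) if i < len(weights) - 1 else z)
        return activations

    def lrp_epsilon(activations, target_class, eps=1e-6):
        """LRP epsilon rule: propagate the target class's score backwards,
        giving each neuron relevance in proportion to its contribution
        a_j * w_jk to the neurons it feeds."""
        R = np.zeros_like(activations[-1])
        R[target_class] = activations[-1][target_class]
        for l in range(len(weights) - 1, -1, -1):
            a, W, b = activations[l], weights[l], biases[l]
            z = a @ W + b                           # pre-activations of layer l+1
            z = np.where(z >= 0, z + eps, z - eps)  # stabiliser, avoids division by zero
            s = R / z                               # relevance per unit of pre-activation
            R = a * (W @ s)                         # pull relevance back to layer l
        return R                                    # one relevance score per input feature

    # Usage: an 8x8 "image" of random pixels; the relevance map shows which
    # pixels the (here untrained) network leaned on for its top class.
    x = rng.random(64)
    acts = forward(x)
    relevance = lrp_epsilon(acts, target_class=int(np.argmax(acts[-1])))
    print(relevance.reshape(8, 8).round(3))

For an image model you would reshape the input-layer relevance back into image space and plot it as a heatmap; that is exactly the kind of picture that exposed the copyright tag in the horse images.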
There is, in general, a great deal of work on explaining the decisions of neural nets. Explainable AI is a thing, with much funding and research activity, and there are books, papers, etc., e.g. https://link.springer.com/book/10.1007/978-3-030-28954-6.
And all this is because, quite regardless of whether that tank story is real or not, figuring out what a neural network has actually learned is very, very difficult.
One might even say that it is completely, er, irrelevant whether the tank story really happened or not, because it certainly captures the reality of working with neural networks very precisely.
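To see how easily a model ends up being "right for the wrong reasons", here is a toy, entirely synthetic illustration (not from the article): a plain logistic regression trained on fake "images" in which the label is also flagged by a bright watermark pixel in the bottom-left corner, much like the copyright tag on the horse photos. It scores well while the watermark is there and degrades badly once it is removed:

    import numpy as np

    rng = np.random.default_rng(0)
    n, side = 2000, 8                          # 2000 synthetic 8x8 "images"
    X = rng.normal(size=(n, side, side))
    y = rng.integers(0, 2, size=n)

    # The genuine signal: class 1 images are a bit brighter in the centre.
    X[y == 1, 3:5, 3:5] += 0.5
    # The confound: class 1 images also carry a bright "copyright tag" pixel
    # in the bottom-left corner.
    X[y == 1, side - 1, 0] += 5.0

    X = X.reshape(n, -1)
    Xtr, ytr, Xte, yte = X[:1500], y[:1500], X[1500:], y[1500:]

    # Plain logistic regression, trained by gradient descent.
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(500):
        p = 1.0 / (1.0 + np.exp(-(Xtr @ w + b)))
        w -= 0.5 * Xtr.T @ (p - ytr) / len(ytr)
        b -= 0.5 * np.mean(p - ytr)

    def accuracy(Xs, ys):
        return np.mean(((Xs @ w + b) > 0) == ys)

    print("accuracy with the watermark:   ", accuracy(Xte, yte))

    # Blank out the watermark pixel at test time: accuracy drops sharply,
    # because the model leaned mostly on the tag, not on the "horse".
    Xte_clean = Xte.copy()
    Xte_clean[:, (side - 1) * side] = 0.0
    print("accuracy without the watermark:", accuracy(Xte_clean, yte))

Nothing in the training or test accuracy alone tells you the model is broken; you only find out by inspecting what it actually relies on, which is the point of tools like LRP.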
Incidentally, (human) kids should never be allowed to hug sheep or goats like that. They can easily catch something nasty (enterotoxigenic E. coli, mostly). See e.g.: