This isn't Elon Musk-style evil AI. This is the more prosaic idea of hackers being able to attack AI systems. For instance, an attacker can put markings on a stop sign (which humans barely notice) to make a Tesla's vision system decide it's not a stop sign, and thereby cause an accident. I recommend reading up on it. This is the kind of thing that can actually happen today.
There are practical applications too: getting around automated filters like copyright or porn detection, or making adversarial CAPTCHAs that fool computers but not humans.
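For the curious, the stop-sign attack boils down to gradient-based adversarial examples. Here's a minimal sketch of the classic Fast Gradient Sign Method (FGSM, Goodfellow et al. 2014) in PyTorch; `model`, `image`, and `true_label` are illustrative stand-ins for whatever differentiable classifier and input you're attacking, not anything from Tesla's actual stack:

    import torch
    import torch.nn.functional as F

    def fgsm_perturb(model, image, true_label, epsilon=0.01):
        """Return a copy of `image` nudged to increase the model's loss.

        epsilon bounds the per-pixel change, which keeps the perturbation
        nearly invisible to humans while flipping the model's prediction.
        """
        image = image.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(image), true_label)
        loss.backward()
        # Step each pixel in the direction that most increases the loss.
        adversarial = image + epsilon * image.grad.sign()
        return adversarial.clamp(0, 1).detach()

Physical attacks like the stop-sign stickers are a more robust variant of the same idea (the perturbation has to survive changes in angle, lighting, and distance), but the core mechanism is this gradient step.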
Your brain is not immune to the problem; it's just hard to automate the creation of optical (and audio, and presumably all other sensory) illusions when we don't have a synapse-resolution connectome of the relevant bits of your brain.
Examples include The Dress, the duck-rabbit illusion, stereotypes, garden-path sentences, and most film special effects.
Ironically, your second link starts with Clever Hans, which is another example of my point. Machines, even organic ones like our brains, are not magically able to know objective reality, and the failure itself isn't idiocy; the rate of failure (and things like metacognition about the possibility of failure) is (at least part of) the intelligence-idiocy spectrum.
"Generally, the vehicle will have a set of sensors to observe the environment, and will either autonomously make decisions about its behavior or pass the information to a human operator at a different location who will control the vehicle through teleoperation."
Not saying that creating military killer robots counts as "trying to do good", however ...
if we're talking about a Terminator-like scenario, let's not forget that the robots in the movie start out as human-designed military hardware.
Possibly by someone trying to do good. More likely by someone just following their rational incentives. Very few people are 'evil' actors; most simply follow incentives.
Look at the ransomware and DDoS botnet epidemics, though. If such programs were to infect and control self-driving cars, all sorts of science-fiction-level bad stuff could happen.
I really hate that smart comment. Two big differences are that with overpopulation on Mars, (1) we would be able to address it before it happened or while it was happening, on a time-scale of decades, and (2) even if there were a total disaster and Mars became permanently unlivable and a lot of people died, we'd still have Earth. Whereas (for those concerned about runaway AI) the time-scale could be minutes, and the stakes are the end of known intelligent life in the universe.