I'm not saying it's a good idea, but software can be (and has been) written to estimate, using heuristics, how likely a reported event is to be real. Petrov said his official training taught that an American first strike would be massive, not just a few missiles; that same knowledge could be encoded in an algorithm.
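To make that concrete, here is a minimal sketch in Python of the kind of plausibility check Petrov applied mentally. The function name, thresholds, and weights are all invented for illustration; they are not from any real early-warning system.

    def attack_plausibility(detected_launches: int,
                            corroborated_by_radar: bool) -> float:
        """Toy heuristic: doctrine held that a real US first strike
        would be massive, so a handful of launches is more likely a
        sensor false alarm. All numbers here are made up."""
        score = 0.0
        # A large salvo matches the expected profile of a first strike;
        # a small count is strong evidence of a false alarm.
        if detected_launches >= 100:
            score += 0.7
        elif detected_launches >= 10:
            score += 0.3
        # Independent corroboration (e.g. ground radar) raises confidence.
        if corroborated_by_radar:
            score += 0.3
        return min(score, 1.0)

    # The 1983 incident: five apparent launches, no radar confirmation.
    print(attack_plausibility(5, corroborated_by_radar=False))  # -> 0.0

In the 1983 scenario this toy score comes out at zero, which is essentially the judgment Petrov made: too few missiles, no corroboration, therefore probably a false alarm.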
Doesn't that suggest the worst possible act of mass slaughter under such a system would be to send a "massive" wave of fake missile drones or dud missiles from the direction of one nuclear power, to provoke retaliatory MAD strikes from a third party?
It would shift the risk to a hackable, spoofable system. The bad actor could then claim "technically I didn't kill millions or billions, you did," and be right, in a way.
Yes, but that's a failure mode of any system, humans included: a realistic false-flag attack can provoke a counter-attack against your real target. The solution is to improve your capacity to distinguish true attacks from false ones, regardless of whether the analysis is done by a human or by software.
I suspect the real danger of an automated system is its speed, and with it, its being beyond anyone's ability to counter: the system may overreact immediately, with nothing to be done to stop it, in a matter where the stakes are ultimate.
https://duckduckgo.com/?q=Russian+officer+prevented+wwiii&t=...