That might be one of the worst ideas I've ever heard. Do you really want to be near a nuclear weapon that is controlled by an opaque, unexplainable process? Even without the insane unobservable complexity of machine learning, do you want to be near a nuclear weapon controlled by a simpler, theoretically explainable expert-system "AI"?
Safety-critical systems - including nuclear bombs - need to be simple, with realistically understandable, deterministic behavior. The answer to "what would our nuclear weapons do in ${any_hypothetical_situation}" should never be "I don't know".
Even worse, if an "AI" is involved (by any definition of "AI"), the control system involves a Turing-complete language. Asking questions about the behavior of the control system for a given set of inputs probably shouldn't require solving the Halting Problem.
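To make that concrete, here is a toy sketch; the "controller" and its rule are invented for illustration. Even this tiny, fully deterministic loop resists a general answer to "does it always return?": for this particular rule that is the open Collatz conjecture, and for arbitrary programs it is the undecidable halting problem.

    def controller(n: int) -> str:
        """Toy deterministic 'control policy'. Whether this loop halts for
        every positive input is the Collatz conjecture - an open problem."""
        while n != 1:
            n = n // 2 if n % 2 == 0 else 3 * n + 1
        return "stand down"

    # For any single input we can just run it...
    print(controller(27))  # "stand down", after 111 steps

    # ...but "does controller() return for ALL inputs?" has no known answer,
    # and for arbitrary programs that question is the halting problem.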
edit: I suspect these so-called experts need to read the report[1] from the Therac-25 investigation.
Lol... Terminator: Dark Fate, sub 1, the parallel. Never before has a movie shown events that are simultaneously manifesting in reality.
Come on down this fall, it's going to be scorching out. I mean the sky will literally be on fire as your mind explodes from both the radiation and the insane surrealism that was Terminator!
According to MAD principles, a "safe" deterrent is less effective, because your adversary must always believe you are capable of delivering retaliation. A perfectly safe system may prevent that retaliation and invalidate the deterrent effect. This issue applies to all technology and human systems connected with nuclear weapons. The outcome of nuclear war is so horrific that no technology would ever be good enough.
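To illustrate the logic with a toy expected-payoff calculation (all numbers are invented; this is a sketch of the deterrence argument, not a real model):

    # Toy deterrence math. All payoffs are invented; "p" is the adversary's
    # belief that your retaliation system actually fires back.
    FIRST_STRIKE_GAIN = 100      # hypothetical payoff if retaliation never comes
    RETALIATION_COST = -10_000   # hypothetical payoff if it does

    def expected_payoff_of_striking(p: float) -> float:
        return (1 - p) * FIRST_STRIKE_GAIN + p * RETALIATION_COST

    for p in (1.0, 0.5, 0.01, 0.0):
        print(f"belief in retaliation = {p:4.2f} -> "
              f"expected payoff of striking = {expected_payoff_of_striking(p):8.1f}")

    # As perceived retaliation probability falls, the expected payoff of a
    # first strike turns positive - the "safer" system stops deterring.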
The solution is to eliminate them entirely and stop creating such insane choices.
> The solution is to eliminate them entirely and stop creating such insane choices.
That solution entails magically ensuring that no one else has such weapons or will develop them.
If we had a way of doing that, we wouldn't even need armies protecting borders. You'd still need minimal border security to ensure individual intruders don't make it through, but why maintain a hugely expensive army if you could ensure that nobody else could create an army to invade you (or anyone else)?
Personally, I don't see long-term national security as possible with nukes either. Sooner or later a bad actor will be in power, or the weapons will simply be launched by accident. And large countries are not guaranteed to be politically stable; these large nations can and do disintegrate.
I think the risk of obliteration without nukes is less bad. I would rather see my country destroyed in a first strike or invaded by a foreign power; that would be less bad than a nuclear war.
The funny thing is that the USA could easily be defended from conventional attack, and a small sub-based deterrent would be good enough. There is also a risk of stoking paranoia in your enemy by keeping large stockpiles. The vast majority of nations have far less conventional defence than the US and still don't want nukes.
I mean, wasn't that the point of having nuclear submarines? They can't easily be located, but can deliver enough of a retaliation to bring your victory party to an abrupt end (separately from realizing that the fallout from nuking your enemies is now drifting towards your country anyway, thanks to the weather).
From the fine article you linked (for which link I thank you): "Time compression has placed America’s senior leadership in a situation where the existing NC3 system may not act rapidly enough. Thus, it may be necessary to develop a system based on artificial intelligence, with predetermined response decisions, that detects, decides, and directs strategic forces with such speed that the attack-time compression challenge does not place the United States in an impossible position."
Had the Soviet Union had such a system in place in September 1983, when Stanislav Petrov judged a satellite warning of incoming US missiles to be a false alarm, human civilization would not be around for us to be having this discussion.
If the situation is so dire as to call for serious discussions about AI control of nuclear weapons, it is dire enough for us to initiate, sponsor, and encourage global discussion over massive multilateral nuclear weapons reduction.
Automate defensive systems; don't automate the offensive launch of nuclear warheads. Whoever came up with that idea should be fired. You have to know what the taboos are, what lines not to cross.
What’s next, simulated casualties are exchanged for actual casualties? I think one Star Trek series had a plot reaching that logical conclusion (the original series' "A Taste of Armageddon", where citizens "killed" in a computer-simulated war reported for real execution).
In the past 100 years, there have been several incidents where humans have averted war by refusing to follow orders. There have been far fewer incidents of humans causing nuclear war by either following or disobeying orders.
> There have been far fewer incidents of humans causing nuclear war by either following or disobeying orders
True - but if you add the words "nearly" and "allegedly", then it's not such a rosy picture.
The book "Red Star Rogue" proposes the idea that the Soviet submarine K-129 was commandeered and attempted to launch a first strike on the United States in 1968. She was very far off her expected course when sunk, and allegedly there was a fail-safe system that scuttled the boat when an unauthorized attempt was made to launch her missiles.[1]
The US's nuclear weapons were capable of being used by a single individual for quite some time, until the 1960s. Even with the advent of the Permissive Action Link, until 1977 the 8-digit arming code was - seriously - "00000000". They were worried they wouldn't have the code available if the weapons were needed.[2]
There are numerous "close calls" that occurred, but those two came to mind right away in the context of human beings causing or having nearly caused a nuclear attack by disobeying orders in particular.
Maybe it’s some form of argument where you present three hard-to-swallow options (“Admittedly, each of the three options — robust second strike, preemption, and equivalent danger — has drawbacks” [1]), and then you pretend to offer a better alternative, which is actually so preposterous that the first three options start looking perfectly reasonable.
Theory: they know something about AI that we don't, and because of what they know, decided to scare all the world's AI practitioners into getting their shit together at the same time. Alternate theory: they are idiots. Alternate theory number two: they know something about AI and a lot about nuclear doctrine that most people don't, and they are actually onto something...
Why not remove PAL (the Permissive Action Link) and delegate authority to local commanders? I'd rather take the risk of trusting a human over a human-designed, highly complex computer system.
In the UK, due to the lack of land-mass to absorb a strike, the PM issues final orders to the commanders of the Trident subs. This is part of the deterrence strategy.
The article on Letters of Last Resort made me think how absurd democracy can be at times.
It looks like PM May wrote her Letters of Last Resort, and then Johnson did the same. And from the looks of it, new Letters of Last Resort could be written pretty soon. Just thinking of those sub commanders: receiving new letters last week and then getting another set just a couple of weeks later.
The proposal is indisputably harebrained, but the question is why the authors decided to publish it. One of the authors is a former US Air Force colonel.
And in particular, why did they choose to propagate falsehoods like "Hypersonic cruise missiles are powered all the way to their targets using an advanced propulsion system called a SCRAMJET. These are very, very, fast. You may have six minutes from the time it’s launched until the time it strikes." Lest there be any confusion: scramjets are fast, but nowhere near as fast as ICBMs (Mach 5-10 vs Mach 20-23).
> Lest there be any confusion: scramjets are fast, but nowhere near as fast as ICBMs
I agree that's weird phrasing and incorrect on its face. That said - as I understand it, cruise missiles don't have the launch signature ICBMs have. While it would take longer for a cruise missile to travel the same distance as an ICBM, it might in fact give the target country less time to react if they aren't detected until they are well on their way.
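Some rough back-of-the-envelope arithmetic supports that reading (the distances are invented and the sea-level speed of sound is assumed; real values vary with altitude and trajectory): the "six minutes" only falls out of a short-range or late-detection launch, not out of raw speed.

    # Back-of-the-envelope timings. Assumed numbers: sea-level speed of
    # sound and invented distances, purely for illustration.
    SOUND_KM_S = 0.343  # approx speed of sound in km/s

    def flight_time_minutes(distance_km: float, mach: float) -> float:
        return distance_km / (mach * SOUND_KM_S) / 60

    # ICBM: ~Mach 20 over an intercontinental ~10,000 km arc
    print(flight_time_minutes(10_000, 20))  # ~24 minutes
    # Scramjet cruise missile: ~Mach 5, first detected ~600 km out
    print(flight_time_minutes(600, 5))      # ~6 minutes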
I came here to see if someone already mentioned Colossus.
Funny trivia: parts of that movie were shot in Rome, my city; you can see the Vatican, St. Peter's, the Orange Garden, the Janiculum and Isola Tiberina, specifically the hospital "Fatebenefratelli", where I was born.
The Americans meet the Russians there, joining forces to stop Colossus, and you can see, written on a wall, a mirrored "W Lenin".
The writing was actually there before they shot the movie; it was not added on purpose, and it stayed there until not long ago.
I'm not saying it's a good idea, but software can and has been written to evaluate the likelihood of the validity of certain facts using heuristics. Petrov said the official training taught that an American first attack would be massive, not just a few missiles; that same knowledge could be used in an algorithm.
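For illustration only, here is a minimal sketch of what encoding that doctrine as a heuristic might look like; every threshold, weight, and field name below is invented:

    # Invented sketch of "encode the doctrine as a heuristic". Thresholds,
    # weights, and field names are all made up for illustration.
    def attack_plausibility(track_count: int, corroborated_by_radar: bool) -> float:
        """Crude score in [0, 1]; doctrine: a real first strike is massive."""
        score = 0.0
        if track_count >= 100:       # "massive, not just a few missiles"
            score += 0.7
        elif track_count >= 5:
            score += 0.2
        if corroborated_by_radar:    # independent sensor agreement
            score += 0.3
        return min(score, 1.0)

    # The 1983 false alarm looked like: five tracks, no radar confirmation.
    print(attack_plausibility(track_count=5, corroborated_by_radar=False))  # 0.2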
Doesn't that suggest the worst possible act of mass slaughter in that system would be to send a "massive" array of fake missile drones or dud missiles from the direction of a nuclear power, to spark dual MAD strikes from a third party?
It would shift it to a hackable, spoofable system. Said bad actor could say "technically I didn't kill millions/billions, you did" and be right, in a way.
Yes, but that's a failure mode of any system, humans included: a realistic false-flag attack can provoke a counter-attack against your real target. The solution is to improve your capacity to distinguish true attacks, regardless of whether the analysis is made by a human or by software.
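As a hedged sketch of what that could mean in practice - refusing to escalate on any single sensor modality - with hypothetical modality names:

    # Hedged sketch: never escalate on a single sensor modality. The
    # modality names are hypothetical.
    def escalate(detections: dict[str, bool]) -> bool:
        """Require at least two independent modalities to agree."""
        return sum(detections.values()) >= 2

    print(escalate({"infrared_satellite": True, "ground_radar": False,
                    "over_horizon_radar": False}))  # False: single-source alarm
    print(escalate({"infrared_satellite": True, "ground_radar": True,
                    "over_horizon_radar": False}))  # True: corroborated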
I suspect the real danger of an automated system is its speed, combined with it being beyond the ability of others to counter - meaning the system may overreact immediately, with nothing to be done to stop it, in matters of ultimate stakes.
I don't see how faster ballistic missiles change the game. In the past you could still mount surprise attacks and, for example, eliminate the chain of command or the lines of communication. And there are reports that suitcase nuclear devices were developed.
However, what ultimately fixed this issue in the 60s was the adoption of nuclear submarines: you can still surprise the adversary, but because you cannot find the subs, the retaliatory strike is assured.
Čapek warned us of this in 1921, in R.U.R. Give a machine intelligence enough to be dangerous, without the emotional safeguards of humans, and disaster is certain to follow.
AI-controlled weapons... Holy shit... Stupid ideas like that make me contemplate moving to the middle of the Pantanal or the Amazon jungle to live off the land, away from all this madness.
I think Congress needs to pass legislation banning such an insane, stupid idea. And we should stop calling it "artificial intelligence," as if some software is actually intelligent in a way comparable to humans. This is just software, and as with all software, it has bugs. Trusting it not to start a nuclear war is essentially saying, "this software, which we can't even explain how it works (because AI), is bug-free." Insane. This whole AI movement is dangerous, not because something like HAL or the Terminator will be created, but because stupid people like these will convince the government to put such weapons under the control of buggy software that no one, including its creators, understands, with horrific consequences. All so we can make sure that the whole world is destroyed in case of nuclear war. The insanity is truly mind-boggling.
Congress isn't gonna pass legislation to ban the military from suggesting stupid ideas, or the executive branch from approving them, because that itself would be a stupid idea that reduces American defence readiness and suppresses worthwhile ideas as well as bad ones.
> as if some software is actually intelligent comparable to humans
How's your Go game coming along over there? (Have you ever even played? Because if you had I think you'd have more respect for the intelligence required (for a human) to succeed even moderately at the game.)
> not because something like Hal or the terminator will be created
Actually it is pretty dangerous for those reasons too.
Playing Go or chess or any other game isn't intelligence. The computer isn't intelligent; it's programmed for one task and one task only. That's great programming and use of statistics, etc., but it's not intelligence.
Anyway, if you're worried about Terminator- and HAL-like beings, it's futile to discuss intelligence when you are living in a science-fiction world. In our world, those things do not exist, and there is no sign that they will exist anytime soon. What does exist, and is scary, is bug-ridden software that controls weapons. That doesn't make for great science fiction, though.
If your private definition of intelligence excludes everything machines have already been made to do, I suspect it is indeed pointless to discuss machine intelligence with you.
Personally, I'm not hung up on whether or not submarines can swim. If you are, please give me your clear definition of intelligence and explain how Go and chess and "any other game" are excluded by it.
If submarines are intelligent then cars and bikes must be intelligent also since they operate on similar principles. If a bike is intelligent, then pogo sticks are intelligent too. I suppose at that point, anything can be intelligent. What kind of intelligent things will bikes and cars and pogo sticks and submarines do if all humans ceased to exist? Let me guess: none.
I was referring to the famous quote from Dijkstra:
"The Fathers of the field had been pretty confusing: John von Neumann speculated about computers and the human brain in analogies sufficiently wild to be worthy of a medieval thinker and Alan M. Turing thought about criteria to settle the question of whether Machines Can Think, a question of which we now know that it is about as relevant as the question of whether Submarines Can Swim."
Nobody is suggesting submarines can think. The actual point that Dijkstra was making is that it matters what the machine is capable of, but it doesn't matter what we call the thing that it does.
If your private definition of intelligence doesn't allow you to apply it to machines, that's irrelevant.
This must surely be an instance of "add 'AI' to whatever topic" in order to get attention, rather than a serious proposal, right? I'm sure we'll see "put nuclear weapons on blockchain" soon.
> Unlike the game of Go, which the current world champion is a supercomputer, Alpha Go Zero, that learned through an iterative process, in nuclear conflict there is no iterative learning process.
What the flying fuck? We don't even understand how modern AI algorithms actually work and how they decide things. And we want to give such black boxes the power to annihilate all life on Earth?
I think the offense industry will read your comment and get all big-headed about how much stuff they think they can kill. They can definitely kill off Homo sapiens, but earth life will go on. There are still a few billion years left in the sun's main sequence. Maybe something more compassionate will evolve out of the leftovers.
[1] http://sunnyday.mit.edu/papers/therac.pdf