Artificial intelligence is changing every aspect of war (economist.com)
121 points by johnny313 on Sept 8, 2019 | hide | past | favorite | 65 comments


Just wait until somebody thinks it's a good idea to declare war based on the advice of an AI. That scares me more than anything.

Or launching nuclear weapons based on the advice of an AI (or allowing AI to control them altogether). All-out nuclear war has so far been avoided multiple times because an actual human made a split-second decision at the right moment.


Human civilization will die an embarrassing death. Maybe we will even earn the equivalent of a Darwin Award on Gwelkklak 10, "Died by Own Avoidable Science Fiction Trope".

Maybe the answer to the Fermi Paradox is, "tripped on shoe laces while scoring +10 on the sosh meeds".


You don’t know that, so why bother speculating as if you do?


Are you really saying that people shouldn’t use their imagination, or talk or think about things they are not 100% sure of?

Because if that’s what we did then no need to worry about getting back to the Stone Age, we would have never left it anyway... Man’s ability to dream, imagine and explore the unknown both physically and intellectually is what got us where we are.


"Human civilization will die an embarrassing death."

The language used here isn't speculation, it's absolute. If the author were to prefix the statement with speculative terms, I wouldn't have said a thing.

The remark came across as highly negative, unsubstantiated and not useful so I called it out.

Edit: Downvotes? You're going to have to do better than that before I change my mind.


> The remark came across as highly negative

Probably because it was.

It was also a reply to a comment about a hypothetical future, which sort of implies speculation for all child comments.

And how can something be absolutely embarrassing? What is and is not embarrassing is an opinion. How would someone word that in a way that fits your requirements?


The comment doesn’t need to fit my requirements. It surprises me that the Internet now feels like it needs to convince people to think a certain way.


Dr. Strangelove, Skynet in The Terminator: all of this has been talked about many times before. The truth is that most nukes are probably already automated to ensure the concept of mutually assured destruction works.


You don't even have to automate all that to achieve the goal of destroying human civilization as we know it: the doomsday machine already existed in practice, simply by delegating (human) responsibilities, and has been effective since thermonuclear bombs were first stockpiled. It's documented in the book: https://www.bloomsbury.com/us/the-doomsday-machine-978160819...


> The truth is that already most of nukes are probably automated to ensure the concept of mutually assured destruction works.

Why do people say things like this? They aren't. There is still a human-in-the-loop for all systems I'm aware of.

The problems with nukes are that (a) the command and control systems can't reliably prevent one person or a small number of people from going rogue, and (b) the 'human' in the loop has to make a quick decision whether to launch based on potentially crappy inputs.

The latter case has happened enough in the past that it should be pretty clear it's a serious problem. See https://en.wikipedia.org/wiki/List_of_nuclear_close_calls for a list.


But really, as long as everyone believes in MAD, there is really no reason to carry it out. Engineers working on these systems could quite reasonably, and in good faith, install break points and avoid meaningless loss of life. Then the only MAD system which actually needs implementation is the one triggered if one of those engineers talks.


But for everyone to believe it, it has to be an at least reasonable belief. Luckily, that's a low-ish bar, with fairies being a somewhat common belief too (I have no numbers but would like them...). So those engineers, for those same good-faith reasons, install redundancies. It's a balance that may tip positive (everything working), neutral (nothing happens), or negative (everything sabotaged by an enemy actor), and we'll only know after the trigger has been pulled.


Pretty sure they took the "men out of the loop" back in 1983 ... https://www.youtube.com/watch?v=5bF1_PGMAj0


"Or launching nuclear weapons based on the advice of AI"

That's the crazy thing about hypersonic nuclear weapons: a bomb going from Russia to America in under 5 minutes. Who else is going to decide but an algorithm, knowing world peace is partly ensured by mutually assured destruction? :s

(To be clear, I'm for a moratorium on hypersonic nuclear warfare, for sure.)


If we play fast and loose with the definition of AI, we've come close already with Operation RYAN (https://en.wikipedia.org/wiki/RYAN):

> By May of 1986, these binders had evolved into a catalogue of 292 indicators of “signs of tension.” - https://www.wilsoncenter.org/publication/forecasting-nuclear...

I'm not sure how much of it was computerized, but it basically took a bunch of signals and fed them into an algorithm to tell whether the US was about to nuke them.
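The "signals fed into an algorithm" idea can be sketched as a simple weighted checklist. To be clear, the indicator names, weights, and threshold below are invented for illustration; they are not RYAN's actual 292 indicators or its real scoring method (if it even had one this simple):

```python
# Hypothetical sketch of a weighted-indicator warning system.
# All names, weights, and the threshold are made up for illustration.

INDICATORS = {
    "embassy_lights_on_late": 0.2,
    "blood_bank_stockpiling": 0.5,
    "leadership_relocation": 0.9,
    "military_comms_volume_spike": 0.6,
}

ALERT_THRESHOLD = 1.0  # arbitrary cutoff for this sketch


def threat_score(observed):
    """Sum the weights of every indicator currently observed."""
    return sum(w for name, w in INDICATORS.items() if name in observed)


def attack_imminent(observed):
    """Flag an alert when the combined score crosses the threshold."""
    return threat_score(observed) >= ALERT_THRESHOLD


print(attack_imminent({"embassy_lights_on_late"}))  # False
print(attack_imminent({"leadership_relocation",
                       "military_comms_volume_spike"}))  # True
```

The scary part, of course, is that any fixed threshold like this turns noisy inputs into a binary "they're about to attack" signal, which is exactly the failure mode the close-calls list above documents.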


Command and Control: Nuclear Weapons, the Damascus Accident, and the Illusion of Safety is a great book about how close you can get to launching a war unintentionally.


It is also a chilling documentary on Prime and probably YouTube. Just think how close we came to losing part of Arkansas and Tennessee... and then to find out how common this was.



Just wait until somebody thinks it's a good idea to declare war, based on the advice of someone in their cabinet


Maybe I'm a naive optimist, but I doubt that will ever happen (at least not until we have AGI). We as a species don't really like to give up control. See, for example, autopilot: we've had the tech to fly planes more safely than human pilots for decades now, yet we don't use it fully because we don't like the feeling of not being in control. I can't imagine a group of people deciding to hand control of a WMD to a computer.


It may also be the need to blame someone (the pilot, in this case) when a disaster happens.


We've almost had several nuclear weapons launched due to flaws in human decision-making, two that were actually used with the input of humans, and a ton of wars launched with the input of humans. What about an AI being involved automatically makes the situation so much worse?


Look at any of the "flash crashes" of recent years, where out-of-control AI escalated the problem much faster than humans could react to it. Just take that and imagine that there are nuclear weapons involved.


Ok. None of those have had any long term negative outcomes. What was I supposed to imagine?


A stock dipping isn’t so bad because it can go back up. A nuclear missile cannot unexplode.


And professionals price the downside into those trades. You don’t engineer trades the same way you do nukes, because it literally doesn’t matter if you get them wrong, as the history of “flash crashes” has shown.


A bit easier to recover from a market dip than a nuclear strike, don't you think?


It wasn't even that you recovered from the dip; the trades were actually busted (invalidated).

Can't unexplode a bomb, for sure.


Trades are not routinely busted during flash crashes.


They were in "the" flash crash.


It had some pretty long term negative effects for Knight Capital.


Knight didn’t suffer from a flash crash. They suffered from an operational risk incident. This is literally something that professionals price into their trades. The long-term negative ramifications were nothing: the next day a new firm was running the same trades under the same name.


I thought Knight got bought out a year later?


And for those few failures, think of the trillions of decisions that machines have made that did no harm but made lives better than before machines were doing work for us.


We understand pretty well how people work and think, their edge cases, and the countermeasures to those edge cases.

We don't understand edge cases of AI much at all, and the terrain-space of "possible minds" they might be is vastly broader than that of a human mind, and utterly outside our experience.

An AI might work with absolute 100% flawless precision for 100 years, then decide one day to kill everyone. It's a lot harder to see that coming from a totally alien intelligence than from a human one. AI does not think like humans, have human values, or care about what humans care about, which might make it extremely unpredictable and dangerous.

There will be no time for iterative development or learning from our mistakes; a sufficiently advanced AI knows we would try to stop it, and will make sure that we never can.

https://www.youtube.com/watch?v=tcdVC4e6EV4


The reality is, no state will be able to deter a major adversary in a future war without automation being a significant part of how decisions are made. Conventional (i.e., non-nuclear) offensive and defensive capabilities across air, space, land, maritime, subsurface, and cyberspace move too quickly to rely on "old" methods of defense and attack.

Modern technology also allows us to be faster, cheaper, safer, and more accurate, things that everyone wants from militaries worldwide. The goal of automation/optimization is as few and as short engagements as possible.

The only thing better than no war is a short one.


This was pretty much exactly the reasoning behind the US military dropping the nukes on Japan at the time. They did their modeling and predicted very heavy casualties on both sides from taking the islands one by one, versus the nukes making Japan rethink the entire war.

I think all wars are tragic and am not sure that I agree dropping the nukes was a good idea (I’m a US Army combat vet of OIF II, btw), but this comment seems eerily familiar.


Indeed, and Curtis LeMay, head of the firebombing campaign against Japan and later head of SAC, the nuclear arm of the US Air Force, was the primary proponent of the theory that war should be so bad and brutal for the purpose of ending it swiftly.

"The New York Times reported at the time, "Maj. Gen. Curtis E. LeMay, commander of the B-29's of the entire Marianas area, declared that if the war is shortened by a single day, the attack will have served its purpose." This view was later echoed by Japan's former Prime Minister Fumimaro Konoe, who said, "The determination to make peace was the prolonged bombing.""

War is hell, as you saw first hand, and those of us who are charged to execute it want it to end as quickly as possible.


I have a feeling some of the couple hundred thousand people killed in Hiroshima and Nagasaki would have preferred the war to last one more day if it could have meant killing, say, 100K fewer civilians. If there were such tradeoffs to be made.

But I suppose Maj. Gen. LeMay would have preferred killing additional hundreds of thousands of civilians if it could have meant saving the life of one American soldier.

Just different perspectives I guess.


The requirement for unconditional surrender prolonged the war. Many arguments could be made, but unconditional surrender was also necessary to prevent the Japanese government from planning future wars, by restructuring their society.

Grand strategy involves decisions for the welfare of societies for a century into the future.


“I do not personally regard the whole of the remaining cities of Germany as worth the bones of one British Grenadier.”


War parties want to dominate their enemies and win a given conflict as quickly as possible, but historically are quick to pick a new fight. The dust and horror of Hiroshima and Nagasaki were barely settled before the USA rushed right back into another armed conflict in Asia. For the hawks, there is always another “necessary” conflict around the corner that justifies a new arms race and more flawed rationalizations for war atrocities.


The United States did not initiate the Korean War.


Japan had been suing for peace for over a month. In fact they had pretty much agreed to all American demands except the status of the emperor.


The Japanese surrendered when the Soviets entered the Pacific theater.


> Modern technology also allows us to be faster, cheaper, safer and more accurate, things that everyone wants from militaries worldwide.

You have no idea what you're talking about do you?

> The goal of automation/optimization is as few and short of engagements as possible. The only thing better than no war is a short one.

Like Afghanistan, the longest US war ever, with no foreseeable end in sight? That kind of short engagement?


I thought we all agreed -- the only winning move is not to play.


As an aside, “Fail Safe” (1964), is one of the best movies ever made, IMHO, about the risks of technology and warfare. Once you get past the slow pacing, the black and white... it’s an incredible film. It’s based on the same book as Dr Strangelove — but just takes a completely different, dramatic take on the material. You also get a chance to see why Henry Fonda is considered such a legendary actor. Sidney Lumet, the director, is also a legend: “Dog Day Afternoon”, “12 Angry Men”, “Network”...


Technologists need to realize that arms races will doom our children. The solution is international agreements to not build horrifying weapons, not subscribing to fear mongering that demands arms races.


Exactly. Which is why we never had WWII because everyone at the end of WWI realized this and signed treaties limiting their arms. Unfortunately, that is not what happened.

The problem with this is that the more arms control we have, the greater the benefits of cheating for any one party.


WWII didn't happen "because one party was cheating." Everybody knew that Hitler was building a huge army. The West allowed him to do that for years because they believed it would affect not them but only that "other" land that they didn't like.




Slaughterbots... is pretty terrifying to see again.


The future is gonna be just great, right?

This is the stuff that terrifies me. It's a mistake to think we'll never find ourselves in the middle of a warzone just because we live in a rich country.


It was over morally for the human species the moment we ended up with a system whereby a single person could decide upon the extrajudicial killing of another person via remote control, and we did not collectively revolt. AI is irrelevant. Once we have unleashed the mechanized killing of people via the black box, it doesn't really matter what layers of systems we build atop it. Without the moral integrity to rebel against the clearest and most basic form of this system, we have already given ourselves over to it.


When was this moment, sometime briefly after the invention of agriculture?


Agriculture was invented fairly recently, around 8000 BC. Homo sapiens had been living in groups for tens of thousands of years before that. One of those group leaders would no doubt have ordered the death of a competing group. No doubt that is what GP wanted us to revolt against.


That is not what I meant. Though the same impulse is perhaps at work, IDK.

The reason I find the modern version of this as adequate condemnation of our species is several fold.

1) We profess a belief in human rights that goes back to our most basic education. Most people in the West are indoctrinated with the values of human rights.

2) We worship at the altar of those rights. We hold up MLK and Lincoln (deserved or not) as our greatest citizens because of what they did for individual liberty.

3) We acknowledge injustice. Every one of us is made aware of the horrors of the colonial period, the Holocaust and the gulag.

Even the most ignorant among us is educated of the moral duties and the ethical possibilities of our world. And yet, we choose to do nothing when our own violate the values of our mythos.

The ignorance that can be said to cloak the violence of our forebears cannot cloak our own, for we do not possess it.


> 1) We profess a belief in human rights, etc

We talk like that. But it's just talk. It is clear from our behavior that this is not the case.

I'd love to believe that it is just due to ignorance. Sometimes it is, but I fear that we're running on tribal programming and that's not going to change in a hurry.


This is kind of the core of why I think the moral failures of our time may uniquely condemn us.

In the past, the conscious or wordy parts of our brains didn't exactly contain the moral philosophy we have now, which is supposed to counter our basic cruelty.

We have hundreds of years of it. We call the Enlightenment foundational to our governments and ethics. Many people hold the Bill of Rights sacred like a religious text. We have the most extreme examples of human cruelty to heed as examples. Anyone may read about them, and most American schoolchildren are required to learn about them.

But even that knowledge, that programming is not capable of overcoming our basic tendency toward violence and cruelty. Maybe, there are some behaviors the wordy part of the brain just can't reason us out of.

I hope I'm wrong. Maybe there's an even better set of ideas than the enlightenment, humanism, and human rights that will be more contagious and more powerful to overwhelm our basic nature. Perhaps those ideas are already out there in some culture being drowned out by the hegemony of Western moral philosophy. IDK.


Maybe so. If so, then I don't know what to make of the enlightenment and Nuremberg and all that.


So piercing another human being on a sharp object is morally better?


Face to face for a bit of food or scrap of land is probably better than through an elected representative because we suspect you are an adherent of an ideology we fear.


[flagged]


I see your point but blood does not wash so easily as milk.



