Hacker News
Deep learning pioneer Yoshua Bengio is worried about AI’s future (technologyreview.com)
185 points by jonbaer on Nov 20, 2018 | 137 comments



"It should be noted that no ethically-trained software engineer would ever consent to write a `DestroyBaghdad' procedure. Basic professional ethics would instead require him to write a `DestroyCity' procedure, to which `Baghdad' could be given as a parameter."

-- Nathaniel S. Borenstein, https://en.wikipedia.org/wiki/Nathaniel_Borenstein


deleted


I think Borenstein's thing was a joke about people not being very ethical.


Right over my head...


> I don’t completely trust military organizations, because they tend to put duty before morality

Ah, first, a duty flows from a moral position, so perhaps the master misspoke. Second, I submit you can strongly trust military organizations to do their duty. They will stop at nothing to kill their enemies with extreme prejudice. And the people who lead military organizations, in most cases, have thought quite long and hard about their moral positions. It is the nature of the business, when sending people to die, to question everything about yourself.

Now, that doesn't mean war is a civilized affair. It is awful, insane. And I frankly don't know a lot of senior military leaders who have ever been eager to go to war, except in cases where they believed the operation truly met the standards of jus ad bellum. What you often find in histories are senior officers getting replaced when their advice to not enter war is not in line with the positions of their civilian leadership.


A more charitable reading is that military organizations are structured so that everyone but the very top leadership follows 'duty', i.e. orders, without thinking about the morality of those orders (I know that in principle they're supposed to refuse unlawful orders. In practice, we have fairly mixed results on that front. Not to mention the fact that a great deal of barbarism can be perfectly lawful in war.).

But as anyone who's been in an organization of more than ten people can attest, what the leadership says it's doing, what the leadership thinks it's doing, and what the organization is actually doing are three very different things. In most organizations the mismatch usually just ends up in money leaking everywhere and stupid things being built. In militaries, all sorts of atrocities can happen, and this is amplified because the boots on the ground and the cogs in the machine are trained to not think too hard about the ethics of their actions. So I think that quote is a perfectly reasonable take on the matter.


> are trained to not think too hard about the ethics of their actions.

My experience has been that a great many officers think very hard about the ethics of their actions. War is mainly vast expanses of boredom punctuated by short periods of terror. And thinking is about the only thing left to do in those vast expanses.


Apparently they think about it yet execute the orders regardless. Just judging by the atrocities of all wars.


Do you think you shouldn't steal things from other people? Of course. I mean I think we'd agree that everybody, more or less, feels this way, right? Yet there are more than 8 million thefts per year in the USA alone. It seems that the latter fact contradicts the former, but it doesn't. You can't compare things to a baseline of 0. You need to compare them to what things would be like if our proposition were false. For instance, if most people believed that it was okay to steal things from people, then there would not be 8 million thefts per year, there'd be billions. 8 million thefts is actually very strong evidence that most people believe you should not steal from other people.

And so too for atrocities in war. The non-zero number of atrocities in no way suggests that people just blindly follow orders, as a matter of fact. You need to compare these numbers to what would happen if people gave no consideration to morality whatsoever in their execution of orders. And what are those numbers? I'm not sure, but I expect they would be vastly higher than the relatively few atrocities we can reference.


There's a more subtle argument, that the military as a whole views the word "atrocity" much differently than the rest of society. Though much of society now agrees with a sort of self-referential notion that our institutions and structures define what is an atrocity, so that whatever we do cannot possibly be an atrocity, if the rules are followed.

We see this sort of attitude when it comes to discussions of police brutality. Many police officers view the use of lethal force through the lens of self-defense in high stakes scenarios. Others view the use of lethal force through the lens of "police are not judge, jury, and executioner".

When people get shot as they run away from the police, even if each side of the debate might be rational in their own world-view, the conclusions are completely opposed.


I understand what you mean, but I am just pointing out that, because of its nature, no matter how hard people in the organisation think about morals they are the hands that carry out the atrocities, and these happen a bit too often considering. I have experience with the military as well, and the tone of the "oh, they think so hard about morality, consequences" argument just makes my blood boil.


Then I would put forth that you don't have much experience with military leadership. In practice, morality, as well as cause and effect, are a constant discussion among military leadership. To add to this: It's the lower echelons of leadership that are more about numbers and efficiency and even lower is where it's about "getting it done".

The disconnect, I believe, is what one considers duty another may consider blind faith. There is a vast amount of room for ambiguity here.


Look at it from a machine learning perspective: moral hazard is the most poorly labeled dataset of all. Adding as much synthetic data to the mix as possible is the best you can do. We all acknowledge it's insufficient. But reading authoritative texts, writing as best you can, and most importantly, practicing are the only things you can do. What exactly do you want? No war? That's a political issue. No atrocities? As grandparent said, to err is human. Let he who is without sin cast the first stone.


This appears to base the argument on a comparison with the worst-case scenario; we don't need genocidal numbers of casualties to determine whether morality has gone out the window. And that's another thing: numbers. The obsession with numbers, body counts, injuries, all to justify the reason we are there. In Vietnam the US was fighting guerrilla warfare; in the Middle East it's called terror.

None of this will be declassified for years; it's technically still not over yet. We'll be waiting a long time to know some of the things we still don't know now.


War is, by its nature, atrocious. Any activity that places death as the solution to problems is atrocious.

There is a point at which we have to consider whether war is worth the atrocity. What is so important that entire civilizations would condone the slaughter of their 'enemies'?

I have some of my own opinions on the above question; I only ask because I think the question should be considered carefully.


> War is mainly vast expanses of boredom punctuated by short periods of terror. And thinking is about the only thing left to do in those vast expanses.

Hardly.

https://www.youtube.com/watch?v=tixOyiR8B-8

> On the third day I was there, this guy who had picked me up in the Jeep, a corporal who I was ultimately going to replace, he and I were in the battalion intelligence section, we were sent down to the tractor park, the amphibious tractor park to meet a bunch of detainees. It was our responsibility to take care of prisoners, and detainees were a classification of civilians, they were not combatants; they could be detained for questioning, which is why they were called detainees.

> And Jimmy and I went down to the tractor park and two tractors came in, they had a whole bunch of Vietnamese up on top high flat-topped vehicles about eight or nine feet tall, and as the tractors wheeled into the park the Marines up on top immediately began hurling these people off, and they were bound hand and foot, so they had no way of breaking their falls, and they were old men, women, children, no young men, and I couldn't believe these guys were treating these people this way, and I turned to Jimmy and said, I grabbed him by the arm and said "What are those guys doing? We're supposed to be helping these people." And Jimmy turned to me and he looked at my hands on his arm, I sort of took them off, and he said "Ehrhart, you better keep the mouth shut until you know what's going on around here." I think it was at that point that I realized things were not quite what I was expecting.

> [..]

> None of that distilled itself into the clear kind of expression that I'm presenting now. What I began to understand within days and which became patently clear within months was that what was going on here was not what I had been told, what was going on here was nuts, and I wanted to get out. I knew if I was still alive on March the 5th 1968 they'd stick me on an airplane in Danang we used to call it the freedom bird and I could fly away and forget the whole thing. Turned out not to be quite so easy to forget it, but that was the notion, and certainly my last eight to nine months I ceased to think, I quite literally ceased to think about why I was there, or what I was doing. The sole purpose for my being in Vietnam at that point was to stay alive until I could get out.

> And the reason for that is, you know, the kinds of questions that began to present themselves were just.. the questions themselves were ugly and I didn't want to know the answers. It's like banging on a door, you knock on a door, and the door opens slightly and behind that door it's dark and there's loud noises coming like there's wild animals in there or something. And you peer into the darkness and you can't see what's there but you can hear all this ugly stuff.. do you want to step into that room? No way, you just sorta back out quietly, pull the door shut behind you, and walk away from it. And that's what was going on, those questions, the questions themselves were too ugly to even ask, let alone try to deal with the answers.


I've been to war...9 months of patrols punctuated by cleaning the shitter and the occasional mortar round or sniper's bullet sent in your general direction. 0/10 would not go back.


David Hoffman's YT channel is pure gold


How does this refute the sentence you quoted?


Thinking about what one is doing is not the only thing that is possible. It's perfectly possible to think about it in ways that further remove oneself from one's actions (scapegoats, rationalizations, etc.), and it's possible to not think about it at all.

That in turn does away with the implied claim that we can "trust" people to think about what they're doing because it's supposedly the only thing they can even do.

It's like saying the only thing you can do when you're in a gang is to consider your actions and discuss them with others, so just from first principles (which we pulled from thin air) we can make this deduction about gang life.

I mean, the context of this is taking issue with the statement that a person doesn't "completely trust" military organizations, that's bad enough. But the claim that being bored a lot between periods of terror means people genuinely reflect, that's stunning.


you should watch the clip and you'll understand


> the people who lead military organizations, in most cases, have thought quite long and hard about their moral positions. It is the nature of the business, when sending people to die, to question everything about yourself.

Do you have any data backing up this claim? To me, it sounds like wishful thinking. The detailed and well-studied cases that I can think of, e.g. the My Lai Massacre, the murder and subsequent coverup of Patrick Tillman, the murder of LaVena Johnson, indicate that the people who lead military organizations, and the systemic structures within these organizations, typically have a very questionable (if any) sense of morality.


> Do you have any data backing up this claim?

The last 24 years of my life and the great many excellent enlisted and officer members of the armed services I have worked alongside.

We study My Lai, Abu Ghraib, the Vincennes, and other failures of leadership. I'd count those as the exceptions that prove the rule. We also study failures of civil and business leadership, but those are so plentiful they occupy every newspaper every day.


I could never understand why Abu Ghraib was a huge story in the west. It seemed such a tiny thing compared to the grand invasion/slaughter of Iraq going on at the time. Uncountable thousands killed. Yet somehow Abu Ghraib was the thing that troubled Americans. (That and how the French wouldn't join the slaughter.) Astounding.


The February protests in western countries¹ against the upcoming invasion were much larger than any reaction against Abu Ghraib; they were some of the largest anti-war rallies in known history.

¹I realize they were worldwide, but we're talking about the west


Sure, I was a part of the huge Sydney protest, carrying a poster listing the names of about a dozen chemical weapons the US had sold to Iraq, under the title "Where did Saddam get his weapons?"

I was talking about the size of the story. (e.g. If the US kills 3,000 people in a far-off country, no biggie. However, 9/11 was a huge story.) The Australian media was largely echoing mainstream US TV opinion, as it usually does around the times the US is preparing to invade somewhere or is in the middle of doing it. Well, a lot of those US shows are shown here too. There was a lot of soul searching in the US about the extremely troubling Abu Ghraib scandal, which went on for some time and left some kind of permanent mark. The protests were covered when they happened, but I don't think they produced the same moral... pseudo-crisis at all, and quickly disappeared. They could be ignored.


Abu Ghraib and My Lai were, as the parent said, "failures of leadership" - whether the failure involved troops not being disciplined or simply the troops' indiscipline becoming known to the public. The invasion and the Vietnam War were policies. Leadership decided on these, and if the death and destruction involved had remained entirely under leadership's control, no further problems would have been perceived.

That the excesses of war, rather than war itself, can be considered the problem is a success for policy since it allows a wedge to be driven once the public sours on war. "No more Vietnam Wars", a slogan the vast majority embraced after Vietnam, was a slogan that could morph to "no more unsuccessful wars fought with conscripts whose discipline breaks down, next time we'll do it right!".


Ok thanks. Note that the subject has moved from the initial questioning of the claim that military leaders "have thought quite long and hard about their moral positions. It is the nature of the business, when sending people to die, to question everything about yourself." Killjoywashere was asked "Do you have any data backing up this claim?"

And in answer was given these 'failures of leadership', which seem to be PR disasters (the problem was not that they happened but that people found out about them - "simply the troops indiscipline becoming known to the public" - things that couldn't possibly be spun as the greatest country on Earth doing God's work) or people not doing their duty.

The moral dimension has entirely disappeared, which rather supports garyhunt's suspicions. Maybe questioning everything about your country is more important than 'questioning everything about yourself'. More important for the hundreds of thousands that were killed, at least.


> I'd count those as the exceptions that prove the rule.

Could you give a few examples where our military leadership took an action that backs up your claim of exhibiting a strong sense of morality?


You're asking for something that doesn't make news. There are multiple classes of this.

The untold hours NCOs and officers spend sweating details to reduce moral ambiguity in operations.

If what you want is to see men take bullets for innocents, there are plenty of examples, but unfortunately there aren't many people around to tell those stories. A few have made it out, and you can find some of them in the Medal of Honor citations (1).

General Dean's delaying action at the outbreak of the Korean war (2).

More subtly, General Shinseki testifying to Congress against administration dogma on the number of troops needed in Iraq, and sticking to that despite intense pressure.

Admiral McRaven's life in general.

(1) https://smile.amazon.com/Americas-congressional-recipients-o...

(2) https://en.wikipedia.org/wiki/William_F._Dean

(3) https://www.nytimes.com/2007/01/12/washington/12shinseki.htm...


For a discussion about morality and, arguably, lack of morality at an extreme level (strategic nuclear war plans) - worth considering the discussion between Marine General David Shoup and Air Force General Power:

https://www.dailykos.com/stories/2007/3/22/314933/-

"any plan that kills millions of Chinese when it isn't even their war is not a good plan. This is not the American way."


I don't think that is a reasonable question.

To use an extreme example: I can ask for examples where someone has committed murder, but it would be unreasonable to ask for examples where someone hasn't committed murder.

There's an asymmetry to showing morality or immorality. Exhibiting a sense of morality is the standard case, and it's hard to show it. Being immoral, however, can be trivially shown by pointing out the mores or law being broken by a specific behavior.


I think the question is very reasonable; your example is flawed because the question is not about examples of someone not doing something. He is asking if someone actively worked against or seriously questioned a murder he/she was ordered to commit. Parent is sceptical because of prior contrary evidence that military actions and morals conflict frequently.


> He is asking if someone actively worked against or seriously questioned a murder he/she was ordered to commit

How could one reasonably produce publicly available documentation where someone refused to execute an unlawful order? Why would someone issue such an order in the first place?

Think about what could possibly motivate a soldier's action: the consequences of questioning an order are far less severe than the consequences of executing an unlawful order. "I was only following orders" has long been discredited as a legal defense.

> Parent is sceptical because of prior contrary evidence that military actions and morals conflict frequently.

Morals are not universal. To which morals are you referring specifically? Some people are morally opposed to violence in principle, the military obviously is not.

[Edit]

I'm still in disagreement as to the logic of the question, but I am beginning to think that the mere existence of courts-martial in the interest of upholding military law (i.e., the codified morals of the military) should be sufficient proof that a military body can exhibit a strong sense of morality.


So the "leadership" seems to fail again and again although they thought really hard about their moral positions.


This "citation needed" is misplaced IMO. And if I'm not mistaken, all of your examples have nothing to do with "people who lead military organizations"?


> if I'm not mistaken, all of your examples have nothing to do with "people who lead military organizations"?

If anything, that doesn't make these examples moot; rather, it makes the claim that military leaders supposedly think long and hard about these things the red herring.


You might find a few examples of military officers counseling against war in history, mainly in liberal democracies, but you'll probably find a lot more the other way round. Even then the counsel is usually more about when, where and how rather than whether to do it at all.

Not all militaries are meaningfully accountable to political leadership, and not all political leadership cares about morality anyway (cough Russia cough). In free democracies yes, the military is at least to some extent held accountable to the government and the people, but we still end up with atrocities, only a few of which are ever held to account, such as in the Balkans war. Or in places like Chechnya or eastern Ukraine (so far) nobody is ever held to account. A few is a heck of a lot better than none though, of course, because it can act as a deterrent and an encouragement to the organisation to do better next time.

Realistically, the decisions about what weapons the military should have, how they should be designed and what safeguards should be in place are all political decisions. Only how they are employed in war is really in the hands of the military, and in liberal democracies even that is sometimes a political decision (e.g. using a submarine to sink the Belgrano in the Falklands War.)


The difference between military and civilian decision making is this:

Civilians can be politically active, convince others and refuse to work, or quit on a case-by-case basis without criminal penalties.

In the military you can only refuse to follow obviously illegal orders, and even then it's a huge personal risk. Refusing to follow morally wrong but technically legal orders will lead to criminal punishment.

For example: targeting civilians is illegal. But if every male above 16 years of age is considered a combatant, then it's legal to kill unarmed children from the sky. Refusing on principle will lead to a military court. A military court is extremely unlikely to rule that refusing to follow orders was the right action.


'Enemy combatants' is a horror built into the system. IME, most people have forgotten that this is how collateral damage was hidden.


  in most cases
I doubt the current American commander in chief has thought long and hard about his moral positions. That being said, the authoritarian regimes of the world will undoubtedly not hesitate to use AI in their military, so some form of deterrence will presumably be required.


[flagged]


It seems to me that it is your view of reality that might be distorted if you can and have inferred that the parent has an irrational hatred of the current president of the United States from that particular sentence.


I'm perfectly willing to entertain that notion.

Explain to me how the parent could come to that conclusion by other means?

Consider that this president has consistently used Teddy Roosevelt tactics to de-escalate tensions in areas the cultural elites have deemed impossible (e.g., North Korea, China, Russia, the EU, Canada).


I do not want this to sound ironic, but the explanation that seems most intuitive to me is that the parent has been informed of some number of the president's actions (likely through the media and/or acquaintances) and came to the conclusion that the president does not think long and hard about his moral positions. While the parent may indeed have an irrational hatred of the president (I do not immediately see any evidence for it, but one cannot prove a negative), it is not a hard requirement in order to maintain the opinion that the president does not think long and hard about his moral positions. Does that seem plausible?

On an unrelated note, I do not hate the president, but also believe that either he does not think long and hard about his moral positions, or simply does not actually share them with the public. In my case, I believe this because he appears to rapidly shift his opinions, attitudes, and words in a way that makes it difficult to infer what he believes, or even what he wants people to believe that he believes.


Then you have to explain why he didn't want to bomb random Muslims but the Democratic Party, their propaganda outlets, and their commissars did.

Can you explain that? That's the part that makes your statement seem like neither of you have actually thought about this.


I do not understand what "the Democratic Party, their propaganda outlets, and their commissars" have to do with whether or not the president has thought long and hard about his moral positions. I explained why I think that he does not, but even if that reason were not provided, it does not follow that someone with the position that the president does not think long and hard about his moral positions would need to have an irrational hatred of him. To briefly reduce your position to a strawman:

>Person X believes that the president does not think long and hard about his moral positions -> Person X has an irrational hatred of the president

My issue is with that deduction. There is a logical jump somewhere in here that I am not able to follow. You reference multiple third parties and contrast them to the president as if to suggest that whatever differences they may have means that it is clear that this hypothetical Person X has an irrational hatred of the president, or that a particular decision that the president has made proves this, but these are not obvious conclusions to me. As for why "he didn't want to bomb random Muslims but the Democratic Party, their propaganda outlets, and their commissars did", I actually do not have to explain that. The reason that I have given is in no way refuted by or related to this claim.

Instead, could you explain why the opinion that the president does not think long and hard about his moral positions is sufficient evidence that the holder of that opinion irrationally hates the president?


Because it has no basis in reality, as evidenced by the fact that, in the examples provided, he's demonstrated more morality than anyone else in politics.

I'm not really sure why this is confusing. If you think he doesn't think things through, you have to explain the counter examples.

Otherwise you're just ignoring reality and cherry picking facts.


Would you say that your argument is essentially "This person is wrong in such a way that would not be possible without irrationally hating the president"? If that were the case, I would argue that people can be wrong for any reason(s), which seems like a truism to me.

As for the counter-examples, you have not provided any. You are arguing that the president is the most moral politician (which, while an extraordinary claim, has not been disagreed with by anyone in the thread as far as I can tell), while I am arguing that someone's lack of belief that the president thinks long and hard about his moral positions is not sufficient evidence that such a person has an irrational hatred of the president. I would normally have left it there since we may not even be talking about the same thing, but it seems like your argument is for a unified theory that not only is the president moral, but he is moral to the extent that the belief earlier expressed is evidence that the person holding it both hates the president, and does so irrationally. I was wholly incredulous of that claim, but was seeing if there were some obvious part of it that I missed. That is the confusing part. Does that statement actually make sense to you without 5 or 6 extra assumptions in between?


That is correct. You have no concrete examples of immorality, merely gross generalizations. When presented with cases showing that he is more moral than people you defend, you obfuscate and ignore and claim incredulity, because you have no argument.


It’s not merely military organizations. The US president and Congress use military force on any target in the world even when there’s no or very little benefit for the security of US citizens, even under the influence of lobbyists. Think of Vietnam or Iraq 2003, and others.


> Ah, first, a duty flows from a moral position, so perhaps the master misspoke.

I can't speak for them, but I'd say obedience would be a better word here.

> the people who lead military organizations, in most cases, have thought quite long and hard about their moral positions

Then that just proves that there are leaders in military organizations that even after thinking long and hard about something, come up with abysmal results, which they never quite seem to be capable to even own up to.

https://www.youtube.com/watch?v=_J2VwFDV4-g

> except in cases where they believed the operation truly met the standards of jus ad bellum

Oh, so I can trust they believe that, and when it's instead a war of aggression, I'm to be comforted that they believe it's not? And then exactly those people who already can't seem to use their brains responsibly are supposed to be allowed to build machines to make "decisions" and kill autonomously?

All you said is what everybody said, historically. Everybody is either the good guy, or when they fuck up, they're not obedient to evil, they're simply making "mistakes"... but at least they thought long and hard, it's not always smooth sailing for those who kill others.

As if that perspective is the only one that matters, while ignoring the people who were murdered by militaries through no fault of their own, and any duty a person might have towards them.


You're asking members of a military, that is, the custodians of ultimate state violence, to decide who the enemy is? Are you sure about that?

To whom does the soldier have a greater duty, the government elected by the citizens of his country, or the citizens of some other country?

Let's say I take a 4-star general, and strip him of all rank. Put him in a platoon and tell him to follow the orders of the lieutenant in charge. No matter his geopolitical insights and experience, he is now dependent on the information of the lieutenant. It's no longer theory, now it's specifics. He is told to knock down that door and shoot the first person he sees. How can he possibly make any decision other than to execute the orders he is given? This is the terrible responsibility of the lieutenant: he is the last source of information and direction the soldier has. Would you want your soldiers to die for a lie? As the result of misdirection? Do you want them to know to kill an innocent?


Merriam-Webster puts duty arising from morality third, after (1) filial duty and (2) obligatory tasks (https://www.merriam-webster.com/dictionary/duty). I believe the usage here was obligatory tasks.

I also think obligatory tasks is unfortunately the way that duty is commonly used in our degraded age.


Hey! It sounds like you know what you're talking about. Could you maybe point me to some sources where one could read up on how military leaders decide to go to war, what set of morals they need to come up with to be able to send people to war, or this "jus ad bellum" concept (never heard of that before)?

I would really appreciate it!


Military leaders don't decide to go to war. That's for elected leaders. Waging war and studying how to wage war are the proper roles of the military. That said, any civilian leader with a clue should ask their military commanders what a war would entail, those critical "how much" questions. And if the numbers don't add up, probably take a pass, look for a diplomatic solution. This is where honest warriors like Shinseki lose their jobs and people like McNamara, Taylor, and Westmoreland end up in charge.

As for reading up on the general principles upon which the Western world built its world order, how to know when a war would be just, the collected works of Plato, starting with The Republic, and the stoics would be a good start. I would advise Irvine's Guide to the Good Life as an introduction to the stoics.

I would also add Errol Morris's The Fog of War, including McNamara's separate 10 lessons (1), and, as counterpoint, Ken Burns' Vietnam series. The two together are an unparalleled view of the entry into war, and its consequences.

(1) https://en.wikipedia.org/wiki/The_Fog_of_War#Ten_additional_...


I would also add a very significant figure in Just War Theory (another useful term) is St. Thomas Aquinas (as I'm sure you know but maybe others don't).

https://en.m.wikipedia.org/wiki/Just_war_theory#Jus_ad_bellu...


Thank you!

I actually already read "A Guide to the Good Life" and was planning on re-reading it this winter.


Not quite what you're looking for, but I found this source insightful regarding how the military thinks about leadership. Very progressive, I'd say; see for example the Gardener's approach:

http://leadership.au.af.mil/sls-skil.htm


People in the military at least have seen war first hand, but they are not the ones making the big decision of joining one, no?


One of the first cyberpunk books I read opens with a drone hit on some European business man at a company resort.

It seems like we’re hell bent on making this reality. I mean, military AI is basically an inevitable future for us at this point, and while it’ll probably take a few years to leak into private hands, it eventually will, and the world will be a little more shitty for it.

Aside from that I’d rue the day the Americans get AI murder drones, especially if I was living in one of the 7 or 9 countries they are currently assassinating people in with the use of the current drone fleet. As terrible as a murder assassin drone is, it’s at least controlled by a supposedly moral being.


No future is inevitable. That's what the Californian Ideology convinced you of.


Meh, maybe it's not that bad. Sure, you'll get the occasional psycho trying to kill his pop-star stalker-victim. But for many high profile people - politicians, mega-corp CEOs, media tycoons - it might be a good way of keeping them in line, knowing 'the people' can finally strike back again. For most normal/low-profile people this won't be a problem... since most of us don't have arch-enemies.

(Playing devil's advocate here, so please take this with a grain of salt ;))


Sounds like the "Assassination Market" theory of decentralizing power from the 1990s.


Seems like the interviewer focuses a lot on juicy questions where Bengio does not have highly developed answers. (Q: AI for War? A: Bad)

Where his opinions are more interesting is in the last questions, outside of the article's agenda of "stuff to freak out about."

Asked what new progress areas he's excited about, Bengio (charmingly) responds with slow progress areas he's frustrated with.

Anyone know (or can guess) if he's referring to anything specific when he mentions (I'm connecting the dots) using deep learning to "learn causality"?


One of the related projects he mentioned being very interested in is BabyAI, which is trying to improve grounded language learning, i.e. incorporating world knowledge into NLP (a minimal usage sketch follows after the links):

* https://arxiv.org/abs/1810.08272

* https://github.com/mila-udem/babyai
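
A minimal sketch of what the benchmark looks like in practice (my own, not from the paper; it assumes the babyai package from the repo above is installed and registers its levels as Gym environments, and that a level ID like 'BabyAI-GoToRedBall-v0' exists - check the repo for the actual names):

  import gym
  import babyai  # noqa: F401 -- importing is assumed to register the BabyAI levels with Gym

  env = gym.make('BabyAI-GoToRedBall-v0')   # hypothetical level ID
  obs = env.reset()

  # Each observation pairs a partial grid view with a natural-language
  # instruction ("mission"); that pairing is what makes the task "grounded".
  print(obs['mission'])        # e.g. "go to the red ball"
  print(obs['image'].shape)    # agent-centric symbolic view of the grid

  # A random agent, just to show the control loop an RL learner would fill in:
  done = False
  while not done:
      obs, reward, done, info = env.step(env.action_space.sample())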


Deep learning has failed to develop symbolic learning; things like model building, causal reasoning using a model, hypothesis generation, and model selection have not seen much progress. This is something Judea Pearl has been pointing out lately.
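
As a toy illustration of the gap (my own example, with made-up numbers): a purely correlational learner fitted to observational data from a confounded system gets the wrong answer to the interventional question, no matter how much data it sees.

  import numpy as np

  rng = np.random.default_rng(0)
  n = 100_000
  z = rng.normal(size=n)                        # hidden confounder
  x = 0.8 * z + rng.normal(size=n)              # "treatment", partly caused by z
  y = 0.3 * x + 0.8 * z + rng.normal(size=n)    # true causal effect of x on y is 0.3

  # What curve-fitting sees: the observational slope of y on x (confounded)
  obs_slope = np.cov(x, y)[0, 1] / np.var(x)

  # What an intervention do(X) sees: cut the z -> x arrow and re-generate x
  x_do = rng.normal(size=n)
  y_do = 0.3 * x_do + 0.8 * z + rng.normal(size=n)
  causal_slope = np.cov(x_do, y_do)[0, 1] / np.var(x_do)

  print(round(obs_slope, 2))     # ~0.69, well above the true effect
  print(round(causal_slope, 2))  # ~0.30, the true effect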


He's talking mostly about military technology. I'd agree that killer robots would be horrible - for their potential to make military action easier, to make it even easier for a single madman to launch a war, and so forth.

I don't think "real" military AI is that close because things on the battle-field have to be robust and current AI seems to be inherently fragile - not always reliable and less reliable in chaotic situations.

But semi-military applications like deciding who a drone will kill have potential ... to do even more harm than drones have already done.


> I don't think "real" military AI is that close because things on the battle-field have to be robust and current AI seems to be inherently fragile - not always reliable and less reliable in chaotic situations.

Do you think the military contractors pushing their military AI solutions care about that? Do the people approving those solutions (usually those in Congress that assign the budgets) know enough to make the distinction between "highly effective AI" and "poor/broken AI that won't achieve its promised objectives during battle"? Do they even care, or do they care more about their campaign donations?

The CIA and NSA have already used awfully inaccurate algorithms to send drones to kill "targets" (like people who they thought used a certain phone number). Why do you think it will be any different with AI killer robots?

The algorithms will likely be just as inaccurate based on the biased/flawed data the AI will be fed, but the damage (civilian casualties) will be multiplied 10-100x due to how much easier/cheaper it will be to create and operate AI killer robots.


That semi-military approach is already happening https://www.theguardian.com/science/the-lay-scientist/2016/f...


Is that drone carrying out kills with no humans in the loop?


No. Predator / Reaper drones are remotely piloted via satcoms link by a crew of two or three, typically pilot / sensor/ weapon operator. They are not autonomous with the exception of auto-pilot functions. The crew are typically embedded in a larger HQ where there may be political / legal cells to advise.

If the target list is wrong it won’t help the targets but the human in the loop may help to avoid blowing up a school accidentally, depending on the amount of collateral damage that can be accepted for a given target.


No but if the humans in the loop trust the algorithm so much, they're more like executive agents of the algorithm than the ones making the decisions.

Just because you don't automate the step of actually launching the missile doesn't mean a machine can't be the originator of the order.


This shocks me. Here is this man who is clearly a complete novice in international politics, economics and history. He says that we should make rogue countries “feel guilty” for developing malicious AI implementations. He hand-waves away the military as simpletons who “act on duty” like a f*king high school student. When confronted with the existential threats of AI he launches into an analysis of the African visa process and “inclusivity.” This is complete and utter garbage. Worst of all he talks about reaching general, human-like AI without ever mentioning that it is basically an automatic extinction-level event for humankind. If you disagree then check out my comment history and change my mind. Preventing general AI is extremely important, and anyone who wants to form some kind of group to encourage regulation and legislation around AI, please get in touch with me.


> we need to be able to extend it to do things like reasoning, learning causality, and exploring the world in order to learn and acquire information.

Having recently read The Book of Why by Judea Pearl, I found his remarks on causality particularly interesting. Pearl’s approach is based on causality being a testable assumption that requires domain expertise to be expressed. Having done that, it provides powerful techniques to address causal questions from data.

Any effort towards a general AI capable of causal reasoning should instead be able to create causal assumptions from experience and “reasoning”.

I have not seen much discussion around (I have not looked much) how to combine general-purpose AI approaches such as DL with domain-specific approaches such as the causal inference techniques described by Pearl. Does anyone have references to share?
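
For what it's worth, here is a minimal sketch (my own, hypothetical numbers) of the kind of technique Pearl describes: once the analyst supplies the causal assumption "Z is the only confounder of X -> Y" (the domain-expertise part), the backdoor adjustment formula recovers the interventional effect from purely observational data, while the naive conditional contrast does not.

  import numpy as np

  rng = np.random.default_rng(1)
  n = 200_000
  z = rng.binomial(1, 0.4, n)                       # confounder (e.g. severity)
  x = rng.binomial(1, np.where(z == 1, 0.7, 0.2))   # treatment assignment depends on z
  y = rng.binomial(1, 0.2 + 0.3 * x + 0.4 * z)      # outcome; true effect of x is +0.3

  # Naive observational contrast, P(Y=1|X=1) - P(Y=1|X=0): confounded
  naive = y[x == 1].mean() - y[x == 0].mean()

  # Backdoor adjustment: E[Y | do(X=x)] = sum_z P(Y=1 | X=x, Z=z) * P(Z=z)
  def adjusted(x_val):
      return sum(
          y[(x == x_val) & (z == z_val)].mean() * (z == z_val).mean()
          for z_val in (0, 1)
      )

  print(round(naive, 2))                       # ~0.50, inflated by the confounder
  print(round(adjusted(1) - adjusted(0), 2))   # ~0.30, the true causal effect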


How much sleep would he lose if he realized that the very educational institutions he attended were designed in large part to benefit the military's R&D?

Anything we invent or discover can and will be used by the military or intelligence agencies, and even by law enforcement agencies who are committing real human rights violations. Hell, there are graduates who go work for the government because it's stable, or who may even unwittingly end up writing a RAT or researching zero-days.

The truth is, as researchers and engineers, our guesses are only as good as touching one part of the elephant. Everyone thinks the part they hold is the whole, and makes the claim that this is what an elephant should be.

When in fact, if you follow the author's logic, we are all complicit. Every walk of life is influenced by the military and for the military. Internet? Designed for resilient military comms against nuclear attacks. Microwave? TV? Radio?

So the genie is out of the bottle and is now going to work for the military. Should we feel outraged? Should we stop all AI research because of the author's view that the military kills people so it's automatically evil?

What about the people who are ready to give their lives so these researchers can continue living and doing great work? It seems to me that beggars can't be choosers. It's good to have a strong sense of morality so you don't end up writing a RAT tool for a corrupt government that ends up torturing dissidents. But rarely will you even know who is using it and where it's being applied. It's simply designed so you don't have to be burdened with the moral dilemma of how a state should think and behave.

By and large, a state is not a person: no conscience, no morals, only national interests dictated by the few in power. These skewed power dynamics remove the decision makers from the burden of making immoral decisions to further the "national" agenda. E.g.: do we torture an alleged terrorist to extract information that can stop an imminent attack on hundreds? It's certainly not the call of the people who wrote the software to manage torture, and it's not the call of those who follow orders.

We are ruled by ideology, one that constantly sells us an unknown, unpredictable threat. If the author has anything to blame, it is that people, by and large, have already voted with their money and hearts - ignorance is bliss, gathering material wealth is the priority.


> I don’t completely trust military organizations, because they tend to put duty before morality

The rockets go up, who cares where they come down / That's not my department, says Wernher von Braun


My first thoughts were on the failure of logic of just being opposed to the military doing things without proposing how they should reasonably behave but...

Well it seems like the author of the article had the headline in mind before even talking to YB. The responses don't seem like well-thought-out ideas on a complex topic; they seem like random off-the-cuff answers to a journalist's leading questions, and it isn't fair to criticize them.


So he's saying that concentration of "wealth" is bad, and war is bad.

A quick glance at the definition of moral gives "a person's standards of behavior or beliefs concerning what is and is not acceptable for them to do", which suggests that they may be fluid. Are killer robots necessarily any less moral than killer humans? We seek to replace humans with "robots" in many cases under the assumption that they perform better. I suppose in the case of killer robots this could mean more effective killing, or perhaps it could mean more accurate strikes and fewer civilian casualties? (I'm not saying I am an advocate for military AI, just posing some questions).

Finally, suggesting that we need to focus less on incremental progress when DL still isn't completely understood seems premature. I'm not sure another great leap in AI is on the horizon until a leap in computational power or a new framework is discovered.


My take on the "robots mean more accurate/less risky warfare" is that that's precisely the problem, actually (at least if we start from the assumption that war is bad). By "industrializing" warfare and reducing the cost in lives, we make it more politically palatable.

Risk assessment is a massive part of waging war. If the risk to one side is reduced by using robots (or other weapons-of-cheap-destruction) instead of humans, then the likelihood that side will favor war as a conflict resolution mechanism is (all other things being equal) increased.

On the other hand, if the risk is too high, then alternative options are more likely to be favored. This seems to be the thinking around things like nuclear disarmament and why proliferation is generally seen as a bad thing despite nukes being hands-down the most cost-effective way to end a war (at least against an enemy not similarly equipped - see the Cold War). The reason given for the US bombing of Japan was to save lives by shortening the war - though I'm not about to get into whether that decision was justified.

And that's before we introduce AI, which has had notorious bugs like failing to identify dark-skinned faces relative to light-skinned faces (https://www.bostonmagazine.com/news/2018/02/23/artificial-in...) or driverless car crashes. As uncomfortable as I am with proliferation of killer tech in general, introducing AI actually makes my skin crawl.

Even without the unpredictability of AI, "precision strikes" are not necessarily as precise as their name implies: https://www.theguardian.com/us-news/2014/nov/24/-sp-us-drone...


All good points. It would be nice to see some game-theory-type assessment of these imbalances.
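
A toy expected-utility version of the imbalance described above (all numbers invented): if automation cuts the expected cost of fighting for one side, war starts looking "rational" for that side even when nothing else about the conflict has changed.

  def prefers_war(value_of_winning, p_win, expected_cost):
      # Go to war only if the expected gain exceeds the expected cost.
      return p_win * value_of_winning > expected_cost

  value, p_win = 100.0, 0.6
  print(prefers_war(value, p_win, expected_cost=80.0))  # False: human/political costs deter
  print(prefers_war(value, p_win, expected_cost=30.0))  # True: cheap robots tip the calculus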


People too often come to one side or another of an issue without considering context and alternatives. War unfortunately drags everybody down to the lowest common denominator. The reason for this is that the lowest common denominator, the nation that would act with the greatest disregard for anything other than their own victory, is the nation that wins wars and gets to set the rules for the rest of the world.

In 1945 the United States became the only nation to ever launch a nuclear strike on another country. In two momentary flashes we killed hundreds of thousands of individuals - the vast majority being innocent civilians playing no direct role in the war whatsoever. But that act of arguable amorality not only immediately ended a world war, but has since led to a great fear of war among any nation with these weapons, which has led to an unprecedented period of relative world peace. This [1] great video shows the borders of Europe throughout time. The sudden lapse in that constant warfare and transition, the one we now still live in, that occurs just shortly after the more widespread development of nuclear weapons is striking.

But nuclear weapons will not remain the ultimate weapon and deterrent forever. And whichever nation is able to develop appropriate defenses against nuclear weapons and offenses beyond nuclear, will be the same nation that decides the direction of our species moving onward into the future. Again, it's a lowest common denominator problem - but it's one that's ultimately unavoidable by any means other than by ensuring that you yourself are always the most militarily capable nation. And ideally multiple nations will develop these weapons in unison. I'm not particularly fond of e.g. China having unopposed reign throughout the world, but neither am I fond of the idea of an unopposed reign of e.g. the United States. I think we find the best outcomes when we have multiple balanced powers, and in this regard it is our responsibility to never fall behind.

[1] - https://www.youtube.com/watch?v=P9YnYRk8_kE


I don't understand the argument against military robots and AI. Why is it more moral to send men to die instead of machines?

Edit:

About asymmetry on the battlefield, yes, but then you should say the same of all advanced weaponry, right? Each side aspires to asymmetry in war technology.

About the army refusing to carry out orders. That's a good point that I hadn't thought of. I'm not super sure that current humans are really that great, but it's a fair point.

About the cost of war. High end military technology is very expensive and something this sophisticated probably won't be cheap. But even disregarding that, I would prefer thousands of machines being destroyed to humans dying.


Because I think people take pause when considering sending their own countrymen to their deaths. They won't think as long or as hard if killing the enemy is simply a question of deploying robots. Think drone strikes.

Also, recent developments in machine learning have seen computer programs outperform humans on certain tasks. Granted, those tasks currently are very specific, but if we extrapolate 30 or 40 years into the future, we may have killing robots which can't possibly be defended against by people without comparable technology.

I think it's also simply a matter of imagining the particulars of a world with sophisticated killing robots. Imagine some kind of hunter drone tracking you, finding you, and dispassionately killing you. Do you want to live in that kind of world? Would you want to suffer that kind of fate?


> Because people take pause

I generally agree, especially if the bar is "taking pause", but playing the devil's advocate, is this _really_ true?

They pay lip service to it. They weigh the politics of it. But I feel that between the Vietnam war and the 2nd Iraq war we can at least question the degree of their concern.

Project 100,000 is a specific example I'd point to: https://en.wikipedia.org/wiki/Project_100,000


I'm sure there are counterexamples to my statement but I personally believe that it is true that most people would struggle with knowingly sending soldiers to their death. It's just my opinion.


There are those who believe that a death in protection of their country is honourable, and those who believe whatever number of people are sacrificed in the name of their flag to be worth it.

I can't imagine it, but they do exist.


How would comparable technology protect against swarms of drones designed to track down specific people or even worse just a large swath of people?


> Imagine some kind of hunter drone tracking you, finding you, and dispassionately killing you.

In this case, you need to surrender and become a slave to survive. It'd be dumb to fight that killing robot.

But it's your own fault for losing your freedom. You should have developed the killing robots before your adversary did.


Where in that story did the robot ask for surrender?


It is not about morality in the case you describe but in the potential for more wars and more victims. Some of the reasons I dislike the idea are:

- in a democracy you need to convince society that they should give their lives, so you need to have or invent a good enough reason; this means you can't start a war easily.

- humans trust computers too much these days; one example was a person who almost drove over a cliff because Google Maps told him to. He noticed for a while that something was not right but trusted the computer more than himself.

- humans can think and most of the time make good decisions and avoid pointless deaths. I am thinking of the Russian soldier who could have launched a nuclear strike because of some faulty "sensors"; a computer would have just done it.

- there was a case posted here on HN where a German pilot decided not to destroy a damaged enemy plane because he saw the enemy pilot and could not just kill him. Most humans do not like killing, whereas a killer robot would have no problem killing children, wounded soldiers, or doctors.


It seems likely that soldiers will still be fighting on each side for quite a while and moreover, it is certain that there will be civilian casualties aside from the military casualties on at least the losing side.

Think of it this way - war isn't a game. Each side has a strong incentive to make the other side fear to go to war with it. Killing people is a necessary part of this. The progression of technology has generally resulted in an advantage for the attacker over the defender. So two armies in conflict aren't going to work by "may the best robot win while we humans watch". Rather, it seems much more likely a robot war of the future would be something like "my swarm of death bots kills your people while your swarm kills mine".


A well-armed state sending an efficient army of machines against a less-well-armed state makes for some power problems, and it isn't machine-vs-machine. It's going to be machine-vs-people-who-can't-afford-machines.

Imagine the first Gulf-War if the USA didn't have to care about how many of their own people were dying. It makes it easier for scorched-earth policies.

... Then take one more step. Many military advances are then handed down to the services like the police.

It doesn't matter if you have the right to bear arms if the rather cheaper machines outnumber you, and can outgun you.

Governments have a serious problem with scope creep. They promise things will only ever be used for one thing, and definitely not the other thing... Until the next government takes over. Or in the case of the current US government, tomorrow.


This is my take on it.

A government with an AI army is free to suppress its populace. A government with a human army may suppress its populace, but the army can refuse to do so.

Civilian control of the military, and a military that is composed of citizens (particularly in a democratic republic) may not choose to obey an order to suppress fellow citizens. An AI army would (presumably) have no conscience.


Note that this point matters even short of full automation: the fewer people a state needs to rely on, the more tyrannical it can be.

(Yes, there have been plenty of horrors with mass participation.)


It's not an argument of sending men to die instead of machines.

It's an argument about sending machines to conduct an incredibly asymmetrical and highly efficient method of dispatching humans with reduced oversight and ability to determine if particular orders are controversial or not (Arkhipov comes to mind https://en.wikipedia.org/wiki/Vasili_Arkhipov)


When you send your own people to war there is a cost to society and to one's conscience. Society can also challenge that cost, and it may be enough to stop or prevent an unjust war. What happens when we can wage war with impunity? When there are no costs? As bad as war is, that may be even worse.


The monetary costs of war will still be huge even if humans are no longer on the battlefield. Modern weapons are extremely expensive already, and getting even more expensive with each new generation. An F-35A unit cost is $85M. A hypothetical AI controlled drone with similar capabilities would only be marginally cheaper.


Right, but when those who have corrupted the government for their own purposes actually benefit from that equation, the cost is an encouragement, not a deterrent, towards war. Of course it at least offers a slightly easier antiwar political target, but we've seen how that has failed in the past.


The environmental and human costs represented by that figure are overlooked. How much pollution was created to make that $85 million fighter jet? When a government of the future spends 500 billion to make a field full of robot trash fighting with another government, what are the externalities of that?


If only one side has access to automated weaponry - which is easy to imagine in our current environment of asymmetric warfare - the barrier to using lethal force may rapidly drop.

We already see this in drone warfare, which is practiced in a semi-legal fashion, without civil society having a clue who is being targeted or when. We face the danger of slipping further into some sort of shadow realm where private actors and intelligence agencies engage in war without democratic legitimacy.

Then there is the issue of the technology being abused to target vulnerable populations, like immigrant or refugee groups, again lowering the barrier of using violence or surveillance.


The problem is AI taking the decision to kill people.


There's a lot to the question of morality. Can an AI exercise the same moral judgement a human would make, or is it just an imperfect copy doomed to eventually commit an immoral act? I think the latter is obviously true, given that humans aren't perfectly moral decision makers, either. Is the creator of this AI responsible for the actions of their creation? And many other dilemmas and consequences that seem overwhelming to consider, regardless of your conclusion.

Also, there's the more practical worry of the Skynet scenario. Once a machine is empowered to make judgement decisions to kill humans it's a real possibility that even something as innocuous as a bug or design flaw could turn the world into a sci-fi cliche.

I don't really know where I stand honestly, but I do find the prospect paralyzing.


We don't yet have any AGI that takes decisions to kill people.

So let's be frank: real humans are still here making the decisions to kill people. Only the delegation of non-critical decisions is being transitioned to programs.


Get your own AI, problem solved.


> I cannot tell why the spokesmen I have cited want the developments I forecast to become true. Some of them have told me that they work on them for the morally bankrupt reason that "If we don't do it, someone else will." They fear that evil people will develop superintelligent machines and use them to oppress mankind, and that the only defense against these enemy machines will be superintelligent machines controlled by us, that is, by well-intentioned people. Others reveal that they have abdicated their autonomy by appealing to the "principle" of technological inevitability. But, finally, all I can say with assurance is that these people are not stupid. All the rest is mystery.

-- Joseph Weizenbaum


One robot vs state violence?


I should probably have elaborated on my comment, because of course people will reflexively downvote it, but proxy wars, smaller armies, and now AI-driven conflicts are the natural evolution of warfare.

We went from the slaughter of millions, to a handful of extremely professional soldiers, to, eventually, fights between machines. (MGS4 depicted it in striking detail years ago.)


Do you think that a country would surrender because your robots won a robot fight? There will still be an army and an occupation resistance movement that will fight your robots, and lives will still be lost. The attackers would continue to destroy the roads, train stations, food reserves, energy production facilities... Maybe AI is the future, but it is not better, and we should try to make the use of killer robots illegal.


The concept of a proxy fight is not new; it is probably as old as humanity. The Bible had David vs. Goliath. The Maya used a variation of soccer to settle military conflicts (you can see it depicted on temple walls). You are focused on the "destroying" part, but in fact that is only one part of warfare, and a very expensive one. (Part of the reason we see progressively fewer large-scale conflicts is that we are growing more and more economically interconnected, and the cost of open warfare is just not worth it anymore.)


I would love to see politicians or generals fight directly, but this does not happen at all; instead we see proxy wars where powers A and B fight indirectly by getting involved in some country C's conflict.

I do not believe that getting AI robots involved means we will stop seeing bombs land on bridges. As recently as the war in Yugoslavia, bombs were dropped on economic targets:

"NATO bombed strategic economic and societal targets, such as bridges, military facilities, official government facilities, and factories, using long-range cruise missiles to hit heavily defended targets, such as strategic installations in Belgrade and Pristina."

Are you imagining two robot teams fighting each other, where the winner gets the loser's resources and imposes its politics? Will the population accept that their robots lost?


I see no problem with that. The killed don't care who made the decision. And we could tune the AI to be as trigger-happy as we like it to be. So if you want strict rules of engagement, program them. If you want to cry havoc and unleash the dogs of war, ditto. The moral decision is still yours, only as a standing order; you just skip a couple of middlemen.
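
To make that concrete, a standing order could in principle be expressed as something like the toy sketch below (every predicate, threshold, and class here is a hypothetical placeholder, not a claim about how any real system works):

    # Toy sketch: rules of engagement encoded as a standing-order check.
    # All predicates and thresholds are hypothetical placeholders.
    from dataclasses import dataclass

    @dataclass
    class Contact:
        is_armed: bool
        combatant_confidence: float  # 0.0 to 1.0, from some hypothetical classifier
        near_protected_site: bool    # e.g. a hospital or school

    def may_engage(c: Contact, strict: bool = True) -> bool:
        """Return True only if the standing rules of engagement allow firing."""
        if c.near_protected_site:
            return False
        threshold = 0.99 if strict else 0.7  # the "trigger happiness" knob
        return c.is_armed and c.combatant_confidence >= threshold

    # The human sets the policy once; the machine merely applies it.
    print(may_engage(Contact(True, 0.95, False)))                # False under strict rules
    print(may_engage(Contact(True, 0.95, False), strict=False))  # True under looser rules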


Or rather than being trigger happy they could have higher standards for targeting than even the ICRC suggests. Leave the cook instead of shooting him as a combatant. Shoot to disable rather than kill. Use less lethal weapons even in a real war if appropriate. Deliberately take 90% casualties in order to minimise civilian or even enemy casualties.


You won't send machines to die; you'll send them to kill other humans.


Men can't all share the same bug, or all be hacked, in a way that causes them to kill anything that moves.


With the AI soldier/killer drones the second amendment becomes fairly useless. The government cannot bleed.


Unfortunately the Supreme Court already ruled that the second amendment applies to self defence irrespective of the “militia” part (DC vs Heller, 2008).


The ruling specifically addressed the militia aspect. Quoting that ruling:

The prefatory clause comports with the Court’s interpretation of the operative clause. The "militia" comprised all males physically capable of acting in concert for the common defense. The Antifederalists feared that the Federal Government would disarm the people in order to disable this citizens’ militia, enabling a politicized standing army or a select militia to rule. The response was to deny Congress power to abridge the ancient right of individuals to keep and bear arms, so that the ideal of a citizens’ militia would be preserved.

People nowadays conflate the militia with the military, but they are not the same. The "militia" is the armed citizenry of a state. And similarly, "the security of a free state" refers explicitly to security against the government itself, not foreign invaders, which was the domain of the federal government. An armed population can prevent the imposition of tyrannical rule; an unarmed population cannot. In verbose modern text the amendment might read something like, "A well regulated and armed population being necessary for the protection of a state against tyranny, the right of the people to keep and bear arms shall not be infringed."


Unfortunately, people tend to forget, or were never taught, that the people retain natural rights even if those rights aren't referenced in the Bill of Rights. Indeed, that was one of the main arguments against the Bill of Rights in the first place: that the government would see the incomplete list and think those were all the rights we had, which is a repudiation of the foundations of the American system. One such right is the right of self-defense, which is independent of the right of a militia to keep and bear arms in defense against tyranny. In other words, you could remove the second amendment and the people would still have the right to bear arms.


Additional reading:

"Autonomous Military Robotics: Risk, Ethics, and Design" By Ethics + Emerging Sciences Group at California Polytechnic State University, San Luis Obispo, sponsored by the Department of the Navy

    In this report, we will present: the presumptive case for the use of 
    autonomous military robotics; the need to address risk and ethics in 
    the field; the current and predicted state of military robotics;
    programming approaches as well as relevant ethical theories and considerations 
    (including the Laws of War, Rules of Engagement); a framework for technology 
    risk assessment; ethical and social issues, both near- and far-term; 
    and recommendations for future work.
http://ethics.calpoly.edu/ONR_report.pdf

and "Malak" by Peter Watts

Inside the mind of an autonomous drone with a conscience, but as usual with Watts it all goes wrong.

https://rifters.com/real/shorts/PeterWatts_Malak.pdf


>> and it has proved incredibly powerful and effective for all sorts of practical tasks, from voice recognition and image classification to controlling self-driving cars and automating business decisions.

> and it has proved incredibly powerful and effective for all sorts of practical tasks, from signal processing and signal processing to signal processing and a dubious application of deep learning


This is not worth reading.

I worked with Bengio for a couple of years, and he's a classic example of a not-particularly-talented mid-level prof who's been elevated by the bubble of hype in his field, attracted some talented students who publish papers on which he ends up as a coauthor... and now thinks he is a voice of authority in AI (and many other fields).

Those who disagree with me -- can you name a single important contribution to the field he has made, that wasn't in fact done by one of his students?

Hinton did backprop, LeCun did convnets, Schmidhuber did LSTMs, and Bengio did ... ?


FTA:

>>> The best students want to go to the best companies.

If you're a parent, maybe it's time to teach your kids that "going to the best company" may not be the best outcome for a citizen of the world.


As a devil's advocate for military AI: it might be better in some ways. After all, if you are launching military action, do you want it to be unintelligent? The trend has been from blanket bombing that killed mostly civilians to precision strikes taking out some bad guys and some wedding parties, and in the future AI robots might be able to do things like disable vehicles without killing the soldiers.


>Right now, we don’t really have good algorithms for this, but I think if enough people work at it and consider it important, we will make advances.

Yeah, I think there's a lot of space for advances, if we can combine different backgrounds and intuitions. I'm working on a project where I try to combine some programming-language semantics and Bayesian learning to learn structures.
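
Not the actual project, but as a toy illustration of the general flavor (hypothetical hypotheses and data; just Bayesian scoring of candidate structures against observed examples):

    # Toy sketch: treat a few candidate "structures" (tiny programs) as
    # hypotheses and compute a Bayesian posterior over them from examples.
    import math

    hypotheses = {
        "increment": lambda x: x + 1,
        "double":    lambda x: 2 * x,
        "square":    lambda x: x * x,
    }

    observations = [(1, 2), (2, 4), (3, 6)]  # observed (input, output) pairs

    def log_likelihood(f, data, noise=1e-6):
        # Deterministic semantics: outputs must match, with a tiny noise floor.
        return sum(0.0 if f(x) == y else math.log(noise) for x, y in data)

    log_prior = math.log(1.0 / len(hypotheses))  # uniform prior over structures
    log_post = {name: log_prior + log_likelihood(f, observations)
                for name, f in hypotheses.items()}

    # Normalize with log-sum-exp and print the posterior over structures.
    m = max(log_post.values())
    z = sum(math.exp(v - m) for v in log_post.values())
    for name, v in sorted(log_post.items(), key=lambda kv: -kv[1]):
        print(name, math.exp(v - m) / z)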


I think we should all remember that being a pioneer in AI does not give you any experience or authority in international politics...


The right to bear arms was given to humans who happen to live in this so-called freedom-loving country. It was never given to autonomous non-human entities. So while the military may develop robotic kill squads, their non-military use is illegal. Unless the treasonous Supreme Court doubles down on its treason and gives the right to bear arms to AI, by considering an AI to be a digital person with rights. It could happen. The Supreme Court justices have already shown that they're fully capable of making decisions that favor big corporations over us, the people. Why not autonomous bots? After all, big enough corporations are pretty much autonomous already (they follow profit above all, regardless of who is in charge of them).


Seems like it could go either way; it's not like the bots themselves need the rights or become legal entities beyond being a weapon.

Now maybe this weapon fires when you tell it to shoot that guy over there, or it's programmed to shoot anyone who enters through that door over there, but in the end it's still a weapon under the control and responsibility of the owner.


What you describe is not autonomous AI driven by a dynamic mission, as military AI will be. An AI killer bot can be developed that not only tries to achieve the goal specified by the mission but also alters the mission as needed to achieve a higher-level goal, and the higher you go up in the goal specification, the more autonomy you're giving to the machine.


How do you go from "... the right of the people to keep and bear Arms, shall not be infringed" to an assumption that this somehow limits "autonomous non-human entities"?

The default situation with rights for people is essentially "everything is permitted unless it has been explicitly forbidden by the laws", with the added restrictions of what the government shall not ever forbid.

If the government does nothing, autonomous non-human entities have the right to bear arms - it doesn't need to be given, it's sufficient if it hasn't been taken away. Since the second amendment does not apply to such entities, the government can freely restrict these rights if it chooses to, but currently it has not.


Live and learn!


> treasonous Supreme Court

How is it treasonous?


They allowed corporations to vote with their money and buy candidates. Undermined our “By the People. For the People” fantasy.


AI researchers over-romanticize their robot AI technology. They don't know enough about wars. Morality is something we talk about outside the battlefield.

It is not going to be your robot army fighting a human army. Instead, it will be your robot army fighting another robot army, perhaps less sophisticated.

They'll supplement their disadvantage with humans. You want to be the army with more advanced robots. Otherwise you'll need to put humans on the line. Always better to equip yourself with better fighting capability.


> It is not going to be your robot army fighting a human army. Instead, it will be your robot army fighting another robot army, perhaps less sophisticated.

I don't think there are any current conflicts[1] that could reasonably be described as being between two armies. It's more often between one group of armies from several countries working together and several groups of [terrorists|freedom fighters|"unlawful combatants" ] using guerrilla tactics to attack them.

The exceptions are the ongoing civil wars, but those tend to be an army fighting rebel groups in that country. Would rebels ever get their own robots? How would that happen?

It's obviously possible that we'll see two armies of robots fighting in the future, but that's not really what war is right now.

[1] https://en.m.wikipedia.org/wiki/List_of_ongoing_armed_confli...


The Syrian government got Russian weapons and air support. They even have chemical weapons.

One side or many sides. You can be sure that the other sides are not fighting with their bare hands.


"“Some rogue country will develop these things.” My answer is that one, we want to make them feel guilty for doing it"

Feel guilty?! The man may have contributed greatly to the field of AI, but that kind of comment just comes across as very naive about how the world works.

"Shouldn’t AI experts work with the military to ensure this happens?

If they had the right moral values, fine."

The military adheres to the laws of war and is led by a civilian politician, not a pope. A nation's military carries out the policies of the civilian leadership. If you want a moral military, get moral politicians.

I don't understand the distaste for one's own military forces. These people are not aliens; they're fellow Americans, or French or Japanese or wherever you're from. At least for the major democracies, these men and women are volunteering to defend your ass. They don't get to start wars; the politicians you vote for do.

I very much want the military forces of my country to kick the ass of any threat, and if some tech such as AI could help eliminate that threat faster and with less blood, then brilliant.


> These people are not aliens, they're fellow Americans, or French or Japanese or wherever you're from.

Unless they're a threat that needs its "ass kicked", of course, which is a euphemism for killing them dead. But that's somehow not worse than simply criticizing people, wanting them to stay alive, by connecting their deeds to the consequences of those deeds?

Any teenager can realize this: a military's only positive use is to defend against other militaries. It's like a debugger that can only debug bugs in itself. But it can also be used to slaughter civilians, and it already does that quite a bit too much.

> They don't get to start wars; the politicians you vote for do.

Then how come George Bush wasn't booed off that aircraft carrier when he declared mission accomplished? If you want respect, if you don't want to be ashamed, don't get caught in situations like that. If you want respect instead of shame, then by the time something like Abu Ghraib reaches the public, it should include details about the ruckus it caused in the military, and how people got beaten up by their comrades for partaking in the torture of people, long before the police could get to them. And so on. Instead, we get these uncanny-valley stories about honor and duty and whatnot, from people who don't quite remember what being human is, but also can't quite leave humans alone so they can find a solution for this mess.

When I opted against military service I didn't just say no, I wrote them a kinda fiery letter. If 19-year-old me can do that, others can do it too. But now I don't even get to criticize the military because I'm not in it, because they "volunteered to defend me"? Nah. I see the ads for the Bundeswehr; they volunteered because they're not right in the head if they responded to any of that. They want power, importance, camaraderie. Anything BUT deep personal responsibility, which is why all the promotional material is about how joining the army is "stepping up", just like you repeat the old chestnut of the military protecting us, rather than leeching off us and killing us. By "us" I mean humans, not $country, since, as I said, that country stuff cancels itself out.

If people want respect, let them be respectable. If they do shameful shit, they get shamed. If you are for the swift elimination of threats you should welcome that.

And yes, I do feel for soldiers, I just hide it very well. The thought of kids getting sent off to be made into murderers for the protection of the wealth of people who couldn't give a shit about them doesn't make me think "serves them right"; it breaks my heart. But that doesn't mean I buy into all those rationalizations that always get trotted out. They don't hold up, and at this level of technology we simply cannot allow that level of foolishness anymore.



