The best defense against malicious AI is AI (technologyreview.com)
111 points by etiam on July 23, 2017 | 69 comments



On a related note, at Blackhat this year there's a presentation on using Machine Learning based malware detection to train your malware to evade ML based malware detection:

https://www.blackhat.com/us-17/briefings/schedule/index.html...
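The generic idea, as I understand it: treat the ML detector as a black box you can query for a score, and keep whatever mutations push that score down. Here's a minimal sketch of that loop (not the presenters' actual technique; the detector, features and mutation step are all made up for illustration):

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    # Stand-in "detector", trained on hypothetical 20-dim feature vectors.
    X = rng.normal(size=(500, 20))
    y = (X[:, 0] + X[:, 1] > 0).astype(int)          # 1 = flagged as malware
    detector = LogisticRegression().fit(X, y)

    def evade(sample, steps=2000, eps=0.05):
        """Greedy hill-climb: keep random perturbations that lower the
        detector's malware score; stop once it predicts benign."""
        best = sample.copy()
        for _ in range(steps):
            cand = best + eps * rng.normal(size=best.shape)
            if detector.predict_proba([cand])[0, 1] < detector.predict_proba([best])[0, 1]:
                best = cand
            if detector.predict([best])[0] == 0:     # now classified benign
                return best
        return best

    malicious = X[y == 1][0]
    evaded = evade(malicious)
    print(detector.predict_proba([malicious])[0, 1],
          detector.predict_proba([evaded])[0, 1])

The actual research works against real binaries and is far more sophisticated, but the query-and-mutate pattern is the gist.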


Depending on one's level of idealism, the best defense against malicious AI is actually educating programmers / hackers / engineers on the implications of their work, and instilling a sense of decency & foresight.

It baffles me that we just accept the kind of malice inflicted on people by programmers because "someone will always do it". As a profession / collection of skilled persons, we should really be better than that.

Obviously, one cannot see the future, nor would we want to be paralyzed by fear of doing anything. But there is a certain minimum requirement for collective responsibility which I really don't think we are meeting at the moment.


Decency and foresight are highly subjective. Actually encoding what we understand as "decent human behaviour" into AI is a huge problem: if you create a system that decides whether to give somebody a loan in the US based on a financial data set, it might come up with any of these hypotheses:

* Don't give loans to people living in [poor area].

* Avoid people with names that aren't similar to the most common ones in the database (i.e. foreign ones)

* When linking the customer data to their social media, if the profile picture looks dissimilar to [preferred ethnicity], do not give a loan.

Now, without any malice from the developer, the system has become racist: it saw the correlation that in the US blacks and hispanics live in poverty more often than others [1]. It knows that poor people pay back loans less frequently and makes the rational decision not to give a loan to that group. This of course reinforces the problem.

But how are we to solve this? Introduce an additional column "race" to the data and bias the results with it? Would that not be just as racist? How do we give the system an awareness not to discriminate against ethnic groups, if the data contains implicit clues? This comes down to giving such an AI human intuition about such questions.

[1] http://www.ssc.wisc.edu/irpweb/faqs/faq3/Figure1.png
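To make the proxy problem concrete, here's a minimal sketch with entirely synthetic, made-up data (hypothetical features and effect sizes): the protected attribute is never shown to the model, yet approval rates still split along it, because a correlated feature stands in for it.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 5000
    group = rng.integers(0, 2, n)                 # protected attribute, withheld from the model
    # Hypothetical chain: group -> neighborhood poverty rate -> income -> default risk
    poverty = np.clip(0.2 + 0.3 * group + rng.normal(0, 0.1, n), 0, 1)
    income = np.clip(60 - 40 * poverty + rng.normal(0, 10, n), 5, None)
    default = (rng.random(n) < 1 / (1 + np.exp(0.15 * (income - 35)))).astype(int)

    X = np.column_stack([poverty, income])        # note: no "group" column
    model = LogisticRegression(max_iter=1000).fit(X, default)
    approve = model.predict_proba(X)[:, 0] > 0.8  # approve if P(default) < 0.2

    for g in (0, 1):
        print(f"group {g}: approval rate {approve[group == g].mean():.2f}")

Dropping the neighborhood column doesn't really help either, since income itself still carries part of the correlation, which is the point about implicit clues.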


I'm not saying teach people some universal moral code, or suggesting we can accurately encode a set of appropriate morals into an AI. Just that people should stop and think about whether what they are doing is appropriate.

It seems silly to take problems which humans cannot solve with all the data, intuition and humanity that we have, and then use that data to train machines to make decisions we know will be flawed.


>Now without any malice from the developer, the system has become racist: It saw the correlation that in the US blacks and hispanics live in poverty more often that others [1].

If the data really suggests that these factors (poor areas, foreign names, social media) affect likelihood of default, should they be completely ignored? I'm sure credit scores and salaries are correlated to race in the US too. Should those be ignored? Just give out loans indiscriminately?

They're not ignored by loan approval now. Worse, inaccurate stereotypes are used as a proxy for some of the factors that may be relevant to a borrower's ability to pay. We don't live in a perfect world where people ignore certain factors.


Are you familiar with correlation vs causation? Without a causal link, making an inference based on correlation is unsound. Encoding unsound reasoning in an AI model, particularly when it reinforces an existing social imbalance, would be less than ethical.

"Weapons of Math Destruction" by Cathy O'Neil delves into this in more depth. It's a very valid concern, particularly in the way non-technical people are trained over time to give deference to the algorithm.


I'm not sure what you mean. AI doesn't involve "encoding unsound reasoning" into anything. That's more akin to how humans think and act, by making decisions based on heuristics learned from a lifetime of experience and societal influences. It would be unethical to encode a "race" factor to be considered in an algorithm, although even if it were included I think it would prove insignificant, since race has no direct causal link to ability to repay when you consider all the other factors (e.g. a poor white person in the same neighborhood would have the same default probability). However, other factors might, such as the ones you have listed (where they live, credit score, etc.).

I am familiar with the book you mentioned.

You still didn't answer my question. If credit score and income are correlated to race in America, should they be ignored from loan applications? What are valid factors to consider?


> If credit score and income are correlated to race in America, should they be ignored from loan applications?

Is correlation causation? Is it fair to encode a judgement based on correlation but not causation?

Or to make your position more clear, do you support racial profiling? If not, why not, and how do you justify that position, and how does that justification apply to judgements based on correlation but not causation?

(My position is that making decisions based on correlation but not causation is the general case for which racial profiling is a specific case.)


I already answered that. No, race should not be a factor, as it does not convey any info not explained by other factors.

What factors are causal? Presumably not where you're living and the other considerations you determined to be implicitly racist.


+1 for Weapons of Math Destruction.


You know, if you code your machine right, and give it the real information it wants (income level), an inference machine will detect that race is a worse proxy than the direct value and will not become racist.

It's when you start to withhold proper information from decision making that those bad proxies start to look good.
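A quick sketch of that claim with made-up synthetic data (all numbers hypothetical): give the model the direct variable and the weight it assigns to a noisy proxy collapses; withhold the direct variable and the proxy picks up the slack.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(1)
    n = 5000
    income = rng.normal(50, 15, n)
    neighborhood = income + rng.normal(0, 20, n)  # noisy proxy for income
    default = (rng.random(n) < 1 / (1 + np.exp(0.1 * (income - 50)))).astype(int)

    both = LogisticRegression(max_iter=1000).fit(
        np.column_stack([income, neighborhood]), default)
    proxy_only = LogisticRegression(max_iter=1000).fit(
        neighborhood.reshape(-1, 1), default)

    print("proxy weight, income included:", both.coef_[0][1])
    print("proxy weight, income withheld:", proxy_only.coef_[0][0])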


But a correlation between income and race might be detected and used by the system as well, making it at least partially racist. Maybe no better than humans today but we should at least aim to make our AI not overtly racist.


Ethics can get pretty subjective, so I think it would be hard to teach a uniform view of decency. Also, I hate to sound pessimistic, but there are a lot of smart people who do much worse things than create malicious software when money is at stake. I think educating people is a step in the right direction, but I fear it will fall short of your ideal.


>> I think it would be hard to teach a uniform view of decency

Of course, but we don't avoid studying moral philosophy because we can't find a uniform model. Nor does not being able to find a uniform model mean that there are not things worth learning from it.

>> there are a lot of smart people who do much worse things than create malicious software when money is at stake

Agreed. Two wrongs do not make a right, and as engineers in the know, we have a duty to stand up to such people when we have the chance.

>> I fear it will fall short of your ideal.

Me too, but the alternative is worse in my opinion.


The problem is that if engineers in the US don't take the job then someone somewhere in, say, Asia will. Ethical rules are not the same everywhere. You can already tell this by looking at experimentation with stem-cells, cloning, et cetera, which is happening faster in Asia than in the US because of regulations.

Also, refraining from building these systems means that other countries get ahead of us. Besides being an economic disadvantage, this could also threaten security.

So I do agree that engineers should adhere to ethical guidelines, but they should be considered in a global context.


This. We need to wake up to the fact that China is willing to do whatever it takes to dominate. They have done massive research on genetics and intelligence, and the top schools in China make parents take IQ tests to gauge children's potential when accepting students. They don't care about diversity or equality, only results.

The US and Europe need to wake up or we'll be facing a billion hypernationalist, genetically modified super-geniuses while the only thing we have is the moral high ground. Based on what I've seen over the last decade we should probably start learning Mandarin.


I'm learning Mandarin and have a Chinese girlfriend, fully ready for the takeover when it happens ;)


I don't think that's the best defense, because even if 99.999% of programmers are perfect, ethical, decent people, all it takes is one psycho to go ahead and let the genie out of the bottle anyway.

Imagine there's a big red "end the world" button that anyone on the planet can push. No matter how idealistic and utopian your views are, do you really trust that there's not a single person out of 7 billion people that might push the button? That is the kind of situation I worry we may face once people figure out how to create strong AI.


Turning what I think you are saying on its head:

There is a 0.001% chance of something bad happening, so there is no point in the other 99.999% changing their behaviour in order to stop that bad thing happening.

I don't think that follows either. Second guessing human nature to the point where you resign yourself to catastrophe is surely less worthwhile than striving to create a sense of collective responsibility before it is too late?


I'm not saying there's a 0.0001% chance of it happening, I'm saying that if you give enough people a chance to create an AI, eventually someone will do it, even if 99.999% of people would choose not to. Right now, nobody knows how to do this, but my point is that eventually people will figure out how: once everyone knows how, then it's just a matter of time before someone does it.

I'm not second guessing human nature, just saying people do crazy things all the time. The vast, vast majority of people are not suicide bombers, for example, but it still happens sometimes. It doesn't matter how much collective responsibility we have; eventually you may just end up with a brilliant programmer with schizophrenia or something who writes an AI in his basement and unleashes it on the world.

I'm also certainly not resigning myself to catastrophe. There are probably ways to prevent or mitigate the negative impact of AI, like the one being suggested in the original article, for instance. I would hardly consider myself a cynic, but forgive me if I'm skeptical of the idea that every human in the world can somehow agree to never do anything that might create a dangerous AI. If it can happen, it eventually will happen.


By that logic the gamma knife that has saved many cancer patients would not exist. Before practical applications comes laying the basic foundation, and once that foundation is down, the genie is out of the bottle. I'm working on an autonomous drone, and I am of course deeply opposed to weaponizing what I know is a ridiculously stupid system, but I know that it will happen. Implication is not action, and knowledge is mostly neutral; it's the application of that knowledge that might be problematic. Whichever way you slice it, it's a really complex issue, and anything worth having is often capable of both great and terrible things, depending on the application. AI is way too primitive to be malicious on its own and will remain that way for years. Now, as for abhorrent malicious uses of what we have now, here's a great example: https://goo.gl/HmBY9k


Sorry, that's how natural selection works. When there's money at stake, someone will always do it.

edit: typo


And people like that are why we can't have nice things.

I'm not saying that education / provoking thought will stop everyone doing something (else) monstrously stupid with AI. Just that we should be doing that educating / provoking more than we currently are.


Even to the detriment of themselves... see "Tragedy of the Commons"


So... the only way to stop a bad guy with a GAN is a good guy with a GAN?


AI, AI, captain.


You know, HN could use some of this /.-like humor.


For something more oriented toward patches and network defense, see http://archive.darpa.mil/cybergrandchallenge/


I competed in that! It was a fun contest and if you want to read more about it, we have a section on our blog dedicated to it with lots of rich technical detail: https://blog.trailofbits.com/category/cyber-grand-challenge/

I am sorry to admit, though, that it did not involve AI or machine learning in the slightest. Most of it was about melding the capabilities of fast, dumb dynamic testing like fuzzing together with the deep analytical capabilities of more advanced program analyses like symbolic execution.
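For a flavor of the "fast, dumb" half, here's a toy coverage-guided fuzzer over a made-up target (everything here is hypothetical and has nothing to do with our actual CRS): blind single-byte mutations find the easy branch almost instantly and essentially never satisfy the multi-byte magic value, which is exactly the kind of wall where the symbolic execution side earns its keep.

    import random

    def target(data: bytes) -> set:
        """Hypothetical program under test; returns ids of branches it hit."""
        hit = set()
        if data[:1] == b"F":
            hit.add("easy")                        # 1-in-256 per mutation
            if data[1:4] == b"UZZ":
                hit.add("magic")                   # needs 3 exact bytes at once
        return hit

    def fuzz(seed=b"AAAA", iters=100_000):
        corpus, seen = [seed], set()
        for _ in range(iters):
            parent = bytearray(random.choice(corpus))
            parent[random.randrange(len(parent))] = random.randrange(256)
            cov = target(bytes(parent))
            if cov - seen:                         # new coverage: keep the input
                seen |= cov
                corpus.append(bytes(parent))
        return seen

    print(fuzz())                                  # almost always just {'easy'}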


I was there, so I probably saw you. I remembered that the competitors used symbolic execution. My comment should have been more general and said "automated".


The sportscaster-style commentary was hilariously awkward at times. A room full of people staring at a stage full of server racks, with a projector showing basic shapes being spewed between locations. Then, off on another screen, a technical person trying to explain to the Science Channel host what the hell was going on with the different types of exploits and code execution graphs.

It's completely indescribable and epically nerdy. +10, would attend again. :)


It was awkward at times; the visualizations were cool but didn't really help you understand what was occurring. But given the content, getting people who aren't very technical excited is a herculean task for anyone. I hope DARPA tries again. They could attempt to do better and make it interesting, or they could embrace the awkwardness and own it. Make it hilarious, make it MEMEy, make it like Tim and Eric.


I'm following the original title for the post for now, but frankly I'm unhappy with almost everything about the style of it. I would encourage changing it.


I would go with "Kaggle Hosts Adversarial Machine Learning Competition".


I like it. Accurate and descriptive.


Would strategies used by AIs that play imperfect-information games like poker be useful for winning a contest like this?


Those who have not seen WarGames are clearly doomed to reinvent it. Just after I finish this game of "Falken's Maze"...


No kidding - what else is a potentially hostile autonomous agent if not everybody.


Yet another thing predicted decades ago by William Gibson in Neuromancer.


Came here to comment "Wintermute?"


That's how Skynet got started.


Well, if there were no AI, there would be no malicious AI either.


This reminds me of a cyberpunk story.


Reminds me of Person of Interest.

I'm still amazed this show had a full run on mainstream TV. One of the best TV/movie treatments of "Hacker News topics" there has been.

https://en.wikipedia.org/wiki/Person_of_Interest_%28TV_serie...



Sure Xerxes seemed like a good idea but we all saw how that worked out...[1]

[1] http://shodan.wikia.com/wiki/XERXES


The name of the topic reminds me of "Watchbird", the story by Robert Sheckley.


Or The City and the Stars by Arthur C. Clarke, though it was not exactly computer AI.

That book still gives me goosebumps every time I remember it.


let's hope that this good AI won't turn bad once it's too late to switch to something else...


the best defence against AI is to reduce the # of bits for the algorithm to use


The Oracle.

She possesses the power of foresight, which she uses to advise and guide the humans attempting to fight the Matrix


Are we still wasting our time talking about evil AI at this premature time? Can someone dig up Andrew Ng’s comments on this?


Boolean algebra was developed by George Boole in 1847.

Konrad Zuse built the Z1 computer in 1936, almost 90 years later.

Just because the technology isn't here yet doesn't mean we can't start discussing the theory and its implications.


Especially when the impact of strong AI will make the impact of automobiles look downright trivial by comparison.


This isn't Elon Musk-style evil AI. This is the more prosaic idea of hackers being able to hack AI systems. For instance, a hacker can put stuff on a stop sign (which won't be visible to humans) to make a Tesla car think it's not a stop sign, and therefore cause an accident. I recommend reading up about it. This is the kind of stuff that can actually happen today.
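A toy sketch of the flavor of attack (not the actual stop-sign research; the "images" and classifier here are synthetic): nudge every input dimension a small amount in whichever direction hurts the classifier most, and the prediction moves a lot while the input barely changes.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 64))                   # pretend 64-"pixel" images
    y = (X @ rng.normal(size=64) > 0).astype(int)     # class 1 = "stop sign"
    clf = LogisticRegression(max_iter=1000).fit(X, y)

    x = X[y == 1][0]                                  # a training example labeled "stop sign"
    w = clf.coef_[0]
    # FGSM-style step for a linear model: move each pixel slightly against
    # the sign of its weight to push the score toward "not a stop sign".
    x_adv = x - 0.2 * np.sign(w)

    print("P(stop sign) before:", clf.predict_proba([x])[0, 1])
    print("P(stop sign) after: ", clf.predict_proba([x_adv])[0, 1])
    print("largest per-pixel change:", np.abs(x_adv - x).max())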


There are practical applications too. For instance, getting around automated filters like copyright or porn detection. Making adversarial captchas that fool computers but not humans.


If something looks like a stop sign to the human but like something else to the AI, then "I" stands for "Idiocy" not "Intelligence".


Your brain is not immune to the problem; it's just hard to automate the creation of optical (and audio, and presumably all other sensory) illusions when we don't have a synapse-resolution connectome of the relevant bits of your brain.

Examples include That Dress, duck-or-rabbit, stereotypes, "garden path sentences", and most film special effects.



Ironically, your second link starts with Clever Hans, which is another example of my point. Machines, even organic ones like our brains, are not magically able to know objective reality, and the failure itself isn't idiocy; the rate of failure (and things like metacognition about the possibility of failure) is (at least part of) the intelligence-idiocy spectrum.


The ever-timely: 'Fearing a rise of killer robots is like worrying about overpopulation on Mars.'


Killer robots are already here : https://www.youtube.com/watch?v=MXm-ofKLB3M

"Generally, the vehicle will have a set of sensors to observe the environment, and will either autonomously make decisions about its behavior or pass the information to a human operator at a different location who will control the vehicle through teleoperation."


This is a strawman. Elon's idea, and what Andrew is countering, is that killer robots can be unintentionally created by someone trying to do good.

Evil robots could always be created by evil actors, and there is nothing interesting here.


Not saying that creating military killer robots counts as "trying to do good", however... if we're talking about a Terminator-like scenario, let's not forget that the robots from the movie are initially human-designed military appliances.


Possibly by someone trying to do good. More likely by someone just following their rational incentives. Very few people are 'evil' actors but most follow incentives.


Look at the ransomware and DDOS botnet epidemics though. If these programs were to infect and control self-driving cars all sorts of science fiction level bad stuff could happen.


Yes, but that is a computer security issue (and a damn important one!), not an ML issue.


I really hate that smart comment. Two big differences are that with overpopulation on Mars, (1) we would be able to address it before it happened or while it was happening, on a time-scale of decades, and (2) even if there was a total disaster and Mars became permanently unlivable and a lot of people died, we'd still have Earth. Whereas (for those concerned about runaway AI) the time-scale could be minutes and the stakes are the end of known intelligent life in the universe.


It's kind of irrelevant to Andrew Ng's comments. It's about something more like cyberattacks, something that can be done very soon, like tricking spam filters.





