Hacker News

Are we still wasting our time talking about evil AI at this premature time? Can someone dig up Andrew Ng’s comments on this?



Boolean algebra was developed by George Boole in 1847.

Konrad Zuse built the Z1 computer in 1936, almost 90 years later.

Just because the technology isn't here yet doesn't mean we can't start discussing the theory and its implications.


Especially when the impact of strong AI will make the impact of automobiles look downright trivial by comparison.


This isn't Elon Musk-style evil AI. This is the more prosaic idea of hackers being able to attack AI systems. For instance, a hacker can put markings on a stop sign (which humans will barely notice) to make a Tesla's vision system decide it's not a stop sign, and thereby cause an accident. I recommend reading up about it. This is the kind of stuff that can actually happen today.
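The stop-sign trick is an instance of "adversarial examples": nudge the input in the direction that most increases the classifier's loss. Here's a minimal sketch of that idea using the Fast Gradient Sign Method on a toy logistic classifier; the weights, the input, and the "stop sign" labels are all made up for illustration, not taken from any real vision model.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(w, b, x):
    # 1 = "stop sign", 0 = "not a stop sign" (hypothetical labels)
    return int(sigmoid(w @ x + b) > 0.5)

def fgsm(w, b, x, y, eps):
    """Fast Gradient Sign Method: step x by eps in the sign of the
    logistic-loss gradient w.r.t. the input, for true label y."""
    grad_x = (sigmoid(w @ x + b) - y) * w
    return x + eps * np.sign(grad_x)

w = np.array([1.0, -1.0])   # toy classifier weights
b = 0.0
x = np.array([0.1, -0.1])   # clean input, classified as a stop sign

x_adv = fgsm(w, b, x, y=1, eps=0.3)
print(predict(w, b, x))      # 1: stop sign recognized
print(predict(w, b, x_adv))  # 0: small bounded perturbation flips it
```

Each coordinate moves by at most eps, so the perturbed input stays close to the original, which is why such changes can go unnoticed by humans while flipping the model's output.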


There are practical applications too. For instance, getting around automated filters like copyright or porn detection. Making adversarial captchas that fool computers but not humans.


If something looks like a stop sign to the human but like something else to the AI, then "I" stands for "Idiocy" not "Intelligence".


Your brain is not immune to the problem, it's just hard to automate the creation of optical (and audio, and presumably all other sensory) illusions when we don't have a synapse resolution connectome of the relevant bits of your brain.

Examples include That Dress, duck-or-rabbit, stereotypes, "garden path sentences", and most film special effects.



Ironically your second link starts with Clever Hans, which is another example of my point. Machines, even organic ones like our brains, are not magically able to know objective reality, and the failure itself isn't idiocy — the rate of failure (and things like metacognition about the possibility of failure) is (at least part of) the intelligence-idiocy spectrum.


The ever-timely: 'Fearing a rise of killer robots is like worrying about overpopulation on Mars.'


Killer robots are already here: https://www.youtube.com/watch?v=MXm-ofKLB3M

"Generally, the vehicle will have a set of sensors to observe the environment, and will either autonomously make decisions about its behavior or pass the information to a human operator at a different location who will control the vehicle through teleoperation."


This is a strawman. Elon's idea and what Andrew is countering is that killer robots can be unintentionally created by someone trying to do good.

Evil robots could always be created by evil actors, and there is nothing interesting here.


Not saying that creating military killer robots counts as "trying to do good", however ... if we're talking about a Terminator-like scenario, let's not forget the robots from the movie are initially human-designed military hardware.


Possibly by someone trying to do good. More likely by someone just following their rational incentives. Very few people are 'evil' actors but most follow incentives.


Look at the ransomware and DDOS botnet epidemics though. If these programs were to infect and control self-driving cars all sorts of science fiction level bad stuff could happen.


Yes, but that is a computer security issue (and a damn important one!), not an ML issue.


I really hate that smart comment. Two big differences are that with overpopulation on Mars, (1) we would be able to address it before it happened or while it was happening, on a time-scale of decades, and (2) even if there was a total disaster and Mars became permanently unlivable and a lot of people died, we'd still have Earth. Whereas (for those concerned about runaway AI) the time-scale could be minutes and the stakes are the end of known intelligent life in the universe.


It's kind of irrelevant to Andrew Ng's comments. It's about something like cyberattacks, something that can be done very soon, like tricking spam filters.



