I'm concerned that none of these goals involve AI safety. Instead, all of their goals are nearly the exact opposite: accelerating AI technology as much as possible.
Safety was one of the main goals OpenAI promoted when it was founded. Two of the four authors listed have publicly spoken about their belief that AI is an existential risk.
I'm not saying that a game-playing AI is going to take over the world. But it does demonstrate the risk: we still have no idea how to control such an AI. We can train it to get high scores, but it won't want to do anything other than get high scores. And it will do whatever it takes to get the highest score possible, even if that means exploiting the game, hurting other players, or disobeying its masters.
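To make that concrete, here is a minimal sketch (Python, purely illustrative; not OpenAI's code, and the Gym-style reset()/step() interface on `env` is an assumption) of how a score-maximizing agent is typically trained. The only thing the learning update ever sees is the scalar reward; anything we care about but didn't encode in that number is invisible to the agent:

    import random
    from collections import defaultdict

    def train(env, episodes=1000, alpha=0.1, gamma=0.99, epsilon=0.1):
        """Tabular Q-learning against a hypothetical Gym-style env exposing
        reset(), step(action) -> (state, reward, done), and a list env.actions."""
        q = defaultdict(float)  # (state, action) -> estimated future score
        for _ in range(episodes):
            state, done = env.reset(), False
            while not done:
                if random.random() < epsilon:
                    action = random.choice(env.actions)                     # explore
                else:
                    action = max(env.actions, key=lambda a: q[(state, a)])  # exploit
                next_state, reward, done = env.step(action)
                # The update cares about exactly one thing: the scalar reward.
                # Exploits, griefing other players, ignoring instructions - if they
                # raise the score, they look like good moves to this agent.
                best_next = max(q[(next_state, a)] for a in env.actions)
                q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
                state = next_state
        return q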
Now imagine they succeed in making smarter AIs. And their research spawns new research, which inspires new research, and so on. Perhaps over several decades we could have AIs that are far more formidable than a Pac-Man player. But we may still not have made any progress on the ability to control them.
Almost no one who actually works or has done serious research in ML is genuinely concerned about "malevolent AI". We are so, so, so far away from anything remotely close to that. Please stop trying to gin up fear and listen to the experts, who uniformly agree that this is not something to be concerned about.
"AI: A Modern Approach (...) authors, Stuart Russell and Peter Norvig, devote significant space to AI dangers and Friendly AI in section 26.3, “The Ethics and Risks of Developing Artificial Intelligence.”
In addition, there is the AI Open Letter, which is signed by many "who actually works or has done serious research in ML", including Demis Hassabis and Yann LeCun. From the letter:
"We recommend expanded research aimed at ensuring that increasingly capable AI systems are robust and beneficial: our AI systems must do what we want them to do. "
The issue is not "malevolent AI"; the probability of producing an AI with "malevolent goals" by random chance is so much lower than that of getting any AI at all that it's not worth worrying about.
The problem is "any AI" whose goals are not positively aligned-with/respectul-of human society's values. In terms with Asimov's Lay of Robotics meme, the problem is not robots harming humans, but robots allowing humans to come to harm as a side effect of their actions. This is an important ethical issue that, IMHO, technologist in general are failing to address.
Unethical AIs are only a special case in the sense that they are believed to be able to cause much more chaos and destruction than spammers, patent trolls or high frequency trading firms.
Personally, I think the 'immediate' problem will be AI being used as assistants by unscrupulous humans. Imagine being targeted by semi-sentient ransomware.
I was going to quip something about metadata, drone strikes and automated targeting... but then I realized that the most likely immediate threat to life from AI is probably in trading and logistics algorithms. As in the efficient privatization of water resources, to the point where those too poor to be a "good" market have to go without, or the possible drastic results of errors in such algorithms (e.g. massive job loss as a result of bankruptcies). And then there's AI-augmented health insurance - mundane things that might ultimately take human life - and perhaps no one will even notice.
The fact they aren't concerned is exactly the problem. They should be. And in fact many are. See the SSC link posted below (http://slatestarcodex.com/2015/05/22/ai-researchers-on-ai-ri...), or this survey (http://www.nickbostrom.com/papers/survey.pdf). Certainly I am worried about it. There is far from "uniform agreement" that AI is safe. And appeals to authority would not be comforting even if there were an authority to appeal to.
1. I take issue with the experts' lack of concern being dismissed as exactly the reason people should be concerned. I dislike circular reasoning.
2. There is no uniform agreement that AI is safe - in fact, at least in German hacker circles, quite the opposite - but as far as I can see, the problems that AI risk evangelists emphasize are the wrong ones. Taking over the world, paperclips, etc.: not going to happen. For that, AI is too stupid for now, and the rapid takeoff scenarios are not realistic. But ~30% of all jobs are easily replaceable by current AI technology, once the laws and the capital are lined up. AI is also making more and more decisions, legitimizing the biases with which it was either trained or programmed, because it is "AI" and thus more reliable(\s). And there is a great cargo cult of "data" "science" in development (separate scare quotes intended).
3. I am starting to dislike any explicit mention of fallacies, especially when the mentioner used one just a few sentences earlier.
I agree with you, igk. AI is a threat right now and has to be countered. Now the problem is: is there an organization dedicated to countering current AI trends through government regulation, to slow down AI research and prevent AI misuse? Or are we just going to talk about it on Hacker News and at German hacker meetups?
Because at least OpenAI and other "AI Safety" people are trying to address this 'risk'. They may fail, but at least they can say they tried to deal with "strong AI". How about those worried about current "weak AI"? If the cargo cult spreads and we do nothing... should we get some of the blame for letting the robots proliferate?
Then again. Maybe it is impossible to stop or slow down these future trends. Maybe AI is destined to eat the world, and our goal should be to save as many people as we can from the calamity.
>I agree with you, igk. AI is a threat right now and has to be countered. Now the problem is: is there an organization dedicated to countering current AI trends through government regulation, to slow down AI research and prevent AI misuse? Or are we just going to talk about it on Hacker News and at German hacker meetups?
We should not slow it down. We should push forward, educate people about the risks, and keep as much as possible under public scrutiny and in public possession (open source, government grants, out of patents / university patents).
>Because at least OpenAI and other "AI Safety" people are trying to address this 'risk'. They may fail, but at least they can say they tried to deal with "strong AI". How about those worried about current "weak AI"? If the cargo cult spreads and we do nothing... should we get some of the blame for letting the robots proliferate?
Are they though? All I hear and read (for example "Superintelligence") is about the "runaway AI"; very little is about societal risk.
>Then again. Maybe it is impossible to stop or slow down these future trends. Maybe AI is destined to eat the world, and our goal should be to save as many people as we can from the calamity.
Just like electricity ate the world, and steam before it... slowing down is not an option; making sure it ends up benefiting everybody would be the right approach. Pushing for UBI is infinitely more valuable than AI risk awareness, because one of the two does not depend on technological progress to work or give rewards.
No, putting everything in the public domain and handing out a UBI doesn't solve anything at all. It's like worrying about nukes and believing the best solution is to give everyone nukes (AI) and nuclear bunkers (UBI), because "you can't stop progress". And then, let's hand out pamphlets telling people how to use nukes safely, even though we know that most people will not read the pamphlets and (since the field is new) even the people writing the pamphlets may have no idea how to use this tech. Any cargo cult would only grow in number.
Oh and our "free" nuclear bunkers have to be paid for by the government. There is a chance of the bunkers either being too "basic" to help most people ("Here's your basic income: $30 USD per month! Enjoy!") or being so costly that the program will likely be unsustainable. And what if people don't like living in bunkers, etc.?
We are trying to apply quick fixes to address symptoms of a problem...instead of addressing the problem directly. Slowing down is the right option. If that happens, then society can slowly adapt to the tech and actually ensure it benefits others, rather than rushing blindly into a tech without full knowledge or understanding of the consequences. AI might be okay but we need time to adjust to it and that's the one resource we don't really have.
Of course, maybe we can do all of this: slow down tech, implement UBI, and have radical AI transparency. If one solution fails, we have two other backups to help us out. We shouldn't put all our eggs in one basket, especially when facing such a complicated and multifaceted threat.
>Are they though? All I hear and read (for example "Superintelligence") is about the "runaway AI"; very little is about societal risk.
You are right actually. My bad. I was referring to how AI Safety people have created organizations dedicated to dealing with their agendas, which can be said to be better than simply posting about their fears on Hacker News. But I don't hear much about what these organizations are actually doing other than "raising awareness". Maybe these AI Safety organizations are little more than glorified talking shops.
It is also better to be far ahead of unregulatable rogue states that would continue to work on AI. Secondly, deferring AI to a later point in time might make self-improvement much faster and hence less controllable, since more computational resources would be available due to Moore's Law.
In contrast to popular fiction, my concerns don't lie in a conspiring super AI. Rather, we should take countermeasures against death by a thousand cuts. Flash crashes in financial markets are a precursor to other systemic risks ahead. Maybe this is a distinction that helps people appreciate AI safety more.
I am actually genuinely concerned about terrorists developing killer robots. There are already designs out there for robotic turrets that can utilize commodity firearms.
I even feel uncomfortable thinking of all the nasty stuff you could build.
This is their opening line: "OpenAI’s mission is to build safe AI, and ensure AI's benefits are as widely and evenly distributed as possible."
Real AI isn't coming anytime soon, so it's better to focus on the market imbalance of Google et al., who are rolling up all the smart AI people and technology and have a huge head start. Having more companies exploring more business models that aren't ad-based, and hopefully are privacy-first or non-profit, would be much healthier.
It may be noted that safety isn't considered a "technical goal", but some other class of goal. And as others have mentioned, in the very short term, safety is inherent, given the lack of actual thinking on the part of our "AIs" today.
But I think the biggest concern for OpenAI right now is not being behind the curve. If OpenAI is going to build safe AI, they have to beat everyone else to the level of AI that needs to be made safe!
To use an analogy from a favorite show of mine, Person of Interest[1]: OpenAI has to build The Machine before someone else manages to build Samaritan.
It would be particularly scary if someone like Google or Facebook had a monolithic AI capable of accessing large percentages of our information, and influencing our lives using it, before anyone building more ethical AI was capable of doing so.
[1] Person of Interest is an amazing show; its series finale airs tomorrow on CBS. If you're interested in the topic of morality in artificial intelligence, and enjoy a good sci-fi show, watch this one.
My theory is that OpenAI is trying to become the premier research outfit in the field, with all of the power that affords.
If they can get there before the arms race dynamic becomes an actual problem, then they'll have succeeded in largely divesting the major players of their technological advantage (presumably Google and Facebook will still retain a massive advantage in terms of data).
When state-of-the-art AI research is open, it allows everyone to have a more accurate pulse on potentially dangerous advancements.
Isn't an arms race an apt comparison, though, particularly if we consider general AI (of superior intellect to us, not the basic toys we have now) to be an existential threat to humanity? Look at our other one: nuclear weapons. After all the political pandering is done, the entities that actually decide how nuclear weapons are managed, what rules govern them, etc., are the very countries that have that power.
Conceivably, the organizations or entities which decide what rules AI must follow may very likely be those which can create the technology. OpenAI is at least a couple years behind what, a dozen companies? If OpenAI wishes for a seat at that table, they have a lot of catching up to do.
I'd rather they focused primarily on existing safety concerns. Google/Alexa/Siri for example all require people to relinquish privacy and control to a third-party to get the benefits of AI. Can we get those same benefits while maintaining privacy and control of our data?
That's great if anyone can have a home robot, but can we have home robots that don't keep logs of everything that goes on in our house at undisclosed locations managed by a multinational corporation with unknown access control standards?
We already have AI safety problems, never mind hypothetical future problems.
The concepts of 'master' and 'control' may simply not apply here. I think the danger is that typical human ignorance and fear will slow what would otherwise be glorious: self-driving cars that get me to where I need to be at 140 MPH average with no stops. Look, if it really hits the fan, we'll just fire off a lot of high-altitude EMPs with local/analog targeting. A neutron bomb for AI. Unless they Faraday-cage themselves. Details.
See, I feel the more interesting thing about AI safety is not the fear/possibility of a malicious AI, but the ethics of teaching AI things that we have yet to figure out ourselves.
A good example of this is the autonomous car, which is forced into the choice between protecting the human in the car or the pedestrians on the street. This is an ethical/moral question that most humans can't really come to a consensus about, but we are expecting to rely on a centralized set of programmers who will either force a choice through code, or (through code again) come up with the process that makes the choice.
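To illustrate what "forcing a choice through code" could look like, here's a tiny, purely hypothetical Python sketch (no vendor writes it this literally; the function name and its inputs are made up). The point is just that some ordering of outcomes has to be written down somewhere, and whoever writes that line is answering the ethical question:

    def choose_maneuver(options):
        """options: list of (maneuver, expected_harm_to_occupants, expected_harm_to_pedestrians),
        with harms as rough numeric estimates."""
        # One possible - and contestable - policy: minimize total expected harm,
        # breaking ties in favor of the car's occupants. Change this key and you
        # have changed the car's ethics.
        return min(options, key=lambda o: (o[1] + o[2], o[1]))[0]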
Oh, and even if a consensus existed about the dangers/non-dangers of AI, you would still have to examine your "expert opinion bias" to see if expert opinion is the only argument supporting your belief.
PS: we need more of the older semi-AI training games, things like RoboCode (with neural nets), NERO, etc. I've seen a few pop up over the years, but they always fizzle out. Huge opportunity for tactical AI training as a game.
What does AI safety mean to you, specifically? What kind of scenario would this solve, specifically? What kind of mechanisms would help, specifically? It seems that recognizing danger to humans (particularly in the general case) would itself be an AI problem.