OpenAI technical goals (openai.com)
220 points by runesoerensen on June 20, 2016 | 70 comments



I'm concerned that none of these goals involve AI safety. Instead, all of their goals are nearly the exact opposite: accelerating AI technology as much as possible.

Safety was one of the main goals OpenAI promoted when it was founded. Two of the four authors listed have publicly spoken about their belief that AI is an existential risk.

I'm not saying that a game-playing AI is going to take over the world. But it does demonstrate the risk: we still have no idea how to control such an AI. We can train it to get high scores. But it won't want to do anything other than get high scores. And it will do whatever it takes to get the highest score possible, even if it means exploiting the game, hurting other players, or disobeying its masters.
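
To make that concrete, here is a minimal sketch of what "training it to get high scores" amounts to (a toy tabular Q-learning update; the hyperparameter values are illustrative). The only thing the agent optimizes is the scalar reward; nothing in the update distinguishes a score earned by playing well from one earned by exploiting a bug:

    # Toy tabular Q-learning: the agent's entire "motivation" is the reward r.
    from collections import defaultdict
    import random

    Q = defaultdict(float)              # Q[(state, action)] -> estimated return
    alpha, gamma, epsilon = 0.1, 0.99, 0.1

    def act(state, actions):
        # epsilon-greedy: explore occasionally, otherwise chase the highest value
        if random.random() < epsilon:
            return random.choice(actions)
        return max(actions, key=lambda a: Q[(state, a)])

    def update(state, action, reward, next_state, actions):
        best_next = max(Q[(next_state, a)] for a in actions)
        Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])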

Now imagine they succeed in making smarter AIs. And their research spawns new research, which inspires new research, etc. Perhaps over several decades we could have AIs that are a lot more formidable than being able to play Pac Man. But we may still not have made any progress on the ability to control them.


Almost no one who actually works or has done serious research in ML is genuinely concerned about "malevolent AI". We are so, so, so far away from anything remotely close to that. Please stop trying to gin up fear and listen to the experts, who uniformly agree that this is not something to be concerned about.


"AI: A Modern Approach (...) authors, Stuart Russell and Peter Norvig, devote significant space to AI dangers and Friendly AI in section 26.3, “The Ethics and Risks of Developing Artificial Intelligence.”

https://intelligence.org/2013/10/19/russell-and-norvig-on-fr...

In addition, there is the AI Open Letter, which is signed by many "who actually works or has done serious research in ML", including Demis Hassabis and Yann LeCun. From the letter:

"We recommend expanded research aimed at ensuring that increasingly capable AI systems are robust and beneficial: our AI systems must do what we want them to do. "

http://futureoflife.org/ai-open-letter/

Are the experts concerned about a "Skynet scenario"? No. But there is certainly genuine concern from the experts.


Slight nitpick at "devote significant space"... AI: A Modern Approach is a 1000+ page book; 3.5 pages is a footnote by comparison.


The issue is not "malevolent AI": the probability of producing an AI with "malevolent goals" by random chance is so much lower than that of producing any AI at all that it's not even worth worrying about.

The problem is "any AI" whose goals are not positively aligned with, or respectful of, human society's values. In terms of Asimov's Laws of Robotics meme, the problem is not robots harming humans, but robots allowing humans to come to harm as a side effect of their actions. This is an important ethical issue that, IMHO, technologists in general are failing to address.

Unethical AIs are only a special case in the sense that they are believed to be able to cause much more chaos and destruction than spammers, patent trolls or high frequency trading firms.


Personally, I think the 'immediate' problem will be AIs being used as assistants by unscrupulous humans. Imagine being targeted by semi-sentient ransomware.

Edit: typos


I was going to quip something about metadata, drone strikes and automated targeting... but then I realized that the most likely immediate threat to life from AI is probably in trading and logistics algorithms. As in the efficient privatization of water resources, to the point where those too poor to be a "good" market have to go without, or the possible drastic results of errors in such algorithms (e.g. massive job loss as a result of bankruptcy). And then there's AI-augmented health insurance - mundane things that might ultimately take human life - and perhaps no one will even notice.


The fact they aren't concerned is exactly the problem. They should be. And in fact many are. See the SSC link posted below (http://slatestarcodex.com/2015/05/22/ai-researchers-on-ai-ri...), or this survey (http://www.nickbostrom.com/papers/survey.pdf). Certainly I am worried about it. There is far from "uniform agreement" that AI is safe. And appeals to authority would not be comforting even if there were an authority to appeal to.


A couple of things:

1. I take issue with the experts' lack of concern being cited as exactly the reason why people should be concerned. I dislike circular reasoning.

2. There does not exist uniform agreement that AI is safe; in fact, at least in German hacker circles, quite the opposite. But as far as I can see, the problems that AI risk evangelists emphasize are the wrong ones. Taking over the world, paperclips, etc.: not going to happen. For that, AI is too stupid for now, and the rapid takeoff scenarios are not realistic. But ~30% of all jobs are easily replaceable by current AI technology, once the laws and the capital are lined up. AI is also making more and more decisions, legitimizing the biases with which it was either trained or programmed because it is "AI" and thus more reliable (\s). And there is a great cargo cult of "data" "science" in development (separate scare quotes intended).

3. I am starting to dislike any explicit mention of fallacies, especially if they were used just sentences ago by the mentioner.


I agree with you, igk. AI is a threat right now and has to be countered. Now the problem is: is there an organization dedicated to countering current AI trends through government regulations to slow down AI research and prevent AI misuse? Or are we just going to talk about it on Hacker News and German hacker meetups?

Because at least OpenAI and other "AI Safety" people are trying to address this 'risk'. They may fail, but at least they can say they tried to deal with "strong AI". How about those worried about current "weak AI"? If the cargo cult spreads and we do nothing... should we get some of the blame for letting the robots proliferate?

Then again. Maybe it is impossible to stop or slow down these future trends. Maybe AI is destined to eat the world, and our goal should be to save as many people as we can from the calamity.


>I agree with you, igk. AI is a threat right now and has to be countered. Now the problem is: is there an organization dedicated to countering current AI trends through government regulations to slow down AI research and prevent AI misuse? Or are we just going to talk about it on Hacker News and German hacker meetups?

We should not slow it down. We should push forward, educate people about the risks, and keep as much as possible under public scrutiny and in public possession (open source, government grants, out of patents or at least in university patents).

>Because at least OpenAI and other "AI Safety" people are trying to address this 'risk'. They may fail, but at least they can say they tried to deal with "strong AI". How about those worried about current "weak AI"? If the cargo cult spreads and we do nothing... should we get some of the blame for letting the robots proliferate?

Are they though? All I hear and read (for example "Superintelligence") is about the "runaway AI"; very little is about societal risk.

>Then again. Maybe it is impossible to stop or slow down these future trends. Maybe AI is destined to eat the world, and our goal should be to save as many people as we can from the calamity.

Just like electricity ate the world, and steam before it... slowing down is not an option; making sure it ends up benefiting everybody would be the right approach. Pushing for UBI is infinitely more valuable than AI risk awareness, because one of the two does not depend on technological progress to work or give rewards.


No, putting everything in the public domain and handing out a UBI doesn't solve anything at all. It's like worrying about nukes and believing the best solution is to give everyone nukes (AI) and nuclear bunkers (UBI), because "you can't stop progress". And then let's hand out pamphlets telling people how to use nukes safely, even though we know that most people will not read the pamphlets and (since the field is new) even the people writing the pamphlets may have no idea how to use this tech. Any cargo cult would only grow in number.

Oh and our "free" nuclear bunkers have to be paid for by the government. There is a chance of the bunkers either being too "basic" to help most people ("Here's your basic income: $30 USD per month! Enjoy!") or being so costly that the program will likely be unsustainable. And what if people don't like living in bunkers, etc.?

We are trying to apply quick fixes to address symptoms of a problem...instead of addressing the problem directly. Slowing down is the right option. If that happens, then society can slowly adapt to the tech and actually ensure it benefits others, rather than rushing blindly into a tech without full knowledge or understanding of the consequences. AI might be okay but we need time to adjust to it and that's the one resource we don't really have.

Of course, maybe we can do all of this: slow down tech, implement UBI, and have radical AI transparency. If one solution fails, we have two other backups to help us out. We shouldn't put all our eggs in one basket, especially when facing such a complicated and multifaceted threat.

>Are they though? All I hear and read (for example "Superintelligence") is about the "runaway AI"; very little is about societal risk.

You are right actually. My bad. I was referring to how AI Safety people have created organizations dedicated to dealing with their agendas, which can be said to be better than simply posting about their fears on Hacker News. But I don't actually hear anything about what these organizations are doing other than "raising awareness". Maybe these AI Safety organizations are little more than glorified talking shops.


First, slowing down is not going to happen. Capitalism is about chasing efficiency, and automating things is currently the pinnacle of efficiency.

Second, there's no magical adjusting. Just as you can't simply adjust to having nukes in the living room.


It is also better to be far ahead of unregulatable rogue states that would continue to work on AI. Secondly, deferring AI to a later point in time might make self-improvement much faster and hence less controllable, since more computational resources would be available due to Moore's Law.



Contrary to popular fiction, my concerns don't lie with a conspiring super-AI. Rather, we should take countermeasures against suffering a death by a thousand cuts. Flash crashes in financial markets are a precursor to other systemic risks ahead. Maybe this is a distinction that helps people appreciate safety for AI more.


I am actually genuinely concerned about terrorists developing killer robots. There are already designs out there for robotic turrets that can utilize commodity firearms. I even feel uncomfortable thinking of all the nasty stuff you could build.


Yeah, this scenario is a million times more plausible. ML that augments human capabilities to empower bad people to do even more evil things.


We think safety is extremely important, and will have concrete things to say here soon.


Who are you?


CTO of OpenAI ;)


He might want to put that in his profile if he intends to speak in the name of OpenAI...


(Wasn't in my profile but just added. Thanks!)


He's a bot!


This is their opening line: "OpenAI’s mission is to build safe AI, and ensure AI's benefits are as widely and evenly distributed as possible."

Real AI isn't coming anytime soon, so it's better to focus on the market imbalance of Google et al., who are rolling up all the smart AI people and technology and have a huge head start. Having more companies exploring more business models that aren't ad-based, and hopefully are privacy-first or non-profit, would be much healthier.


It may be noted that safety isn't considered a "technical goal", but some other class of goal. And as others have mentioned, in the very short term, safety is inherent by the lack of actual thinking on the part of our "AIs" today.

But I think the biggest concern for OpenAI right now is not being behind the curve. If OpenAI is going to build safe AI, they have to beat everyone else to the level of AI that needs to be made safe!

To use an analogy from a favorite show of mine, Person of Interest[1], OpenAI has to build The Machine before someone else manages to build Samaritan.

It would be particularly scary if someone like Google or Facebook had a monolithic AI capable of accessing large percentages of our information, and influencing our lives using it, before anyone building more ethical AI was capable of doing so.

[1] Person of Interest is an amazing show; its series finale airs tomorrow on CBS. If you're interested in the topic of morality in artificial intelligence, and enjoy a good sci-fi show, watch this one.


> If OpenAI is going to build safe AI, they have to beat everyone else to the level of AI that needs to be made safe!

This is exactly the kind of arms race that the AI safety people have been warning about basically from the start.

Just saying, this does not fill me with confidence.


My theory is that OpenAI is trying to become the premier research outfit in the field, with all of the power that affords.

If they can get there before the arms race dynamic becomes an actual problem, then they'll have succeeded in largely divesting the major players of their technological advantage (presumably Google and Facebook will still retain a massive advantage in terms of data).

When state-of-the-art AI research is open, it allows everyone to have a more accurate pulse on potentially dangerous advancements.


Isn't an arms race an apt comparison, though, particularly if we consider general AI (of superior intellect to us, not the basic toys we have now) to be an existential threat to humanity? Look at our other one: nuclear weapons. After all the political pandering is done, the parties who actually decide how nuclear weapons are managed, what rules govern them, etc., are the very countries that have that power.

Conceivably, the organizations or entities which decide what rules AI must follow may very likely be those which can create the technology. OpenAI is at least a couple of years behind, what, a dozen companies? If OpenAI wishes for a seat at that table, they have a lot of catching up to do.


Just realize it takes a couple of seasons to get good. Don't quit after the first few shows.


I'd rather they focused primarily on existing safety concerns. Google/Alexa/Siri for example all require people to relinquish privacy and control to a third-party to get the benefits of AI. Can we get those same benefits while maintaining privacy and control of our data?

That's great if anyone can have a home robot, but can we have home robots that don't keep logs of everything that goes on in our house at undisclosed locations managed by a multinational corporation with unknown access control standards?

We already have AI safety problems, never mind hypothetical future problems.


The concepts of 'master' and 'control' may simply not apply here. I think the danger is that typical human ignorance and fear will slow what would otherwise be glorious, self driving cars that get me to where I need to be at 140 MPH average with no stops. Look, if it really hits the fan, we'll just fire off a lot of high altitude EMP's with local/analog targeting. Neutron bomb for AI. Unless they faraday cage themselves. Details.


In the opening paragraph, they mention missions of safety and wide accessibility.

Then they say:

> We’re also working to solidify our organization's governance structure and will share our thoughts on that later this year.

I interpreted that to mean that their governance structure will be key to the "safety" aspect of the mission.

Like you, though, I await their discussion of safety with great anticipation.


See, I feel the more interesting thing about AI safety is not the fear/possibility of a malicious AI, but the ethics of teaching AI things that we have yet to figure out ourselves.

A good example of this is the autonomous car, which is forced into the choice between protecting the human in the car or the pedestrians on the street. This is an ethical/moral question that most humans can't really come to a consensus about, yet we are expected to rely on a centralized set of programmers who will either force a choice through code or (through code again) come up with the process that makes the choice.

Oh, and even if a consensus existed about the dangers/non-dangers of AI, you would still have to examine your "expert opinion bias" to see if expert opinion is the only argument you have supporting your belief.

PS: we need more of the older semi-AI training games, things like RoboCode (with neural nets), NERO, etc. I've seen a few pop up over the years but they always fizzle out. Huge opportunity for tactical AI training as a game.


What does AI safety mean to you, specifically? What kind of scenario would this solve, specifically? What kind of mechanisms would help, specifically? It seems that recognizing danger to humans (particularly in the general case) would itself be an AI problem.


#4, solving multiple games with one agent, seems like a reasonably interesting step towards more general intelligence. An agent that can play a new game without any game-specific information other than "what can I do" and "what should I value" starts to sound a lot more like an agent that can solve non-game problems. Especially if that agent can play games that model real-world problems.
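
For illustration, this is roughly what that game-agnostic interface looks like in OpenAI's own Gym (a sketch using the 2016-era API; the environment name is just an example and the random policy is a placeholder for a learned agent). The agent sees only the action space ("what can I do") and the reward signal ("what should I value"):

    # Sketch of a game-agnostic agent loop with OpenAI Gym.
    import gym

    env = gym.make("Breakout-v0")             # any Gym environment would do
    observation = env.reset()
    total_reward = 0.0

    for _ in range(1000):
        action = env.action_space.sample()    # placeholder for a learned policy
        observation, reward, done, info = env.step(action)
        total_reward += reward                # "what should I value"
        if done:
            observation = env.reset()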


I find it really interesting that Goal 4 is a game-playing agent. DeepMind has been focusing on this since the beginning[0] and has actually made great progress as far as Atari games go[1].

And DeepMind is able to use a single agent rather than a different specific one for each game. I wonder if OpenAI wants to go in a different direction, even though RL has had considerable success. The other goals, by contrast, definitely need a lot of work before they are "real-world" functional, especially Goal 3. But of course that depends on their definition of "useful".

[0]:https://www.youtube.com/watch?v=rbsqaJwpu6A

[1]:http://arxiv.org/pdf/1312.5602v1.pdf


You're mistaken: DeepMind uses a different agent for each game. What's the same across games is the learning method, but the model is trained separately for each game.
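
To sketch that distinction (assuming the DQN setup from the Nature paper; build_q_network and train_dqn below are hypothetical placeholders, not DeepMind's actual code): the architecture and learning method are shared, but a fresh set of weights is trained from scratch for every game.

    # Same algorithm and architecture, but separate weights per game.
    # build_q_network / train_dqn: hypothetical placeholders for the DQN pieces.
    import gym

    games = ["Breakout-v0", "Pong-v0", "SpaceInvaders-v0"]
    trained_models = {}

    for game in games:
        env = gym.make(game)
        q_network = build_q_network(env.action_space.n)   # fresh, untrained weights
        trained_models[game] = train_dqn(env, q_network)  # learned from scratch

    # No single set of weights plays all three games; "general" here
    # describes the method, not the trained agent.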


I feel like I'm a bit confused about what constitutes a model. You basically start the RL process off with the same "agent". You could just group the learning agents together and have it automatically detect (or simply be told) which game has to be played.

I guess I wanted to say that there isn't much that has to be done differently when an agent is being trained for different games. And that makes it a general game-playing agent, doesn't it?


If you group the agents together and call it a single "general" agent, the research community (and pretty much everyone else) will call you out on your BS. That's the difference.


Them "calling out my BS" doesn't make it any less of a general agent if you train it on enough games. If you consider yourself a "general agent" who can play games with reasonable scores, I'll ask you this: if I give you a new game you've never played, how will you score compared to your favorite game, the one you played a lot in your childhood? With the favorite game, you basically remember the gameplay and use it when you play again. So is that me "calling you out on your BS" about your ability to play games, because you remembered your gameplay? That's the same way I'm suggesting a general agent remember its gameplay and use it to become a general game-playing agent.

I really doubt we will see an agent that can be an expert at a game without doing some computations that fall into a grey area people would consider game-specific computation rather than general gameplay. The general agent you are thinking of, one that can be the best at a game without any thinking about what its game-playing process should be, is a fantasy. It will definitely need a "gameplan", which it can get by simulating the outcomes without actually playing.


Humans can learn how to figure out the score, can learn the state/action space itself, and can learn new games without external intervention. All those things are hard-coded into a system where you just string together a bunch of agents trained separately on different games, and where another human has to add a new agent for each new game. Humans learn the gameplan and any game-specific features. We don't have programmers plugging them into us for each new game.


I feel like both companies are trying to "solve intelligence" by aiming for true AI, in which case there has to be some overlap in the work they're doing to get there.


Question:

"Build a household robot" is high up the list. That doesn't seem inherently 'general'.

Certainly, people have been working on that for years; there are all sorts of subproblems like vision, contextual reasoning etc.

It could be treated as a general problem, requiring a lot of 'common sense'.

But a team which sets out to optimize that particular goal, could spend years on relatively narrow tasks that get good performance returns on household chores (e.g. developing version 10 of the floor cleaning algorithm), but don't really make progress towards the problem of general intelligence.

For me, what was really interesting about the benchmarks that Deepmind chose (the choice of a selection of Atari games) was that they were inherently somewhat general.

Are you not worried that by putting a narrow domain fairly high up, you'll get distracted by narrow tasks, rather than making progress towards what's really interesting - generality? Won't it introduce tension to try and keep the general focus in the presence of a narrow goal, where you can get good returns by overfitting?


This isn't a particularly meaningful issue.

The problem you describe of falling into the trap of brute-force optimizing a narrow task also applies to the Atari games. In fact it applies even more so: it would be trivial for a lot of HN programmers to brute-force code an AI for challenging Atari games that deep learning still struggles against (like Montezuma's Revenge). But it would be nontrivial (difficult but still possible) to brute-force code a program for many narrow household robot tasks. You avoid this problem by... not hard-coding brute-force solutions/heuristics! The research community can smell BS very easily (HN, not so much).

A household robot is substantially (probably an order of magnitude) more general than Atari games, even for narrow tasks (obviously it is nowhere near the vicinity of the generality of AGI). The perception problem is tremendously more complex. The control/planning problem is similarly tremendously more complex.


>But it would be nontrivial (difficult but still possible) to brute-force code a program for many narrow household robot tasks.

How difficult are you talking about here? Similar to training game agents? I really doubt it.

I feel like the training problem is VERY hard in the case of real-world handling of arbitrary objects (a factory-like, fixed mechanical situation can be purely hard-coded and be much better than a human). In the case of virtual games you can just use a bunch of GPUs and accelerate the process, but it is a much more difficult problem in reality. The grasping ability we have with everyday objects is a marvel once you try to make a computer do it.

This might be of interest: http://spectrum.ieee.org/automaton/robotics/artificial-intel...


Atari game agents are trivial if you hard-code / do traditional brute-force search AI because there is no noise in observation and no noise in control, and the control is very simple (usually just up down left right, no torques or anything physically complex)
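
For concreteness, here is a sketch of the kind of brute-force lookahead that becomes possible in that setting (this assumes the emulator exposes save/restore-state calls along the lines of the ALE's clone_state/restore_state; treat the exact API as an assumption):

    # Greedy depth-limited lookahead in a deterministic emulator. This only
    # works because there is no noise: replaying actions from a restored
    # state always gives the same result.
    def best_action(env, depth=8):
        root = env.unwrapped.clone_state()        # assumed save-state API
        best, best_return = None, float("-inf")
        for action in range(env.action_space.n):
            env.unwrapped.restore_state(root)
            _, reward, done, _ = env.step(action)
            total = reward
            for _ in range(depth - 1):            # pad out the rollout with NOOPs
                if done:
                    break
                _, reward, done, _ = env.step(0)  # 0 = NOOP in the ALE
                total += reward
            if total > best_return:
                best, best_return = action, total
        env.unwrapped.restore_state(root)
        return best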

This is the state of the art of "traditional AI" (not deep learning) robotics: https://www.youtube.com/watch?v=8P9geWwi9e0

Most decent HN programmers could code an AI for an Atari game in a few weeks.

Again, I would encourage you to read the literature instead of speculating.


It seems like you've misunderstood what I'm trying to say. I'm saying your statement

>But it would be nontrivial (difficult but still possible) to brute-force code a program for many narrow household robot tasks.

is very wrong. It is not really possible to put Atari games in even the same class of difficulty as household chores. You've essentially agreed with my point and then said that I'm speculating.

>Atari game agents are trivial if you hard-code / do traditional brute-force search AI because there is no noise in observation and no noise in control, and the control is very simple (usually just up down left right, no torques or anything physically complex)

I never said that they are trivial. The point I'm making, again, is that you can't say we can brute-force even narrow household chores. They have a level of complexity: friction (which is a huge problem), elasticity, and even air flow can mess up the actions, and computers lack the power to account for everything. Whereas we have something called intuition (which, I may add, I'd encourage everyone interested in AGI to properly read up on, starting with Brain Games S4: "Intuition", which is on Netflix).

And it seems like you don't consider brute-forced solutions to be proper solutions. I agree with that, as will anyone who has common sense and has read a couple of Wikipedia articles. But RL is not exactly brute-forcing as we usually think of it, although it might look like it. We all employ brute-force learning in our own lives to some extent, although our feedback and thought processes are much more complex, so we feel we are acting purely on intelligent deductions made in our brains. We still need a couple of "brute force" attempts, although given how few iterations we need, you can't really call them that.

I suggest you read some literature too, and please point out where I'm speculating.

1. DeepMind's reinforcement learning paper: http://www.readcube.com/articles/10.1038/nature14236?shared_...


If you're agreeing with my original assertion that household robot tasks are more general and more difficult than atari games, great.


Almost. I'm saying the difficulty (of household tasks) is so much greater that they are in a different class of problems, and the two cannot be equated using a comparative adjective.


This is great! The goals seem reasonably ambitious and mostly doable over a few years.

I am surprised by #2: "Build a household robot". It's my understanding that efficient actuation and power are largely unsolved problems outside of the software realm. What's the plan for tackling stairs, variable height targets, manipulator dexterity, power supply, etc. in a general purpose robot with off-the-shelf parts? (Answering these questions may be part of that goal but maybe someone knows more on the subject.)


Pieter Abbeel has been working toward household robots for quite a while; given his involvement at OpenAI, along with that of several of his students, it's not surprising they'd be thinking along similar lines. The issues you mention are real, but current hardware is already capable of useful tasks if we had the software to control it properly. For example, here's a demo of a tele-operated PR2 performing household chores: https://www.youtube.com/watch?v=S2GAz5F03Ls#t=60s


I wonder if OpenAI could define and build against a robot API that hardware robots would implement in different ways. That way OpenAI could focus on the software and various other teams could compete to solve all the mechanical challenges you mention.
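
Something like the sketch below is what I have in mind (purely hypothetical; none of these class or method names come from OpenAI): OpenAI defines an abstract interface, and each hardware vendor ships its own implementation behind it.

    # Hypothetical hardware-abstraction API for household robots.
    from abc import ABC, abstractmethod

    class RobotAPI(ABC):
        @abstractmethod
        def get_camera_frame(self):                     # image from the head camera
            ...

        @abstractmethod
        def get_joint_angles(self):                     # current joint positions
            ...

        @abstractmethod
        def set_joint_targets(self, targets):           # command new joint positions
            ...

        @abstractmethod
        def set_base_velocity(self, linear, angular):   # drive the mobile base
            ...

    class VendorXRobot(RobotAPI):
        """One vendor's implementation, wrapping its proprietary driver."""
        def get_camera_frame(self): ...
        def get_joint_angles(self): ...
        def set_joint_targets(self, targets): ...
        def set_base_velocity(self, linear, angular): ...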


When it comes to AI/ML, most problems seem "doable over a few years". Yet in practice this is rarely the case. These are some extremely lofty goals. The team behind OpenAI is quite capable, but to say they'll have any of these done in a few years is quite a stretch. I'm guessing they may achieve one of the goals in a decade. But I'd love to be wrong.


It was an off-hand remark. I'm aware of the landscape, though perhaps slightly more optimistic. The first goal is simple enough and largely underway with the Gym. Significant progress has been made on #3 and #4 just in the last year but I agree that "a few years" is a bit brief. I remain doubtful about #2.


At least on stairs, I think this is a solvable problem: https://www.technologyreview.com/s/601240/an-impressive-walk...

In terms of off-the-shelf: http://makezine.com/2015/05/01/meet-stair-bear-adorable-clim...

I also vaguely recall seeing some demos of first responder robots and stairs. I don't know about the other issues you've raised...


Off-the-shelf robots are convenient for developing learning algorithms. When the learning algorithms are good enough to take advantage of more capable hardware, it'll make sense to build it.


Are the objectives of "OpenAI" in conflict with the interests of the startups applying to YC? OpenAI is building its own products/platforms, and insights acquired from AI/robotics startups that apply and share information about what they're working on might be used as a competitive advantage.

Is there an information firewall between what startups are sharing in the hope of investment and what is shared to advance OpenAI? If there is a firewall, how is it enforced?


Goal 5: Inventing the Next Paradigm

Any impetus into actually dreaming up what may come next after GPUs and Async DRL? Non-neural models, quantum computing based AI, optogenetic hacking ;)

Otherwise, excellent list!


I wish they had mentioned Asimov's three laws.


Is OpenAI willing to support true AI having the basic rights given to humans?

If so, why is this not one of the fundamental technical goals?

If not, why?


You should have a look a PETRL[0]: People for the Ethical Treatment of Reinforcement Learners :)

[0] http://petrl.org/


That was way more of an interesting read than I thought it was going to be. Thanks for posting that.


I would argue that we should avoid building sentient AI in the first place. Human-level or better problem-solving ability does not require the inclusion of sentience or independent agency.


Then I assume you're against any tech that would allow brains to be digitized, right?


Not at all; I'm incredibly in favor of doing so, and I would argue that a human mind running in a computer is still unambiguously human. The hard part comes in defining when a computer has reached that point. I'm arguing that we shouldn't develop computer AI that comes anywhere close to needing to answer that question; we don't need that to solve the problems that AI can solve.


Why would it be undesirable for AI to have the capabilities you described, but desirable for a digitized brain to have them?


> Why would it be undesirable for AI to have the capabilities you described

Precisely because of the ethical questions. I want problem-solving AIs to work on all the problems humanity needs solved, starting with "humans die" and then going on to lower-priority problems. I don't want AI to have goals and values of its own, which might potentially diverge from those of humans; I want it to serve humanity's goals and values. We can build a system capable of human-level problem-solving and well beyond, without actually creating a sentient being.

If, and that's a big if, we want to create an artificial sentient being, that's a separate problem with its own set of ethical concerns; that seems both more dangerous and far less useful than human-level problem-solving. (In the event we did create such a being, such a being would absolutely need to have the same rights as humans or any other sentient species; I'd just rather avoid having to define and draw such a line, and get distracted by the fight over that, when it'd be far more useful to have machines capable of solving problems.)

> but desirable for a digitized brain to have them?

I hope the value of preserving human life is self-evident. Humans already have those qualities and many more; I want to see human life last forever, with all its qualities.


Digitized humans' goals would potentially diverge from bio-humans' too. For that matter, the goals of humanity in general are pretty divergent. Divergence is good, not bad.


Not all divergence is good. Optimizing for [the elimination of all entities which have an idea of good] would be bad, and would be very different.



