Learning to Cooperate, Compete, and Communicate (blog.openai.com)
199 points by maccaw on June 9, 2017 | 36 comments



This is great. I've always thought intelligence can only be defined as an emergent property of self-replicating systems operating under stress, and this provides a good framework for that "stress".


> intelligence can only be defined as an emergent property of self-replicating systems operating under stress

That's a prescription for making one, but it isn't a good definition of intelligence. It doesn't tell us what to expect from it.

Personally, I prefer intelligence defined as extremely strong, cross-domain optimization power.


Pieter Abbeel (OP link author, with Igor Mordatch) explains his group's work [1] as a guest lecturer for Berkeley's CS 294-112 Deep Reinforcement Learning course.

[1] https://www.youtube.com/watch?v=f4gKhK8Q6mY&list=PLkFD6_40KJ...

This great talk starts with his work at OpenAI on neural net safety and adversarial images, moves on to the OP research paper, Emergence of Grounded Compositional Language in Multi-Agent Populations [2], and concludes with his work (with Andrew Ng) on reinforcement learning of helicopter flight and stunt controllers from human pilots.

The OP's multi-agents divide labour, and apparently collaborative plans emerge. That the goal-seeking agents split up and appear to dance to and fro from their goal, distracting the predators from their kin, is to all appearances coordinated and clever.

Dawkins's Selfish Gene presents altruism as an inevitability of genetic relatedness: the individual sacrifices, but the genes persist in siblings.

In this work altruism emerges purely memetically.

The Nash equilibrium of cooperation jumps the local minimum of selfishness in this prisoner's dilemma.
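
(For anyone who wants the game-theory detail behind that sentence, here is a minimal sketch of the standard one-shot prisoner's dilemma with classic payoff values; it is my own illustration, not from the paper. In the one-shot game defection is the only Nash equilibrium, which is exactly the selfish local minimum that repeated, multi-agent interaction can escape.)

    # Standard one-shot prisoner's dilemma (illustrative payoff values).
    C, D = 0, 1  # strategy indices: cooperate, defect
    # payoff[my_move][their_move] -> my payoff, with T > R > P > S
    payoff = [[3, 0],   # I cooperate: reward 3, sucker's payoff 0
              [5, 1]]   # I defect:    temptation 5, punishment 1

    def best_response(their_move):
        # The move that maximizes my payoff against a fixed opponent move.
        return max((C, D), key=lambda my_move: payoff[my_move][their_move])

    for their_move, name in [(C, "cooperates"), (D, "defects")]:
        mine = "defect" if best_response(their_move) == D else "cooperate"
        print("If the opponent " + name + ", my best response is " + mine)
    # Both lines print "defect": (D, D) is the one-shot equilibrium, yet
    # (C, C) pays 3 each instead of 1 each -- the gap cooperation closes.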

Multi-agent environments are difficult to learn, with many false minima for the learners.

This work hints that the loose coupling of language, rather than direct sharing of memories or genes, is noisy enough to find more global solutions that appear complex or 'plan-like'.

Maybe these AI's should be considered as planners, yet in a bottom-up immediate-heuristic emergent way.

[2] https://arxiv.org/abs/1703.04908


Am I the only one that finds this scary?

As I have repeatedly said, so far the intelligence we have been producing is NOT the kind that applies abstract logic rules to figure out meaning.

It is the kind that takes full advantage of computers' strengths: perfect copying and speed.

So good ideas are copied and propagate. Neural networks are just this on steroids... basically extracting signals from noise and doing a search in a space to maximize something, and storing the results.

Humans were able to transmit knowledge, then produce books etc. Now bits can be perfectly copied with checksums.

This isn't general intelligence in the human sense. But that's what makes it scary. It can solve these problems with brute force. Resistance soon may really be futile. Not just in running, avoiding capture and death. But also in ANY human system we rely on, including voting, due process of law, trust, reputation, family, sex, humor, etc.


No, I don't find this scary at all. My impression is that these algorithms are brittle, working only on very simple and structured problems. And these algorithms and their hyperparameters have to be specifically set by human researchers to match the structure of the problem. And if computer hardware progresses to reach orders of magnitude more performance, that may come at the cost of its present advantages in reliability, copying, and speed.


Still, better this is done in the open. Otherwise Russia and China will just continue in secret, which is definitely a world I don't want to live in.


Not sure what you want to say. The US can do it in the open; China and Russia can still do it in secret.

And, as a matter of fact, you would be naive to believe that there is no secret research happening inside US government agencies.


My point is there is no stopping this research now; open or not, neural nets just have too much potential. So I'm happy there is an open initiative that the big players support.


I think you are being too optimistic.

Big companies are just as much a problem as big governments. They are even more effective and efficient at collecting user data and reaching privacy-sensitive information.

AI in general has become a game for big players, that's in itself a worrying trend.


Big companies are not like government. Government is the law that everyone adheres to; companies simply aren't. A private company can't force you to do anything you didn't agree to (agreement = contract/interaction), while governments force people to do things they can't opt out of. Corporations are bad, but governments are much worse because they can use the information they have to force people. If you are thinking that lobbying means corporations are basically government and therefore it's all the same, that isn't true either: lobbying doesn't always happen, and even through lobbying corporations normally push for favorable, non-restrictive legislation; it doesn't allow them to force people the way a government does.


Nah, you're thinking too directly. Private companies can absolutely force you to do anything you don't want to, but they rarely do this - too costly in a system with a somewhat working rule of law - and if they do this, they do it pretty subtly. Examples from all across the spectrum:

- Strong companies can and do force weak governments to adopt whatever policies they want; see e.g. tobacco companies and their fight against health warnings on cigarette boxes. Or private companies and their private armies - it happened in the past (e.g. the East India Company) and happens today (why do you think people kill each other in Africa over tantalum they don't have the technological capability to use?).

- Lobbying you mentioned. Doesn't always happen, but happens frequently.

- Various low-level deals and bribery on e.g. city scale.

- Low-skilled jobs. Just talk to people who are stuck with those, especially in smaller cities. Employers can, and routinely do, make them do anything because they basically own them.

So yeah, private companies are not like governments. They can't just go and coerce you to do something in the name of law. But they have lots of less direct ways to coerce people if they really want to. The job of the government is usually to make them not want to.


on the flip side, big companies are driven only by profit and are often impossible for the people they affect to hold accountable.


The point being, top-line research still happens in a somewhat open market, where knowledge is shared and people - including those working on AI safety - can participate. The alternative would be countries and/or corporations developing all of this tech in secret.


I agree it should be done in the open. Doesn't make it much less scary.


It does make it less scary, because we progress into this 'new world' together as a society. Therefore laws and regulations can hopefully keep up and we don't end up in a dystopian nightmare scenario.


> laws and regulations can hopefully keep up

I strongly doubt that.


The question was "is this good", not "could this be done in a worse way".

> Russia and China

Those use the US as an excuse. Of course, these entities are only convenient monoliths when it comes to pushing the interests of very small groups within them, not when a poor American or Russian or Chinese asks a wealthy one to share.

Also, why would doing something in secret and doing it out in the open be a zero-sum game? I would think the opposite is the case: when you already have the infrastructure for 99.9% of it and only have to set the evil bit, that's a lot easier than doing it secretly when nobody else is doing it at all.


You're not the only one who finds it scary; there are massively popular books on the topic:

https://www.amazon.com/Superintelligence-Dangers-Strategies-...


I found it difficult to make it through the first couple of chapters.

Having just read The Gene, I found his analysis of the artificial-selection route to superintelligence very wrong - underestimating the complexity of polygenic traits and making (likely inaccurate) assumptions about their heritability. This type of thinking has a dangerous history.

The idea that a superintelligence could simply emerge from the internet, unexplained, also seemed pretty weak, but the first issue was bad enough that I found it hard to take anything else seriously (I didn't trust the author's analysis).

There are interesting issues with artificial consciousness, but I think they're in some ways similar to the issues with biological consciousness - the training data the neural net is exposed to and its underlying model can lead to minds that wouldn't be considered intelligent (and to dangerous outcomes as a result).


I would suggest giving it another go. There are few authors who have given this issue as much thought as Bostrom, and if the conclusions he draws are false, at the very least that opens the door for further conversation about the subject.


As someone who is teaching themselves machine learning: solving problems with "brute force" doesn't actually happen, and neural networks are a way around using brute force. It seems that you are caught up in the hype around machine learning. Systems that can copy ANY human system are very far off. Also, there are human systems where AIs would not be found acceptable.


Just to be clear: I didn't say they can copy any human system. I am saying they can find new ways to beat humans or groups of humans at the games we have set up. Chess and Go are just examples. Trust and Reputation are easily attacked. Voting is easily manipulated (look at gerrymandering and the increasingly shrill "fake news" claims.) Look, they are already better at making diagnoses than doctors.

I am saying that the machines won't really "understand" the human meaning behind things. They will just have "deep" insight from having explored a huge number of variations and extracted the combinations that maximize some goal. You think you are persuasive now? You think comedians are funny? Fast forward 17 years, when any software can deliver blistering putdowns to rappers and destroy the best comedian to the point where every human agrees the guy looks dull in comparison. The AI that can easily outcompete guys at wooing a girl online. And so on. Forget the Turing test. We are in for a future where programs simply "solve" the systems we set up faster and better than we can. And share that knowledge in compact form (such as artificial neural net calibration).

Most of our systems rely on the inefficiency of an attacker: that they can't instantly start up 3,000,000 businesses, or do Sybil attacks in voting, or destroy someone's credit overnight, or win an election. But that is going to change. And I am saying that's just in the near future. Humor and trust are a little further off.


Unfortunately, many of your assertions are inaccurate. As far as game playing goes, the way in which AI agents beat human opponents is simply a matter of computers being able to process more information in a given period of time. The selection and evaluation criteria are actually pretty similar to human decision-making, if a bit more systematic. I can expand on this point in another comment, if you'd like.

Chess and Go are not solved games, and will likely not be for some time (10+ years, for Go at least). Solving non-deterministic problems is even harder, and many likely will not be accomplished within our lifetimes [1].

There is no AI system that can generally make more accurate diagnoses than a physician. We're not quite there yet, and it will be a while. The closest we've come is some pretty advanced expert systems, but these require very heavy input from physicians [2]. Please provide sources to validate your claims about the progress of AI. Extrapolation from data is dangerous, extrapolation from falsity is ignorance.

As far as your take on understanding goes, the human "meaning" of things is inherently subjective. If we define "meaning" as some thing's value or role based on environmental context, then just like humans, any artificially intelligent system will only be able to determine meaning based on observations of the environment and both individual and collective experience.

It's interesting that you seem to be focusing primarily on social domains that have inherently "human" contexts. Beyond misunderstanding the point of this paper, and the way AI systems work, I think you're missing the point. AI is, and for the time being will remain, an extension of the human mind. The decision-making needs to be developed (at some core level) by a human. Those goals need to be set by a human. The experiences and observational capabilities ultimately need to be determined by a human. Even AI systems that build other AI systems need to be directed to do so, and with strict goals, set by a human [3].

I highly suggest you take a moment to read OpenAI's mission statement: https://openai.com/about/. AI is a tool. And like any powerful tool, it must be used responsibly, freely, and openly. OpenAI is pursuing this goal and making efforts to ensure that this tool is available to as many people as possible to avoid the abuses implicit in your concerns.

You obviously have an interest in AI and some knowledge of the field, but I worry your comments veer a bit towards fear-mongering. I suggest you use OpenAI and resources like it to enrich your knowledge of both the advances and the concerns of AI, because those are important, and we definitely need people thinking about these things.

Ultimately, you are absolutely correct that eventually these systems will probably have the technical capability to influence elections, the economy, and more. But the only way they will is under the direction of humans. It is not the machine you should fear, but the man behind it. The same thing you ought to have been fearing all along.

[1] http://fragrieu.free.fr/SearchingForSolutions.pdf
[2] https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1307157/
[3] https://arxiv.org/abs/1611.02779


Yes please expand. BUT you are attacking a straw man.

First of all, I did not say Chess and Go are solved!! I said that computers will be able to get solutions in our systems faster than we can, and of such high quality that it's pointless to even challenge them (for a doctor, or a poker player, or a man running away from a robot, or someone trying to prevent the ruin of their reputation, or someone trying to have a trust-based relationship with their neighbor in a crazy world).

It is also not true that it is "simply" a matter of computers being able to process more information in a given period of time. This is a major deal.

Imagine a textile worker 3,000 years ago making clothes. Now imagine the thread count of a cheap shirt today. No one back then would have thought of making shirts with such high thread counts.

With Chess and especially with Go, there is just a whole other level of intelligence. It misses the "human meaning" part, but it can manipulate a vector that involves 10,000,000 variables. It is in terms of such vectors that concepts like "cat", "dog" or diagnoses are formed. How do you explain why it made a given diagnosis in a way a human can confirm?

What I said was actually pretty straightforward: in 17 years computers will be able to rap and make better jokes than humans and convince better than humans. They will be able to hack our systems of reputation, trust, voting, legal argument and so on. What's scary is that we will essentially be giving up control to systems that don't really have the same concepts as we do, and perhaps never will. And if one day it all blows up somehow or gradually shifts to something unpleasant for humans, then that's freaking scary!

Just look at how we are already doing it when it comes to wealth inequality. As a society we are richer than ever but the inequality is greater. This is just a mild example. What if computers could do a lot more?


Happy to! And I'm not sure I am, but you're right in that I should be more explicit in my point (I have a tendency towards logorrhea). I think if I'm attacking anything, it's what I believe to be the subtext of your comments: 'AI = Bad = The Terminator'.

Pardon me if the following comes off as pedantic, I do not know your level of expertise and want to continue the discussion from the same point of base knowledge.

To my game playing point, and to start from a simple example, in Tic-Tac-Toe, an AI opponent will always beat humans because they can search through every possible move to end-game and always make the correct move. For games like Chess and Go, the search space is far too large to search to a terminal board position, so we need to use ML and heuristics to evaluate the value of a given board. These evaluation functions and heuristics are designed by utilizing human insight into the game. IBM's retrospective on Deep Blue is a fascinating read, and I suggest anyone check it out if they're interested in AI and game playing[1]. You'll see how they built their evaluation function with the input of chess grandmasters, particularly in implementing opening books and prioritizing center play in the early game. AlphaGo's system is not entirely dissimilar[2]. You'll note that significant advances were made as AlphaGo continued to play and learn from human opponents.
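
Roughly, the core of that search looks something like the depth-limited minimax sketch below (just an illustration; the `game` and `evaluate` objects are hypothetical placeholder interfaces, not Deep Blue's or AlphaGo's actual code). For a tiny game you search to the terminal positions; for a big game the depth cut-off is exactly where the human-designed evaluation heuristic takes over.

    def minimax(state, depth, maximizing, game, evaluate):
        # Value of `state`, looking `depth` plies ahead.
        # `game` is a hypothetical interface exposing is_terminal(), utility(),
        # legal_moves() and result(); `evaluate` is the human-designed heuristic.
        if game.is_terminal(state):
            return game.utility(state)      # exact value at end-game (Tic-Tac-Toe)
        if depth == 0:
            return evaluate(state)          # heuristic cut-off (Chess, Go)
        children = (minimax(game.result(state, m), depth - 1, not maximizing,
                            game, evaluate) for m in game.legal_moves(state))
        return max(children) if maximizing else min(children)

    def best_move(state, depth, game, evaluate):
        # Pick the move whose resulting position minimax rates highest.
        return max(game.legal_moves(state),
                   key=lambda m: minimax(game.result(state, m), depth - 1,
                                         False, game, evaluate))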

The point is that both these systems (and all game-playing AI agents that I know of) search through possible board states and make decisions that are fundamentally reliant on the intuitions of humans. Furthermore, humans make decisions like this as well. We generate all the possible decisions we can make, and rule out invalid choices either implicitly (through subconscious bias and knowledge) or explicitly (thinking through a decision). Computers do not have the advantage of implicit evaluation, so we must program that explicit evaluation and use massive amounts of data (for deep learning, anyway) with ML techniques to validate those intuitions.

Both Go and Chess are deterministic games. Given that even these games haven't been solved, the stochastic domains you described are orders of magnitude more complex, and we honestly need several breakthroughs before we come close (I have no sources here, this is just my opinion. I feel that most of the success of AI right now is the standard M.O. of a lot of academic CS: things 'work' for certain definitions of 'work'. The breakthroughs are great and impressive, but the constant extrapolation by pundits and the general media is both irresponsible and fallacious).

And yes, it really is a lot about processing power. Neural networks have been falling in and out of fashion since the 1950s. One big factor in the recent resurgence in popularity is GPU utilization and cloud computing (admittedly, the availability of data via the internet is another large factor, among others). That's why Google, Nvidia, Apple, and others are investing so much into ML-specific hardware.

And let's not kid ourselves: training any ML model takes a lot of time and a lot of manual adjustment of hyper-parameters. We're talking about possibly hundreds of hours of manual input for a single model (novel ones mostly). That's why every minor breakthrough merits a white paper (sort of joking, sort of not...)!

I think we're making the same argument with your linear algebra example: that machines can't reasonably replace humans. My amended version of that argument is that machines can and should extend and augment human capability. Despite the linear algebra that happens, any form of decision-making and cognition is in some way designed by a human. So despite the vectorization of the world (as seen by an AI), they will process it through the lens of human cognition, because I don't think we can build systems that don't somehow stem from our own cognitive processes.

As to your specific fears, I seriously doubt we'll be able to make enough progress within 17 years for AI to dominate those fields completely. I agree that AI will probably become a presence in many of those domains, but I do not think we will be "giving up control". Remember, these systems will be operated by individuals. So, I would say that there is some evidence to suggest that within 10-20 years, humans will be using AI to produce higher-quality art, jurisprudence, etc. It is also true that this raises the possibility for humans to abuse this technology, but this is inescapable for almost any human achievement. I think openness and transparency are the best safeguard against this possibility, and I would encourage everyone to vocally oppose any integration of AI into public systems without extreme transparency.

Beyond that, humans have been and always will be the cause of wealth inequality. Also, you provide no evidence that inequality is greater. In the 21st century, inequality has increased [I don't feel like sourcing this, but Google is reasonably good here], but I would like to see research confirming that we're worse off than at the heights of feudal society, or other equally tyrannical periods of human history. How would you foresee AI contributing to wealth inequality? I only see AI as a contributing factor to increasing wealth inequality if it remains in the hands of a few.

I've definitely rambled here, but I blame that on the few drinks I've had. I think my point still stands; humans kill people, AI doesn't kill people (and we're at a point where I think we can ensure that it doesn't).

As a side-question, what exactly is significant about 17 years? Is there some prediction out there that uses this number, or was it an arbitrary number?

[1] https://pdfs.semanticscholar.org/ad2c/1efffcd7c3b7106e507396...
[2] https://storage.googleapis.com/deepmind-media/alphago/AlphaG...


Drinks? Fun. But the one area where I would like to push back on your assertions is this:

"these systems will be operated by individuals"

maybe and maybe not. In some major sense, even today's systems are bigger than any one individual.

Just because something was designed by people doesn't mean they will be operating it years from now.

There was a period where "centaurs" - combinations of grandmasters and computers - would beat computers. Judging by Kasparov's latest book, he still thinks that's the case. But where is the evidence?

Eventually doctors will just press a button and a diagnosis will come out, along with a dietary program. They will have only a very vague idea as to why. This is actually too conservative. There will be no doctor and no button. The system will know exactly when and where to intervene. Humans will live in a zoo, being taken care of the way animals are now. And this is the rosy picture.

Already, Watson can outperform people and we don't have a great way to explain why. Any more than the proof of the four-color theorem is an explanation of why.

Explanations are reductions to simple things. We humans derive meaning from relatively simple things with few moving parts. Something that requires 10,000,000,000 moving parts to explain may as well be "chaotic", or is not really explained. But if predictions can be made far beyond what simple explanations allow, then that's a major thing.

I think that humor, court arguments, detective work etc. can all be automated in this manner. And then there is also the access to all the cameras and so forth.

I'm just saying that our systems were designed with the idea that an attacker is inefficient. That assumption is going to break down.

It doesn't have to be the Terminator. It just means computers will write better books, jokes, etc. a million times a second and devalue everything we hold dear. They will first be wielded by individuals - at least that's a comfort. But later, the automated and decentralized swarms are the scariest part, because they are so totally different from us in goals and everything else too.


Human skill (and intuition/wisdom?) improves and is sustained by practice in the domain. The real unknown is how things will pan out when automation reaches a point where we humans just do not bother putting in the hours on many of the trivial tasks, or simply lose touch. What happens if a driver relies entirely on the autopilot system and gradually loses the skill to drive? Stuff does fail, and complex systems will fail in unknown ways.


"Extrapolation from data is dangerous, extrapolation from falsity is ignorance."

Beautifully stated.


Upvoted, but I wonder about this statement:

> The decision-making needs to be developed (at some core level) by a human.

With neural nets you essentially throw a ton of data at them, and they get better and better at recognizing certain patterns, which they can then 'see' in new data you provide. As this gets more and more advanced (less training data, and yet fewer and fewer false positives), we will start to stray into the area of 'emergent behaviour', where we really are no longer in charge of making the decisions.


I'm not sure where this idea of NN's as a black box came from. It doesn't really work like that in real world applications. Even basic multi-layer perceptrons require adjustment and fine-tuning. There are tons of hyperparameters to adjust, feature engineering to do, and even just cleaning your data sets is a non-trivial task that can't be completely automated (yet).

Also, training a model is not as easy as dumping data in. NN's often suffer from high variance, so you need to constantly make slight adjustments. This cycle of adjust-process-analyze is very time-consuming both in terms of computing time (even on Google's servers training can take a few hours) and human-time.

Sometimes you'll get lucky. You can build a NN that gets accuracy in the ~80% range for handwriting recognition with ~50 lines of code, if not fewer. But that missing 20% is critical for any important task, and getting there requires a lot of "parenting". And most times you won't be working with a vanilla NN, and you won't be getting more than ~50% to start with.
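
To make that concrete, a bare-bones version of that "vanilla NN in a handful of lines" might look roughly like the sketch below (my own illustration using scikit-learn's small digits dataset as a stand-in for real handwriting data; the hyperparameter values are illustrative guesses, exactly the kind of knobs that need the manual tuning described above):

    # Rough sketch: a small multi-layer perceptron on scikit-learn's digits set.
    from sklearn.datasets import load_digits
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier

    X, y = load_digits(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.25, random_state=0)

    clf = MLPClassifier(hidden_layer_sizes=(32,),  # one small hidden layer
                        learning_rate_init=0.001,  # needs tuning per problem
                        max_iter=200,
                        random_state=0)
    clf.fit(X_train, y_train)
    print("test accuracy:", clf.score(X_test, y_test))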

It's also important to note that NN's are not a panacea; in fact, they're often not the right tool for the job. They tend to be outperformed by simple statistical learning techniques in a variety of tasks. Deep NN's can do a lot, but require a lot of data and constant adjustment of hyperparameters.

The biggest advances and most impressive predictions these days come from a combination of techniques and models, and these ensemble methods require a lot of work on part of us humans. Ensemble learning is where the magic really happens, and by magic I mean tons of work.
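
For a sense of what that looks like in practice, here is a minimal ensemble sketch (again my own illustration, not any particular production system): a soft-voting combination of a few off-the-shelf scikit-learn models, where the choice of models, weights, and preprocessing is still all human work.

    # Minimal ensemble sketch: soft voting over a few simple models.
    from sklearn.datasets import load_digits
    from sklearn.ensemble import RandomForestClassifier, VotingClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import SVC

    X, y = load_digits(return_X_y=True)
    ensemble = VotingClassifier(
        estimators=[("lr", LogisticRegression(max_iter=2000)),
                    ("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
                    ("svm", SVC(probability=True, random_state=0))],
        voting="soft")  # averages predicted probabilities across the models
    print("ensemble CV accuracy:", cross_val_score(ensemble, X, y, cv=5).mean())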


> Unfortunately, many of your assertions are inaccurate.

Even more unfortunately, the overarching point, the wood you're not seeing for that group of trees you're dabbling with, totally stands.

> But the only way they will is under the direction of humans. It is not the machine you should fear, but the man behind it. The same thing you ought to have been fearing all along.

Yes, and? That means that until we've dealt with that, we're just helping others to own the technology that will ultimately allow them to kick away the ladder for good. Yes, technology is neutral, but the human world is as it is now, and how technology could be used in a completely different human world is used as an excuse too. damn. much.

> I worry your comments veer a bit towards fear-mongering.

You'll love this then:

> If machines produce everything we need, the outcome will depend on how things are distributed. Everyone can enjoy a life of luxurious leisure if the machine-produced wealth is shared, or most people can end up miserably poor if the machine-owners successfully lobby against wealth redistribution. So far, the trend seems to be toward the second option, with technology driving ever-increasing inequality.

-- Stephen Hawking, https://www.reddit.com/r/science/comments/3nyn5i/science_ama...

You will find many serious thinkers with similar concerns, and they're not fear-mongering, they're throwing pearls to pigs. And people get super squeamish as soon as any possible consequences are spelt out in detail; by the time it hits one, one is too busy drowning, and if one has spare resources to speak, nobody would listen to the "obviously jealous" loser. "Inequality" is a neutral word, but it contains all the atrocities of humanity. Whenever someone crushed a baby, robbed an old woman, or murdered millions of people, there was an inequality: one was helpless and without someone to stand up for them, the other was stronger and without someone to stop them. That's what inequality means: that bad shit happens.

Sure, meaning is subjective. Yeah, it's subjective. So are morals. Essentially, who is to decide whether Stalin or Hitler were monsters, since they were fine and dandy in their own eyes? You seem to claim that fear-mongering and ignorance are bad; isn't that also a subjective assessment?

> AI is, and for the time being will remain, an extension of the human mind.

Later on you talk about individual humans doing specific things. That's something real, "an extension of the human mind" is just rhetoric.

> Please provide sources to validate your claims on the progress of AI. Extrapolation from data is dangerous, extrapolation from falsity is ignorance.

Do you have any data on hard, mathematically proven safeguards? Or hey, prove that humanity cannot survive without AI (or actually, just without slowing some things while we sort out the power problems); but don't ask me to prove that humanity might die with it developed as it is with the current distribution of power first.

> OpenAI is pursuing this goal and making efforts to ensure that this tool is available to as many people as possible to avoid the abuses implicit in your concerns.

Ensure. Oh that's such a relief that these guys are making sure nothing bad can happen. No wait I misread, they're only ensuring to make it available to as many as possible, which is caveat number one, and even that with the intention of avoiding horrible abuses. That, without further qualifications or evidence, is about as convincing as me singing a song with the intention of turning the sky green with it.

I for one am not afraid, that's just wishful thinking. Try disgusted and bored.


yep. put simply: they will be better, once they are correctly configured.

the task doesn't matter. there's nothing that's off limits, in the long run.

the trick will be getting them to serve us on command, and to get them to hold back when we'd rather do something imperfectly and interpersonally rather than perfectly and computationally.


We'll never be able to have them serve "us", if for no other reason than that "we" have many diverse interests that are often antagonistic to one another.

The best we can achieve is getting some of them to serve us on command, to protect us from other ones serving other people.


I was honestly hoping this would be about communications within the OpenAI organization itself.


> a multiagent environment has no stable equilibrium

It does. https://en.wikipedia.org/wiki/Evolutionarily_stable_strategy
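
For a concrete illustration, here is a tiny replicator-dynamics sketch of the classic Hawk-Dove game (my own example, not the OP paper's setup): the population settles at the evolutionarily stable mix of Hawks, V/C, rather than churning forever.

    # Hawk-Dove replicator dynamics converging to the ESS mix V/C.
    V, C = 2.0, 4.0                     # resource value, cost of a fight (C > V)
    # payoff to the row strategy vs the column strategy: rows/cols = [Hawk, Dove]
    payoff = [[(V - C) / 2, V],
              [0.0,         V / 2]]

    x = 0.1                             # initial fraction of Hawks
    for _ in range(5000):
        f_hawk = x * payoff[0][0] + (1 - x) * payoff[0][1]
        f_dove = x * payoff[1][0] + (1 - x) * payoff[1][1]
        f_mean = x * f_hawk + (1 - x) * f_dove
        x += 0.01 * x * (f_hawk - f_mean)   # discrete replicator step
    print("Hawk fraction:", round(x, 3), "; ESS prediction V/C =", V / C)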


True in the general case, but the idea is to set things up (possibly by constantly changing the environment) so that there isn't one. This has happened at least once, when the brains of our ancestors tripled in size relatively quickly.




