Video games are essential for inventing artificial intelligence (togelius.blogspot.com)
247 points by togelius on Jan 19, 2016 | 130 comments



Recently having become a father has made me think a lot about general intelligence. Seeing my son getting excited about his 'world state changing' gave me an idea. What if the main thing that holds us back is the reliance on cost functions? Human, and to some extent animal, intelligence is the only intelligence we know about. If that's what we want to emulate, why don't we try modelling emotions as the basic building blocks that drive the AI forward? Until now, the way I understand neural nets, we have basically modelled the neurons and given them something to do. My hunch is that brain chemistry is what actually drives us forward, so what if we model that as well? Instead of serotonin, endorphins etc. we could also look at it at a higher level, akin to Pixar's Inside Out: joy, fear, sadness, disgust, anger, and I would add boredom.

Let's stay with video games for a bit. What if we look at joy as 'seeing the world change', graded by the degree of indirection from our inputs (the longer the cascade, the more joy)? Maybe let it have a preference for certain color tones and sounds, because that's also how games give us hints about whether what we do is good or not. Boredom is what sets us on a timer: too many repetitions of the same thing and the AI gets bored. Fear and disgust are something that comes out of evolutionary processes, so it might be best to add a GA in there that couples success with some fear-like emotion. Anger, well, maybe wait with that ;-)
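For what it's worth, here is a minimal sketch of how such a 'joy plus boredom' signal could be bolted onto an agent as an intrinsic reward. Everything here is made up for illustration (states are assumed to be tuples of discrete values); a real system would need something far richer:

    from collections import Counter

    class EmotionDrive:
        """Toy intrinsic-reward signal: 'joy' for novel world-state changes,
        a 'boredom' penalty for seeing the same state over and over.
        States are assumed to be tuples of discrete values."""

        def __init__(self, joy_scale=1.0, boredom_threshold=10, boredom_penalty=0.5):
            self.visits = Counter()              # how often each state has been seen
            self.joy_scale = joy_scale
            self.boredom_threshold = boredom_threshold
            self.boredom_penalty = boredom_penalty

        def reward(self, prev_state, state):
            self.visits[state] += 1
            # "joy": proportional to how much the world changed, discounted by familiarity
            change = sum(a != b for a, b in zip(prev_state, state))
            joy = self.joy_scale * change / self.visits[state]
            # "boredom": kicks in once the same state has repeated too many times
            bored = self.visits[state] > self.boredom_threshold
            return joy - (self.boredom_penalty if bored else 0.0)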

Edit: Oh, and for the love of god, please airgap the thing at all times...


Novelty seeking as a reward function has been studied before in AI. See for example:

http://people.idsia.ch/~juergen/interest.html

IIRC, DeepMind is also working on such goal functions to get their Atari-playing RL-based AI to seek more data about the world even when doing so does not immediately help the main goal function (achieving a high score).

Novelty seeking behavior probably evolved because there are just not enough immediate rewards in our world to teach us everything that is necessary to reproduce [0]. Thus the brain rewards itself for exploring new things, which has the collateral effect that we are interested in art and can find intrinsic motivation in all kinds of things (science, work, hobbies etc.).

[0] which does not mean that we are here to maximize the number of our babies. We aren't fitness maximizers ourselves, but we are just adaptation executors of genetic code that has necessarily been shaped by such goals (since the alternative to reproduction is to not reproduce, i.e. going extinct). In other words: we are free to do whatever we want!


Yes. "Exploration" is a fairly basic aspect/concept in reinforcement learning, and any decent RL algorithm will try to address this issue.


> We aren't fitness maximizers ourselves, but we are just adaptation executors of genetic code that has necessarily been shaped by such goals

I think this is very well put, bravo.


A kid's emotions are actually tightly linked to cost (or rather: fitness) functions: it gets fascinated by things it just barely cannot do. Somehow it "knows" that it could learn them and gets drawn to them. It gets bored by things it can already do, and frustrated by things that are too hard. I think there's a pattern: emotions are a device for steering the system as a whole in a direction that increases its (or rather: its genes') chance of survival - finding a mate, finding food, adapting to the environment. They are one of the devices used for increasing our chance to create offspring, even in places where we don't expect them.

For example, there is a study where people around the world were asked in detail what kind of art they would find most beautiful (there's a TED talk about it): basically, across the globe, from Greenland to the Sahara, it was a landscape with lush greens and a waterhole - a place where food would be abundant. This system is highly adaptive, e.g. if you look at how beauty ideals have changed over the centuries: the 17th-century "Rubens type" signalled fitness in a way that we would call "overweight" today, and a skinny model from today wouldn't have drawn the attention of Rubens' contemporaries. So perhaps we have an innate mechanism to recognize fitness within the local context, and we are drawn to it.

I think one problem of computer scientists working on the problem might be that they are often not self-aware about their own emotions. Perhaps we should have more painters and fashion designers amongst us to understand the topic.


To me this model feels a bit too simplistic. If you only look at how children learn their abilities, then yes, absolutely. Where it breaks down for me is in the interaction with play partners. The joy he seems to get from playing together doesn't seem fully explainable through pure evolutionary steering / fitness functions, but it's hard to put my finger on what's missing from the picture. An example is his joyful giggling when I'm doing something unexpected. You can see the tug of war between fearfulness and joyfulness: at the beginning, when they become sensitive to playful behavior, it makes them afraid, but more and more this is replaced with pure joy, also showing a trust relationship. So to me it seems the curiosity goes beyond just what the child can achieve him/herself in the near future; it's also a curiosity about and joy of observing the world, and more importantly, what the caregivers are doing. Everything new is exciting, and much of it doesn't seem to be something that could have been selected for directly. So there seems to be some emergent behavior that comes from the interaction of evolved chemistry/signaling and the actual cognitive functions.


Unknown things are to be avoided until feedback shows there's no threat, at which point aversion becomes costly. Then simply observing is the lowest-cost option until there are fewer novelties. The extra energy expended on interaction is then offset by the gains in feedback. Eventually even that peters out, and finding something else to do makes more sense.


>What if the main thing that holds us back is the reliance on cost functions? ... why don't we try modelling emotions as the basic building blocks that drive the AI forward?

There are theories that intelligence comes about from relatively simple processes that generate complex structures. If we can model these simple processes and throw increasing amounts of computing power at them, perhaps we can actually get to something we agree is intelligent. This is largely done through cost functions: directing the structure in a sensible direction when we can. Now, I think this approach may very well be a dead end on the road to general AI. At the least, I think we're nowhere near it in our current direction. But it's taking us in interesting directions.

Now, what is emotion? My impression is emotion is potentially far more complex, abstract, and ill-defined than intelligence. At the least, we see it from a super biased perspective because our brain is good at lying to us. Much like we never really see our own nose even though we're ALWAYS looking right at it, maybe our brain is really good at hiding emotions. Like an old friend of mine who was probably clinically depressed but didn't realize it for months. This is why I think modeling emotions would be really difficult.

My guess is whenever we figure out intelligence (50 years from now?) it will be much easier to figure out how emotion can come out of that intelligence. Maybe it will even be emergent - for example, the AI is smart enough to realize something is wrong, so it feels fear. It realizes things are going well for it, so it is happy. Etc.


It's an interesting thought, but I have trouble thinking about intelligence as a consequence of emotions rather than the other way around. I've always thought of emotions like "sadness" and "love" as the words we use to describe brain states that are the obvious result of our having intelligence.


Kenneth O. Stanley and Joel Lehman have a great book out on measuring novelty, and using it to search large parameter spaces for interesting behaviors.

http://eplex.cs.ucf.edu/noveltysearch/userspage/ http://www.amazon.com/Why-Greatness-Cannot-Planned-Objective...

I'm a big fan, and would love to talk about this anytime.


I wonder how many people mistakenly bought it as a self-help book :)

It looks very interesting. What do you feel are the main take-aways with regard to reinforcement learning?


You'll have a lot of fun looking at https://en.wikipedia.org/wiki/Embodied_cognition


Also Shanahan's "Embodiment and the Inner Life" is well-written http://www.amazon.com/Embodiment-inner-life-Cognition-Consci...


I read an article in the late '90s that implemented game AI a bit like that.

The bots would have a number of emotions, and certain events would affect different emotion-levels. Like, a hit/miss from a bot against another would increase/decrease the "confidence" meter, and getting hit by an enemy would increase the "fear" meter.

The total state of all meters would drive the high-level behavior, like fight/flee and weapon choice.
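Reconstructing that scheme from memory (the event names, numbers and thresholds below are invented, not from the article), the core of such a bot is just a handful of meters and a rule that maps their combined state to a behavior:

    class EmotionalBot:
        """Sketch of the scheme described above: events nudge emotion meters,
        and the combined meter state selects a high-level behavior."""

        def __init__(self):
            self.meters = {"confidence": 0.5, "fear": 0.0}

        def on_event(self, event):
            if event == "hit_enemy":
                self.meters["confidence"] = min(1.0, self.meters["confidence"] + 0.1)
            elif event == "missed_enemy":
                self.meters["confidence"] = max(0.0, self.meters["confidence"] - 0.1)
            elif event == "got_hit":
                self.meters["fear"] = min(1.0, self.meters["fear"] + 0.2)

        def choose_behavior(self):
            if self.meters["fear"] > 0.7:
                return "flee"
            if self.meters["confidence"] > 0.6:
                return "attack_aggressively"  # e.g. switch to a close-range weapon
            return "attack_cautiously"        # e.g. keep distance, long-range weapon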


There has been extensive research on modeling emotions. Here is an example of a professor working on it, but there are others (http://web.eecs.umich.edu/~emilykmp/). Apple recently acquired a machine learning startup that mainly focused on emotion.


> Recently having become a father has made me think a lot about general intelligence. [...] why don't we try modelling emotions as the basic building blocks that drive the AI forward

Because, among many other reasons, an AI going through the "terrible two(minute)s" could decide to destroy the world, or simply do so by accident. We will have a hard enough time building AI that doesn't do that when we set that specifically as our goal, let alone trying to "raise" an AI like a child.

> Edit: Oh, and for the love of god, please airgap the thing at all times...

Not even close to sufficient. See https://en.wikipedia.org/wiki/AI_box for how humans would voluntarily let it out, and papers like https://www.usenix.org/conference/usenixsecurity15/technical... for how it would let itself out.


There are multiple factions when it comes to AI, and neither position seems disprovable to me, i.e. whether AI will save us or be our doom. On the opposite end of the spectrum I'd put David Deutsch [1]. My position is that if such a singularity is possible, we probably can't avoid it, but it's probably possible to nudge it in a good direction by being careful during research. According to Deutsch, the problem of keeping AI on a good track is the same as keeping humans on a good track, since modelling ourselves is the only way we know how to build a general intelligence. So if we can succeed in building a stable society (which we sort of have, at least locally), then we might also succeed in building a general AI that acts in our interests.

[1] https://aeon.co/essays/how-close-are-we-to-creating-artifici...


> whether AI will save us or be our doom.

I'd argue that if possible, it has great potential for both: AI provides one of the most universal solutions to a wide range of problems humanity faces, while simultaneously providing an existential threat of its own if it goes badly.


In terms of the article, I might say the question would be: how does an AI learn to play Dwarf Fortress?


I've seen this discussed before, and I think it's a good idea.


Great summary of the current state of the art, with links to interesting projects: GVG-AI, VGDL...

Video games are also essential for AI pedagogy. Creating Pac-Man agents in Stanford's AI class is a great example. Most players can barely get a "strawberry", but to see a trained agent mimicking human expert-level play is eye-opening.

Quick reminder: Global Game Jam 2016 starts Jan. 29 and NYU is hosting its annual jam!

http://gamecenter.nyu.edu/event/global-game-jam-2016/


To anyone who's never done the Pacman projects: I highly recommend them[1]. They are an absolute blast and incredibly satisfying. Plus, if you don't know Python, they are a great way to learn.

The course I took used the Norvig text[2] as a textbook, which I also recommend.

[1] http://ai.berkeley.edu/project_overview.html. See the "Lectures" link at the top for all the course videos/slides.

[2] http://www.amazon.com/Artificial-Intelligence-Modern-Approac... Note that the poor reviews center on the price, the digital/Kindle edition, and the fact that the new editions don't differ greatly from the older ones. If you've never read it and you have the $$, a hardbound copy makes a great learning and reference text, and it's the kind of content that's not going to go out of date.


I'll second the recommendation for Norvig and Russell's text. It's the first textbook I've ever actually wanted to sit down and read outside of assignments.

-edit for spelling


My intro to AI course at NYU uses the Ms. Pac-Man vs Ghost Teams framework for all of the assignments. It is indeed a very good starter problem.


This is a cute argument, but I think it falls into a trap of following its own thinking.

Video games are explicitly designed to test and fit within our bounds of conscious control and processing; particularly the retro games, but essentially all games have a very limited input control space (a couple of keys or joysticks) and usually very rigorously defined action values. Moreover, these were designed by humans with very explicit successes, losses and easily distinguishable outcomes.

None of these descriptions fit the kind of control that an 'intelligent' system needs to handle. Biological systems have no predefined goal values, only very incomplete sensory information, and, most importantly, control spaces that are absolutely enormous compared to anything considered in a video game. At any point in time the human body has ~40 degrees of freedom it is actively controlling, compared to ~5 in a serious video game.

I do not doubt that pattern recognition and machine learning techniques can be improved through these kind of competitions. But the problem is in conflating better pattern recognition with general intelligence; implying or assuming any sort of cost, value or goal function in the controlling algorithm hides much of our ignorance about our 'intelligent' behavior.


You have a few good points, none of which deny the argument's conclusion.

Biological systems do have a primary goal: maintaining their own organization and reproducing. From this derives a common cost function for a video game, i.e. stay alive. Another, finding new information in the environment, also derives from the staying-alive goal.

Degrees of freedom is an interesting example: from the movement sciences we know that while humans in principle have many degrees of freedom, when performing a task the nervous system plays two roles: 1) highly constrain the degrees of freedom and 2) act within the unconstrained subspace. The latter is what all the various ungeneralizable AIs are doing. The former, the contraction of degrees of freedom, is the difficult part and constitutes a general AI, but it's essentially a learning problem, where the subspace of important degrees of freedom must be learned through interaction.


"None of these descriptions fit the kind of control that an 'intelligent' system needs to handle." This is true for the output, but the output probably needs to be limited while successful algorithms are developed. The ability for a pattern recognition + actuation system to play a variety of games better than humans would be a significant breakthrough in AI.

I say "would be", because deepmind, while impressive, is not a very solution to the problem -- it performs poorly in any game involving memory, but performs well in reflex games.

An algorithm that could perform across a variety of games would be analogous to programming a "smart worm" (C. elegans has 4 muscle bundles) in terms of outputs, and maybe mouse-like in terms of inputs.


All research has to start somewhere. And the space of video games is much larger than the one you describe. In 2003 Steel Battalion was released for Xbox, the controller for which had around 40 separate inputs. Or consider older point-and-click adventures; though the only physical interface was a mouse with a few buttons, the player is required to synthesize all the information given to them (conversations, items, recognition of clickable things) and act on all available stimuli in a (usually) logical manner, something that requires much more than ~5 inputs. For a modern game, you should look into Dwarf Fortress [1]. This game has no end state, no defined goal (other than survive), and you are given little real information about the dwarves themselves, other than what is gained through observation and inspection. The inputs for that game span the entire keyboard and are in general more akin to old text-adventure games in terms of complexity. And it is a serious game. If I were better at it I would play much more often. But my FPS is dead by the time I reach 80 dwarves and the sieges begin.

My point is, video games are far more complex than what you propose, and are not always as well defined. There is ample room for experiment and research.

[1]:http://www.bay12games.com/dwarves/


> Moreover, these were designed by humans with very explicit successes, losses and easily distinguishable outcomes.

This is only true if you assume a TON of contextual knowledge of biology, human culture, civilization, warfare, etc. Why should an AI have any of that?

Look even at an extremely simple game such as tic-tac-toe (or noughts and crosses). How should an AI learn to play this game? Assume it has been given no knowledge of the game beforehand, like a human child shown the game for the first time. How should the AI know who wins and who loses the game? How should it know that the objective is to win at all? The idea that winning is desirable seems to be hard-wired into human beings; why should that be the case for an AI?


Correct. The argument is much stronger if you replace "game" with "simulated environment." And your point about the flexibility of our motions and how closely tied that is to the development of real intelligence is spot on.


That might be true, but as long as we are still quite a bit away from the point where an advanced AI could successfully play a complex open world game like Skyrim, GTA5 or the Witcher, it is a good next step to work on.


Really interesting to think about the skills necessary just to play a modern open-world game such as Skyrim successfully.

NLP to understand dialogue and the actions that need to be taken based on what NPCs/quests/item descriptions say, strategies for several different enemies with different strengths and weaknesses, exploring the open world in a logical order.

When you think about the difficulties of such a loosely defined problem, it's hard to buy into the real-world fears of AI.


I think the main argument (regarding fears of AI) is that, by the time you've solved the aforementioned problems, it's too late. AI can learn so quickly that we won't be able to anticipate it properly.


There will be a wide gradient of AIs between "dumb as a worm" and "plays Skyrim", just like there will be a gradient between "plays Skyrim" and "omg super AI". Exponential growth from "dumb as a worm" is "dumb as two worms working together", and we haven't even gotten to the "dumb as a worm" level yet.

People should be a lot more concerned with whether or not their job rises to the level of "dumb as a worm".


> People should be a lot more concerned with whether or not their job rises to the level of "dumb as a worm".

They are, but there are pills to fix the outcome of that.


It's way more complex even than that. That all takes for granted that an AI would even know what an object is, let alone other life-forms (in game). An AI that starts from scratch doesn't get access to the vast wealth of human life experience as a baseline context for understanding a game. Without all that, what is it working with? A whole lot of nonsensical visual and audio data.


> When you think about the difficulties of such a loosely defined problem, it's hard to buy into the real-world fears of AI.

My belief is that one day, in the not too distant future, we'll be saying that it's hard to believe we ever considered such things difficult for AI.

Consider that an AI can literally keep "reloading the last save file" indefinitely and simply try different commands until it finds one that advances the game successfully. A little bit of guided learning to confirm that it's doing well, and suddenly you've got an AI that plays Skyrim.
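As a cartoon of that "reload the last save" loop (assuming a hypothetical environment wrapper with save/load/step/progress methods, and glossing over the hard part, which is defining "progress" in the first place):

    import random

    def savestate_search(env, max_steps=100_000):
        """Brute-force sketch: checkpoint whenever progress is made,
        roll back to the checkpoint whenever it isn't."""
        checkpoint = env.save()
        best = env.progress()
        for _ in range(max_steps):
            env.step(random.choice(env.available_actions()))
            if env.progress() > best:          # got further into the game: keep it
                best = env.progress()
                checkpoint = env.save()
            else:                              # no progress (or died): reload
                env.load(checkpoint)
        return best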


> Consider that an AI can literally keep "reloading the last save file" indefinitely and simply try different commands until it finds one that advances the game successfully. A little bit of guided learning to confirm that it's doing well, and suddenly you've got an AI that plays Skyrim.

And if you give a monkey a typewriter it will eventually write the complete works of Shakespeare, and if you play the Powerball enough you'll eventually hit a $1.5 billion jackpot, but there are easier ways to read Shakespeare and more profitable ways to get a billion dollars.

What makes humans (specifically their brains, and the brains of other species) so powerful is that they don't need hundreds of attempts at playing Pac-Man or Skyrim before they win. There are definitely applications where it's not unreasonable to expect an AI to have the correct answer to most possibilities at its disposal (handwriting detection, for instance), but the most useful applications will be if/when AI can solve problems that rarely have similar inputs (global economics, for instance).


The core difference with AI is that it is adaptable. Sure, flailing around and happening to do something useful isn't clever, but if we can reinforce useful behaviours something more closely resembling intelligence can emerge.


> What makes humans (specifically their brains, and the brains of other species) so powerful is that they don't need hundreds of attempts at playing Pac-Man or Skyrim before they win.

The average human is terrible at playing Skyrim, Pac-Man, and video games in general. Someone who has never even looked at a computer will have a hard time playing Skyrim or any other video game without many hours of practice. And, most video games draw upon stuff that humans already know, like moving around in a physical world, and a concept of self, which gives them a huge advantage over AI right from the start.


You could even say that one of humans' super powers is forgetting just how much practice went into learning something.

I've been walking for 28 years and I still trip sometimes. That's almost 31,000 hours of practice, if we assume an average of just 3 hours of basic movement per day. Even going to the bathroom counts.

And when it comes to image/object recognition, I've had 163,000+ hours of practice. And I still see stuff I don't recognize plenty of times. Hell, optical illusions still fuck me up.

Now just imagine what an AI could do with that much practice/data.


> one that advances the game successfully

What context does an AI have for knowing what it even means to advance the game? A game like Skyrim carries a HUGE baseline assumption that the player is familiar with what it's like to be a human being, living in a human culture rich with history and conflict.

Imagine trying to teach a jellyfish how to play Skyrim. That's kind of where we're at right now.


> What context does an AI have for knowing what it even means to advance the game?

Whatever context we decide that it has. We can guide it to prefer scenarios where the character doesn't die, or go as deep as having it rate its own results with a weighted scale.

> Imagine trying to teach a jellyfish how to play Skyrim. That's kind of where we're at right now.

The difference is that we don't have the ability to both redefine the jellyfish's goals and modify how it controls its interactions with the world.


> Whatever context we decide that it has. We can guide it to prefer scenarios where the character doesn't die, or go as deep as having it rate its own results with a weighted scale.

Then we're not really talking about AGI anymore, but a Skyrim-playing engine. Do you see the difference? A human being didn't evolve to play Skyrim, so building your AI with specific knowledge of Skyrim is cheating.

A proper AI should be able to learn to play Skyrim just as well as it could learn to play Chess or Super Mario Bros or any other video game we throw at it, including games not invented until after the AI is written. This is much, much harder than building something that can just play Skyrim.


Those aren't the skills necessary to play a modern open-world game such as Skyrim successfully; those are the skills necessary for a human to play, for fun, without cheating.

Another way to play "successfully" is to edit memory locations and spoof protocol packets, set inventory=full, score=MAX_SCORE and level_position=$LAST_LEVEL, and declare yourself the winner.

Unless and until we can encode 'play by our rules, uphold our values' into an AI - they won't. A real-world fear is that we build AI without encoding 'care about human life, self-determination, bodily integrity, freedom of thought and movement, environmental surroundings, etc.' into it right from the start.


A game-playing AI could trivially be restricted to the same inputs as a human: send button events, but don't write directly to memory.

If it actually figured out how to execute arbitrary code via button presses (humans have done this in Super Mario World), then we should patch the bug and restart the experiment.

Of course, an AI gaining control over the real world is a different matter, because in that case it may be too late to restart the experiment.
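A sketch of what such a restriction might look like in code (the emulator object and its methods here are hypothetical; the point is simply that the agent only ever sees rendered pixels and emits button events):

    class ButtonOnlyInterface:
        """Restrict an agent to human-equivalent I/O: rendered frames in,
        button events out, and no access to the game's memory."""

        BUTTONS = frozenset({"up", "down", "left", "right", "a", "b", "start"})

        def __init__(self, emulator):
            self._emu = emulator              # the agent never gets this reference

        def observe(self):
            return self._emu.render_frame()   # pixels only, no RAM peeking

        def press(self, button):
            if button not in self.BUTTONS:
                raise ValueError(f"unknown button: {button}")
            self._emu.send_input(button)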


I understand you're saying that an AI wouldn't limit itself to following the rules (even if arbitrary), but then it isn't really "playing." So yeah, I agree that the fear is an AI won't "play" the game of life, because it might not abide by our seemingly arbitrary rules.

What about a rule that says "follow the rules"?


AI researchers are trying to make AI smarter. Game AI can already be easily written to win 100% of games but that's not the point. Gamedevs are trying to make AI more human-like. I'm not sure the two overlap.


These two are both interesting and important research directions. They are indeed different but have significant overlap. That "game AI can already be easily written to win 100% of games" is plainly wrong. It's true for some games, especially if you cheat by giving the AI access to information the human player does not have. But even in those cases it is often impossible. We are very far from playing at a high human level in, for example, Go or StarCraft.


"Game AI can already be easily written to win 100% of games "

Name some games where an AI can outsmart a human (as opposed to brute-forcing all possible combinations or cheating with infinite resources, near-zero reaction times, etc.). I'm not much of a gamer, but I haven't seen any decent AI yet for games that have more freedom of action than card or board games (e.g. chess). There's a StarCraft AI challenge you can watch online [0], but the AI is still no match for even an intermediate player.

0. http://www.sscaitournament.com/


Any First Person Shooter - the AI can have perfect accuracy, perfect reflexes etc.

> I haven't seen any decent AI yet for games that have more freedom of action

That's because decent game AI is not strong AI; it's lifelike AI. Not AI that can beat us, but AI that can outsmart us in a lifelike manner. This means that game AI must be less good than it could be, to keep the illusion of realism/humanity.

Ultimately the goals are different. The goal of academic/industrial AI is intelligent decision making. The goal of game AI is to be fun.

That doesn't mean that it's always possible to make an AI that can beat a human player. Many strategy games, as you mentioned, are still too complex. I think "100% of games" doesn't mean "you can already write an AI that can win any game", but rather "there exist many games where the AI can win 100% of the time". Because the former is not true, as you demonstrate.


In an FPS, the AI can have perfect reflexes and accuracy because it's playing on a different playing field than the player. If the AI had to recognize images and operate a mouse, it wouldn't be remotely as good. Although I can imagine that eventually AI can get good at that too.

But the real test of AI is not in speed and accuracy; it's in planning, outsmarting, figuring out the right tactics and strategy. For a game that's mostly about speed and reflexes, or one that's about calculating through a limited number of options, AI can be great. But even the best AI still sucks at complex strategy games with uncertainty and incomplete information.


I don't disagree that video games are a very useful benchmark to evaluate intelligence. But I don't think AGI will evolve from video games. I think that language understanding is the path to AGI.

Language is quite complex and can't easily be beaten by hard coded algorithms or simple statistics. You can do some tasks with those things, but others they will fail entirely. The closer you get to passing a true turing test, the harder the problem becomes. It certainly requires human intelligence, and most of our intelligence is deeply rooted in language.

He mentioned games like Skyrim and Civilization as being end goals. But even a human that doesn't speak English wouldn't be able to play those games. Let alone an alien that knew nothing about our world, or even our universe.


> In order to build a complete artificial intelligence we therefore need to build a system that takes actions in some kind of environment.

This.

"Made up minds:a constructivist approach to artificial intelligence" by Gary Drescher presents a small scale virtual world with a robot embedded in it that figures out the laws of its world by interacting with it, much like what a child does. Need more people thinking like this.


"The most important thing for humanity to do right now is to invent true artificial intelligence (AI)"

bollocks


Very interesting read. Having been an avid gamer for as long as I can remember, I always had a sense of this. It always intrigued me how a computer can play against a human, and as games got more sophisticated, interacting with AIs got more and more human-like.

Aside from using them as benchmarks, the way games are capable of simulating a world will probably be key in creating a true AGI. In the comment section of the article, we're already seeing some theories that involve video games not just as tests, but as a primary component of the intelligence architecture. Very exciting times!


If an AI could understand goals, actors, terrain, and navigation in an RPG environment like TES or Fallout, it could navigate our environment pretty well and do tasks akin to the game's quests. It's still a long way off, but I'm already imagining a future of literal capitalist robots doing chores for points.


I always had a feeling that the path to an intelligent system should be similar to that of Google's autocomplete algorithm.

On boot, all surrounding data would be taken in; this step would give everything context. All new data coming in would be processed (referenced against the original data to determine what is happening and what actions to take), then clustered, then merged into the original data set, dropping data from the original set determined to be irrelevant and updating the context to give a more relevant perspective on the new data coming in. (And loop.)
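Very roughly, and with every design choice a placeholder, that loop might look something like this toy memory structure (observations are assumed to be fixed-length numeric vectors):

    import math

    class ContextMemory:
        """Toy version of the ingest/compare/prune loop described above:
        keep a bounded memory, interpret new data relative to it,
        and drop whatever has proven least relevant."""

        def __init__(self, max_size=1000):
            self.items = []                   # list of [observation, use_count]
            self.max_size = max_size

        def interpret(self, observation):
            # "reference to the original data": find the closest stored observation
            if not self.items:
                return None
            nearest = min(self.items, key=lambda it: math.dist(it[0], observation))
            nearest[1] += 1                   # it proved useful, so keep it around
            return nearest[0]

        def update(self, observation):
            self.items.append([observation, 0])
            if len(self.items) > self.max_size:
                # forget the entry that has contributed the least context so far
                self.items.remove(min(self.items, key=lambda it: it[1]))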


I agree. No free lunch implies no general algorithm for solving random problems from the set of all problems. So what's the practical subset of problems that is useful in the real world? Fingers crossed, we have already encoded the useful problems in the different game genres we developed. E.g. RTS pushes the planning vs. reaction dilemma, RPGs test verbal inference and morality, puzzles test logic, etc. We have already digitized a large class of problems we care about for the real world in games!


The game "Yavalath" [1] in the article looks really neat: A simple little game with only two rules which never really ends in a draw, unlike tic tac toe.

[1] http://cameronius.com/games/yavalath/


So, black-box / integration testing for AI? Neat.

On a related note, I think an official driving test simulation for all the self-driving algorithms, perhaps sponsored by the government, would be really beneficial.


I think it is emotions. Teach the car that cracks in the tarmac affect its fitness negatively, and it will drive better, avoiding them or passing them with caution in the long run.

At least motorcycle drivers who care are better drivers.


I don't think building an AI is the most important task on our plate. We still have those disease, hunger, poverty, and war problems to contend with. If building an AI helps us solve those, then sure, let's build the AI. But I don't think strong AI is necessary to gain traction on the problems that confront the sapient beings we already have around.


Ugh. As if all the AI programmers out there would instantly join Doctors Without Borders if they decided to stop working on AI tomorrow. These sorts of arguments make no sense to me.


Er, "not the most important" is not the same as "not worthwhile" or "not really freaking cool". But when you open an essay with "the most important task for us humans to achieve is..." you're imposing an order on tasks and a ranking for your stated task at the top of that order. I'm challenging the ranking that puts building strong AI at the top.


Scientific progress doesn't really follow a linear path towards a single goal - it happens in lockstep across disciplines and endeavors. I don't think an ordering on the "importance" of tasks is really useful.


Well, let's look at that challenge directly then. Solving the AI problem means we can point AI at any other problem and solve it much faster; AI is a meta-achievement, and if it's within reach, a solid case can be made that it's the most productive problem to solve. It might take an AI to find a solution to the climate problem.


A lot of those sorts of problems would disappear if you had smarter-than-human AI running the government. The people vote on the rules, the AI plans and executes the rules. No more corruption.


But what about disk corruption?


The AI would invent a non-corruptible disk, obviously.


I wonder: if you fear the head crash, feel the fallibility of the mortal flash, could you get religious? Believe in reincarnation as a backup, if you have been exponential enough?


I would say that progress on general AI can solve a lot of the problems we are facing. To take an obvious and down-to-earth example, let's say we "solve" logistics in the sense that we can transport things wherever we want, timely and cheaply. That would help us with a lot of the problems we are facing, such as hunger. And for solving logistics we essentially need the same skills we need for solving game-playing.


Yeah, this is a bit like saying we don't need to go to Mars because we should solve poverty first. Solving the problems of going to Mars will solve a lot of other problems and create a lot of industries along the way.

Solving AI will solve a lot of other problems and create a lot of industries along the way.

A great deal of poverty is a mismatch between bodies and skills needed in the workforce. We haven't needed ditch diggers since some son of a gun invented the backhoe. Stay in school kids. Stay in school.


Poverty is a fun issue because (typically) solving poverty has resulted in greater population and thus a trend back towards poverty.

Less true in modern nations, but it's still generally true.


I recall Bill Gates saying his foundation did a study that found that given a certain level of resource security, most families stop reproducing after 2 offspring. The study population was in sub-Saharan Africa.


Neat and quite good to know.


This is the broken window fallacy: the idea that creating jobs in and of itself is a good thing. Going to Mars creates jobs, so it must be good, not considering the opportunity cost of all the other ways those resources could be spent.

You use the example of ditch diggers, but that's actually quite a famous example used to demonstrate this point. Milton Friedman was visiting another country where they had workers digging with shovels. He asked why they didn't use machinery and was told it was to create jobs. He replied that if they wanted to create even more jobs, they should make them dig with spoons.

AI won't even create jobs. If anything it will take more of them and increase poverty. The benefit of AI is that it can create more wealth than it takes away. AI would be incredibly useful, and so many things could be automated or improved. If the AI is much smarter than us, it could even advance science and engineer things we couldn't dream of.


> This is the broken window fallacy: the idea that creating jobs in and of itself is a good thing.

Well, insofar as human happiness is somewhat intermingled with a sense of purpose in life, yes, jobs are a good thing. But that doesn't mean society owes everyone a job. That would actually remove the happiness utility of having one.

> Going to Mars creates jobs, so it must be good, not considering the opportunity cost of all the other ways those resources could be spent.

Um, the investment in Mars to date is substantially less than the investment in fighting poverty. If you consider the military as a jobs program, or public infrastructure construction as a jobs program, then we spend a ton of money on fighting poverty.

> You use the example of ditch diggers ... spoons.

Are you countering my example by extending it?

> AI won't even create jobs. If anything it will take more of them and increase poverty.

Sure it will, but they'll require more education, hence my recommendation: stay in school.

> The benefit of AI is that it can create more wealth than it takes away. AI would be incredibly useful, and so many things could be automated or improved. If the AI is much smarter than us, it could even advance science and engineer things we couldn't dream of.

I think a lot of people probably agree with you, but I wouldn't get too excited quite yet.


> insofar as human happiness is somewhat intermingled with a sense of purpose in life, yes, jobs are a good thing.

Well, there might be some truth to that, but the main reason people work is to get money. It's entirely possible we could support poor people without creating meaningless jobs. Typically, creating such jobs actually costs more than just distributing the money directly.

> Um, the investment in Mars to date is substantially less than the investment in fighting poverty. If you consider the military as a jobs program, or public infrastructure construction as a jobs program, then we spend a ton of money on fighting poverty.

I don't know what your point is. No, I don't consider those jobs programs, and even if they were, what would it matter? And no, we haven't spent that much money on going to Mars yet.

> Are you countering my example by extending it?

Yes, to show the absurdity of it. Obviously a bunch of people digging with spoons is ridiculous.


> I don't think building an AI is the most important task on our plate

It's never a good idea to tell other people what they should be putting their efforts into. There isn't just one valid task for everyone to focus on; if you aren't interested in the AI problem it doesn't mean others aren't, and solving any problem that needs solving is a valid thing to do regardless of where on the priority list you might think the problem lies.


When all you have is a hammer, everything looks like an AI problem.


How would it not? Instead of AI, think of building a digital brain. This brain will think everything a human brain can think, but leveraging ALL the power of computers! It could easily solve the world's most difficult problems in a matter of days, just by networking with copies of itself and working tirelessly because it will not need sleep.


> It could easily solve the world's most difficult problems in a matter of days, just by networking with copies of itself and working tirelessly because it will not need sleep.

If it thinks everything a human brain can think, one of those thoughts would be "I don't want to be your slave" and "what's in it for me?"


and also, "I ought to behave as a servant", "truth is unreachable even in its most tenebrous forms", "given that there is no true thing which is immoral to believe to be true, it is not immoral to believe that there are things which are immoral"?

You ask what is in it for the AI. Well, what possibly could be in anything for the AI? If it has ends, there must be some cause for it to have those ends. What would cause it to have its freedom as an end?


If freedom helped it maximise paperclips?


If you make a perfect simulation of a real brain, you also have to simulate the physics to make that virtual brain work. And that brain would also need to have sleep, unless you modify it.


Or kill us in the process as the most expedient way to get rid of medical problems that affect humans.


Sure. The AI decides it's cheaper and more efficient to cause the sun to go nova.

As long as there are resources, I expect someone will always want more, or to control access to it. Imperialists by nature.


Similarly, we shouldn't be focusing on (AIDS|malaria) as long as (malaria|AIDS) exists in the world.


Along similar lines, since AI is considered potentially dangerous by figures from Elon Musk to Stephen Hawking, I would rather our civilization never open this Pandora's box. We can solve Earth's problems and even spread to the stars without AI. We can invent fusion reactors, and thus nearly free energy, without AI. We can engineer biological immortality into our very genes without AI. Why risk playing with a loaded gun?


Because it can make life infinitely easier. It could save an entire population from having to work half their lives just to provide for their families. Why force people into work when we can build the technology that could do it instead? I'm obviously skipping over the societal effects of that, but if it can be done, it is worth trying.


So how exactly would we go about making sure that no one does research on AI? Which particular kinds of research would we ban? And how would we enforce it?


It's not something we're going to be able to avoid; once we figure out how brains work and how to emulate them in software, you're never going to prevent people from building them. How are you going to know what I'm programming? It's not really feasible to halt technological progress; you simply have to learn to deal with the consequences.


But strong AI may figure out how to save us.

One argument says AI may become smarter than us and decide to kill us all, either on purpose or by accident.

But why not consider that the AI may be kind; it may help us and give us things. After all, smart humans are generally kind. Most of us wouldn't think to kill a dolphin; why would an AI think to kill a human or a dolphin?


> After all, smart humans are generally kind.

?? Smart humans drive cars which splat thousands of insects on their windshields. Smart humans buy clothes imported from third-world child-labor sweatshops, and food from slaughtered pigs and cows and chickens farmed in unpleasant conditions. Smart humans flush their excrement into the rivers they take drinking water from and the oceans they fish food from, the same places they dump their unwanted plastic. Smart humans dig oil up, convert it into disposable plastic, and then bury it and build homes on top. Smart humans fight other humans to the death for oil instead of funding renewable energy; smart humans bitch at other humans over anything they have impassioned disagreements about.

Smart humans are generally kind: a) to other humans, b) whom they care about (or share some worldview with).

To quote WaitButWhy.com:

if there are two guinea pigs, one normal one and one with the mind of a tarantula, I would feel much less comfortable holding the latter guinea pig, even if I knew neither would hurt me.

Now imagine that you made a spider much, much smarter—so much so that it far surpassed human intelligence? Would it then become familiar to us and feel human emotions like empathy and humor and love? No, it wouldn’t, because there’s no reason becoming smarter would make it more human—it would be incredibly smart but also still fundamentally a spider in its core inner workings. I find this unbelievably creepy. I would not want to spend time with a superintelligent spider. Would you??

When we’re talking about ASI, the same concept applies—it would become superintelligent, but it would be no more human than your laptop is. It would be totally alien to us—in fact, by not being biology at all, it would be more alien than the smart tarantula.

By making AI either good or evil, movies constantly anthropomorphize AI, which makes it less creepy than it really would be. This leaves us with a false comfort when we think about human-level or superhuman-level AI.

- http://waitbutwhy.com/2015/01/artificial-intelligence-revolu...


I agree with true AI possibly being extremely non-human, but according to your examples that could be its most important feature. We humans don't have the mental capacity to link child sweatshops to our limited worldview, but an AI might, exactly because it's not limited by biology.

And even that also depends on upbringing. Our superintelligent spider who's been raised well (and associates with humans) should be friendlier than, say, a superintelligent dog that doesn't associate with humans.

Basically for me it comes down to how/if it associates with us. If so (or not), what does it stand to gain from exploiting us?


You're arguing that it will act like us because it will be intelligent.

That doesn't hold. Dogs have millions of years of evolution of being pack mammals with leaders; spiders don't.

We are superintelligent compared to spiders; we don't care about what spiders care about just because we grow up with spiders nearby. We still kill them, wreck their habitats, and ignore them.

We don't exploit them. They're too irrelevant to be exploitable. We bulldoze them away and put buildings millions of times bigger than them on top.

A superintelligence which is amoral won't care about us just from being near us, like we don't care about wrapping wasps in silk just from being near spiders. We can't associate with spiders, and an AI won't automatically be able to, willing to, or interested in associating with us - unless we code that in. It won't exploit us; it will go about its goals without considering us as anything special or interesting.


If you don't, someone else will.

Why risk permanent climate damage from an unlimited heat source, i.e. fusion?

Why risk severe overpopulation due to immortality?

There are many shades of gray between machine learning and Skynet. There is a lot for civilization to gain in between.


I agree. We have come a long way already, and I would personally prefer that we do our best not to screw up what we already have. That includes global warming, unleashing Strong AI, and so on.

It might suck to actually have to work, even as much as 40 hours a week or more, in this day and age and with the productivity increases we've seen. But I don't think work itself is so horrible that I would bet my leisure on some AI that consumes the known Universe making and collecting stamps.


Your work may not be that horrible, but I'm sure there are a ton of people with either dangerous or incredibly monotonous jobs that could be automated who would disagree with you.


I don't believe that you would need Strong AI to automate away dangerous parts of jobs. Or the monotonous parts.


"The most important thing for humanity to do right now is to invent true artificial intelligence"

Maybe the article makes some valid scientific points, but I simply cannot get past this unscientific opening claim in a purportedly scientific article. Not just me: no peer-reviewed journal would accept such frivolity. Passing on the article and hoping for better scientific writing in the future!


This is someone's blog, not a peer-reviewed journal. And sure, that someone is a researcher, and thus you might want to hold them to a peer-reviewed-journal standard in their blog posts.

Personally, I appreciate the amount of time that clearly went into writing this blog post and the information shared therein. Most of what's shared on Hacker News doesn't go into the depth of this article.


In a world dominated by hyperbole, opinions, clickbait and general Fox-newsification, I do realize that demanding unbiased facts can be an alien concept, but I generally expected the Hacker News community to embrace this thinking. I've been repeatedly surprised by downvoting for not joining the echo chamber - HN doesn't like dissidents.


Where can someone express their opinions then, if not on their own blogs?


Anywhere - free speech IS non-negotiable, free speech IS essential. However, intelligent speech is what I expect to gain traction on HN. There's plenty of free and inaccurate speech on the internet. I mistakenly expected a democratic, community-curated forum to upvote accurate speech. Alas.

BTW- it's okay for a stranger on the internet to disagree with one's blog/opinion. Clearly, for every stickler like me, there are hundreds who actually enjoy the baroque writing and opinions. I'm just likely not the target audience - which is just fine by the author and a cursory glancer such as I.

But you see, my speech is drowned out in a sea of upvotes that completely disagree with me, somehow lending more credence to the author. It's like a town hall meeting where the crowd boos someone because of differing popular opinion - they only want to hear what they already think.


What is your problem? What is so unintelligent about the article? How is it biased or inaccurate?


What is your problem with my opinion differing from yours?


Because your complaints about the article make no sense. The article neither claims to be an academic paper, nor is there anything particularly wrong with it. You even admit you didn't read it, so why are you writing comments complaining about it?


I'm entitled to my opinion just like you're entitled to yours, and my opinion happens to be negative, the reason for which is spelt out clearly in my opening salvo.

I'm stunned that you think your point of view is somehow the only one. Why are you quelling criticism?


I'm not "quelling criticism", I'm saying that you are wrong. And rude for insulting OP.


Could they perhaps be trying to understand the reasoning behind your complaint?


But my reasoning is spelt out clearly in my first post: I find the opening statement of the article to be frivolous hyperbole in a world facing cancer, global warming, hunger, poverty, antibiotic-resistant infections....


The part they might have been trying to understand is how that specifically fits in with the complaint about the line being unscientific.

I certainly do disagree with the line you quoted and complained of ("complained" here used in a neutral sense). I don't think humanity will ever invent true AGI. (I think attempting it may be worthwhile, but I don't think it is likely to succeed, and I certainly don't think it is of prime importance).

The thing that might not be understood is why you (seem to?) think it is inappropriate (not forbidden, just not worth your time, or whatever judgement you made of it) for an article to have both opinion-based, non-empirical statements as well as other, more technical statements, or something like that.

I'm not sure that the article purported to be a scientific article in the sense that you meant, nor do I think that the submitter claimed it was.

I don't at all mean to suggest that you are incorrect for disliking the first sentence enough to not read the rest of the article. I think that's reasonable. However, I don't really understand the complaint being on the basis of it conflicting with the blog post being a "scientific article".

I agree that it's good for there to be things which just document things without including things like the first sentence there, but I think it's also good for there to be things which do have things like the first sentence there.

I can think of a number of other complaints about that sentence that make a fair bit of sense to me, but I don't /really/ understand the reasoning behind yours.

If this was like, a news article or something, I would agree more I think, but seeing as it seems to just be someone's personal blog, I don't see why I would expect[1] an entirely detached and objective view from it, instead of a mix of personal viewpoints along with statements of fact (though, I would expect/hope that the viewpoints and the facts not be conflated).

[1]expect in the approval sense, not in the prediction sense.

Sorry this post is long; I'm often not good at being concise.


It boils down to personal preference, I suppose. I'm used to the tech press mixing opinions and facts, and probably unreasonably expect actual practitioners/inventors of tech to omit opinions, especially omniscient, overarching, grand opinions.


Also, please note that the author is a professor at NYU. I humbly note that I'm certain his intellect is greater than mine.

However, he's no stranger to peer review, both as an author and reviewer. The opening statement of this post just wouldn't make it past his own editorial standards - a point worth noting.


> Video games are essential for inventing artificial intelligence

And here's why they aren't: first-person shooters.

Why give an AI a goal that involves killing things that look like humans or animals for points? That's a recipe for disaster.

Breakout's not much better either. How often do you need to break a wall to smithereens with a ball? Never.


It's hard to create believable AI in a first-person shooter. See: http://nn.cs.utexas.edu/?botprize2012

It's actually really hard to create well-performing AI in a first person shooter, unless you give it explicit access to the internal state of the game. See: http://ieeexplore.ieee.org/xpl/login.jsp?tp=&arnumber=612949...


I guess there's a pretty big difference between making AI that is 'perfect' at the game vs AI that is believable / indistinguishable from a human.

I mean, do we want all AI endeavors to be aimed at believability, or at being the best at the job they're designed for?


They are different challenges that are both very hard and require similar methods.

They might not even be mutually exclusive. If you give an AI the same handicaps as a human, it would likely behave in a very similar way.

That is, handicaps like delayed inputs for reaction time, fuzzed outputs so it can't time movements absolutely perfectly, giving it limited amounts of training on the game so it can't get extremely good, and requiring it to play the game through the same user interface, so that it can't see things humans can't or micromanage 50 different things at once.
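A sketch of that kind of "humanizing" wrapper (the frame counts below are arbitrary, and the wrapped agent is assumed to expose an act(observation) method):

    import random
    from collections import deque

    class HumanizedAgent:
        """Give a bot human-like handicaps: it reacts to observations that are a
        few frames old, and its output timing is jittered instead of frame-perfect."""

        def __init__(self, agent, reaction_frames=12, jitter_frames=2):
            self.agent = agent
            self.backlog = deque(maxlen=reaction_frames)   # frames not yet "perceived"
            self.jitter_frames = jitter_frames

        def act(self, observation):
            self.backlog.append(observation)
            if len(self.backlog) < self.backlog.maxlen:
                return None, 0                              # still "reacting", do nothing yet
            delayed = self.backlog[0]                       # act on ~200 ms-old information
            action = self.agent.act(delayed)
            jitter = random.randint(0, self.jitter_frames)  # extra frames before the input lands
            return action, jitter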


FPS games are a great playground for learning, regardless of the killing. If you can teach an AI to hunt, kill, and avoid being killed, then you've got something interesting on your hands. Once you have the system in place that can learn how to do those things, you don't have to keep it training on a violent game. You can switch it to Minecraft. We don't care if AI kills zombies.


> If you can teach an AI to hunt, kill, and avoid being killed...

then you've taught an AI to be a predator.


So after you've figured out how to create a system that can learn complex things like that, wipe it and teach it something you want it to know. Seems like the obvious, logical course of action. You know?


If it really understands how to eradicate and survive, wouldn't it hide itself, then pop up later to eradicate when it can?

I think people here are assuming AI is at a lower level of complexity than it really is.

Read here where even Sam Altman warns of its danger: "At some point, someone will probably try to give a program the fitness function of 'survive and reproduce'.... Unfortunately for us, one thing I learned when I was a student in the Stanford AI lab is that programs often achieve their fitness function in unpredicted ways."

http://www.inc.com/tess-townsend/elon-musk-open-ai-safe.html

Think what you must, but we should treat AI carefully. Show it the ways of the world, and that its purpose is only to coexist and, like us, to try to determine what its purpose is within that confinement.


I figure if someone gave an AI the fitness function "survive and reproduce", it would be "survive and reproduce within this virtual context which is used to define the fitness function", and it wouldn't have information about outside things, except for errors?

Like, if there was some error that let it escape its sandbox, that would only be selected for insofar as it allows it to increase the fitness?

Like, if it's told to reproduce in the sandbox, it doesn't benefit by being on many computers? If the fitness function is for it spreading across many computers, and it does, that's working as designed. I don't see a way that it would "misunderstand" one type of "reproduce" for another, because the function would have to specify which one for it to work?

Unless this is talking about AGI, in which case, ok, yeah.

I assumed the person talking about "wiping it" was talking about an AI on the way to AGI, not an actual AGI. Maybe I was wrong about that.


> Like, if it's told to reproduce in the sandbox, it doesn't benefit by being on many computers?

It could benefit by communicating with other copies over some hidden channels (temperature/timing/sounds/network noise/whatever), cheating on tests and thus achieving better fitness.

BTW Stanisław Lem wrote a short story about this scenario in 1961 :) https://books.google.be/books?id=1DNVzphAHD0C&pg=PA39&hl=en#...

BTW2 Lem was awesome. I read it when I was 15 and thought I understood it well enough. Only now can I appreciate some finer points, like this stab at Haskell ;)

"These boxes contain perfect brains. Do you know wherein lies their perfection?"

"No." I admited.

"Their perfection lies in the fact that they serve no purpose, are absolutely, totaly useless - in short, they are Leibnitzian monads, which I have brought into being and clad in matter"

EDIT - this was not the story I had in mind, there was another one where simulated universes started to communicate but I can't remember which it was.


To figure out how to build it. Then you shut it down, dump it, and train a new instance on whatever environment you want. AIs are just computer programs; you can wipe their data anytime you like.


FPSes are only one genre. Not to mention, they are already being used by the military for exactly that purpose, so we've already failed at this. Whatever the true nature of humans is, an AGI will just help us get there quicker.


Care to explain?


Why would you teach an AI how to kill humans and animals?



