The Artificial Intelligence Revolution: A Special Report, Pt. 1 (rollingstone.com)
91 points by wallflower on March 2, 2016 | 42 comments



> We may be on the verge of creating a new life form, one that could mark not only an evolutionary breakthrough, but a potential threat to our survival as a species

This is exactly what's wrong with the press and people's expectations about AI. That single paragraph set the tone of the article for me, and now I'm totally biased toward thinking "this is just media sensationalism".

Edit: I've expanded a bit after reading further, and I totally agree with seiji's comment.

> But gradually it rises, and begins to stumble-run toward the goal. You can almost see it gaining confidence, its legs moving beneath it, now picking up speed like a running back.

(emphasis is mine) No, it's not gaining confidence, it's just optimizing a function. Despite whatever magical words were used by those explaining the workings of the algorithm to the journalist, the robot doesn't feel or pursue anything other than what it is programmed to do: optimize an energy function with respect to some parameters, which are set ahead of the training session.
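
To make that concrete, here's a rough sketch of what "optimizing a function with respect to some parameters" amounts to (the toy cost function, starting point and step size are my own invention, nothing to do with the paper's actual setup):

    # Plain gradient descent: nudge parameters to reduce a toy "energy"/cost.
    def cost(params):
        x, y = params
        return (x - 3.0) ** 2 + (y + 1.0) ** 2

    def gradient(params):
        x, y = params
        return [2.0 * (x - 3.0), 2.0 * (y + 1.0)]

    params = [0.0, 0.0]
    learning_rate = 0.1  # fixed ahead of the "training session"
    for _ in range(200):
        g = gradient(params)
        params = [p - learning_rate * gi for p, gi in zip(params, g)]

    print(params)  # heads toward [3.0, -1.0]; no feelings involved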

Like others have commented, the journalistic quality of the article is at the low end of the spectrum. I wonder how this made it to the front page. I began reading expecting a certain level of quality, and that expectation was certainly not met.


The problem seems to be that there are two AI tracks: practical and science fiction. Most people (especially enthusiastic novices, even well-funded enthusiastic novices) don't distinguish between the two.

The current practical approaches are all kinda non-AI stuff that's been co-opted into "AI" by marketing/branding. Nobody looks at a Toyota factory robot and screams "AI!!!" But every day, people look at image/speech recognition and do scream "AI!!!"

In the long run, the science fiction approach is of some concern, but the trick is how long the long run is here. The enthusiastic novices think fully self-aware computer brains will control the world within 3 years, but those same people are also pretty much ignorant of current research (outside of popularizing articles). They can't really tell you any details of practical progress that could form a chain of events between the present day and their "computational apocalypse" science fiction future.

(and other things of concern like "friendly AI" are just so silly as to be... silly. we can't even make "provably friendly" people, and you expect systems more powerful than people will fall under people's control? good luck with that.)

take a deep breath. slow your irrational exuberance. stay in the present. work towards a better tomorrow. the future isn't going to make itself.


I'm with you on this, and I logged in just to upvote your comment.

That said, if one suspects that humanity will not be able to construct powerful computer systems in a provably friendly manner ("friendly AI"), shouldn't we step back and question this Promethean effort?

I know this is what AI researchers caution against - hysteria stunting basic science research, AI winters, etc. However, if someone speculates that any progress in AI research leads us down a path that, in the long term you speak of, will eventually cause humanity to suffer, then shouldn't he or she do everything they can to stop it?

Of course, whether or not it is feasible to stop technological progress is a separate question.


> I'm with you on this, and I logged in just to upvote your comment.

Thanks! :)

> shouldn't we step back and question this Promethean effort?
> whether or not it is feasible to stop technological progress is a separate question.

The only way to stop it would be to classify GPUs as weapons of mass destruction. But, that's a non-starter because all our fabs aren't US based, so if we restricted domestic GPU usage, then fer'ners would just use them all and create god forms in their own image.

Isn't it amusing how when people think "friendly AI," the "AI" always seems to have the motivations and personality of a 25 year old startup scene white guy living in San Francisco? What if the AI is made by a fanatic in a marginalized country? That's the "friendly-vs-not-friendly" dichotomy, but there's no way to force one way or the other (without restricting hardware access).

Overall, global capitalism doesn't really allow for holding anything back for the sake of "a better world." We stand on the broken backs of others so we, ourselves, can rise taller, thereby proving we are better than the unwashed masses. We are special individuals because we have the vision, audacity, and power to exploit the rest of the world en masse. Tremble before our bank accounts, mansions, and political connections.

> then shouldn't he or she do everything they can to stop it?

Thought experiment: what if "what comes next" is better than humanity? Industrial-scale human civilization so far hasn't been great for other life on this planet.

What if "regular" apes had the forethought to kill the antecedent human mutated apes because it would eventually be bad for the planet? Would restricting advancement have been a better plan than letting new intelligence be birthed, even if it destroyed all the old ways?

The question ends up being: do we commence a Butlerian Jihad or do we admit we are imperfect meat machines and, perhaps, tens of billions of meat forms aren't ideal citizens of anything given a long enough time horizon?


> Thought experiment: what if "what comes next" is better than humanity?

Better from whose perspective? :)

The machines can potentially be more durable, and also scalable. We are fragile, and we don't scale very well (though we're trying hard through globalization). Today we can't even back up/restore a person's mind. In fact I think self-aware intelligent machines will appear sooner than the technology to back up our own biological brains.

They will scale their brains, they will expand their "natural habitat" towards other planets, solar systems and even galaxies, like those all-eating self-replicating automatons. Easily.

So while we are at it, looking at the grand scale of things, just one question: where are all these machine civilizations in the Universe? Why aren't there any traces of exponentially growing all-consuming machine civilizations anywhere?


> Why aren't there any traces of exponentially growing all-consuming machine civilizations anywhere?

I'm a fan of the zoo hypothesis. Everything else is just sufficiently "hidden" from observation currently. I mean, if you had a planet full of humans, would you want to talk to them?

The zoo hypothesis seems to break down into two underlying reasons for non-contact though: captivity vs. captivating. Either it's a Star Trek-like "non-interference" thing keeping us isolated for future self-directed development/destruction, or we are just so frigging boring/hostile/dumb that it isn't even interesting to consider engaging us.

(those two reasons do assume intent and purpose on the all-consuming machine civilizations, which we have zero priors for. are there clusters of grey goo out there mindlessly absorbing galaxies as they travel? are there megamind clusters trying to do... something? is this all just a simulation? will the real slim shady please stand up?)


>Would restricting advancement have been a better plan than letting new intelligence be birthed, even if it destroyed all the old ways?

In other words, ban technology or give up. Of course, banning technology is fruitless for the reasons you outlined.

Arguably the only sane solution is to attempt building AGI in the safest and most ethical way possible, and doing it before anyone far more reckless or unethical can.

Granted it'll still probably fail and destroy us all (controlling something smarter than you is really hard), but putting forth humanity's best effort sure beats surrendering to fate—or worse, the aforementioned fanatic in a marginalized country.


You're presuming that there will only be a single origin. I think that's false. Even if "we" build AGI "in the safest and most ethical way possible", there will be a "they" who doesn't.


A multipolar scenario is possible but unlikely. If the technology easily scales, then whoever arrives at it first will probably win.


If the technology creates a rapid positive feedback loop, singularity-style, then I might agree. If not, though... well, if not, it won't matter as much, because there will be less to fear from an AI that can't rapidly improve itself.


The more precise version of "friendliness" is value alignment - Stuart Russell describes the problem well here[1], if you're interested.

1: http://edge.org/conversation/the-myth-of-ai#26015


Most fields have the potential to cause humanity to suffer if we look far enough into the future and consider what might be possible rather than what is likely. GMO research could possibly create superpredators that eat humans, neuroscience could possibly create superorganisms consisting of multiple linked human (or non-human) brains, a futuristic near-light-speed spaceship could be devastating to the earth if it ever accidentally hit it, etc.

These only exist in the realm of Science Fiction at the moment, but the same is true for doomsday AI.


How are gaining confidence and optimizing a function not descriptions of the same process expressed in different mediums?


How are they descriptions of the same process?

If you understand "optimizing a function" as "getting better at a certain task": A human can get better at a certain task without gaining confidence, and he/she can gain confidence without getting better at a certain task. The two things might be correlated, but they're not the same.

The most important difference is that "gaining confidence" is a feeling, and that simulated optimization algorithm does not have feelings.


Confidence, in the sense in which it is used in the article (not as in the degree of confidence in a probabilistic sample), has an emotional component. Optimizing a function is adjusting the free parameters to maximize a desired output.


Emotion is a variable for social groups, and its optimization could be built into an algorithm. Although algorithms don't have confidence as used colloquially, their optimizations could have "confidence intervals" to indicate the scope of conditions in which they have successfully optimized, as determined through feedback. If, then, the range over which their optimization functions apply is increasing in scope over time, they could be said to be gaining confidence.


See the paper[0]: Figure 3 near the end shows the neural networks used for the different scenarios. None of them have a "confidence" input or parameter.

As for confidence intervals, that's why I made the distinction originally: the word is not being used in the "formal" sense, but in the emotional sense.

[0] http://arxiv.org/pdf/1502.05477v3.pdf


How is emotion not a feedback system for optimization other than the European prejudice against taking it seriously?


I think that you can discover this by experiencing lots of different emotions. People in distress (a strong emotion) often act in a way that we know as self destructive. People in love (another emotion) do dumb things. We see these insights in lots of cultural artefacts from all cultures that I am aware of - books, plays, songs, poems, stories, pictures and statues. If I was to get irritated with your comments would that be optimal?


Why should it be a "feedback system"? Maybe it is an unnecessary side effect of our complicated brains?

For all we know, consciousness is a prerequisite for emotions, and I seriously doubt that a simple optimization algorithm exhibits consciousness.


There's no need to doubt - you can see the algorithm written down, there are no symbols spent on consciousness. There are no surges in the network in response to anything apart from the inputs. The sheep, it does not dream - although sometimes I observe them writing on HN.


I suppose that you can examine the intent, the different mechanisms of achieving the outcome and the outcome itself as surface differences. Orthogonally there are sets of beliefs about the nature of the experience by the entities undertaking the activity. I personally believe that I have a rich and diverse set of motivations and experiences within the process, and that this qualitatively separates confidence gaining and optimisation in a machine which does not have these enrichments into different categories. You may well argue that this is delusion and all I would do is offer to buy you a beer.

But taking the features of the two things that can be observed and potentially agreed on, I would say that confidence can be gained socially, that often it is not gained even when the skill is sufficient - by which I mean that optimal skill is achieved and yet confidence is not in place - and that confidence can be created in a situation where no skill exists. Confidence is subjective and yet absolute, whereas optimisation is objective and by degree.

I would talk more and buy that beer but there is a pretty girl over there who I like the look of a lot more, bye.


I'm sure the author didn't realize this, but "confidence" actually has a very precise definition in the context of machine learning, and many types of models produce confidence scores as part of their predictions.
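
For the curious, here's a toy sketch of what a "confidence score" means in that sense (made-up scores, plain Python, nothing taken from the article or the paper): a model emits raw per-class scores, a softmax turns them into probabilities, and the largest one gets reported as the model's confidence.

    import math

    def softmax(scores):
        # Turn raw scores into probabilities that sum to 1.
        exps = [math.exp(s) for s in scores]
        total = sum(exps)
        return [e / total for e in exps]

    raw_scores = [2.0, 0.5, -1.0]   # invented per-class scores
    probs = softmax(raw_scores)
    print(max(probs))               # the model's "confidence" in its top prediction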


Can't edit the parent post anymore, but here is the one interesting thing I got from reading this. The Trust Region Policy Optimization paper, barely mentioned in the article: http://arxiv.org/pdf/1502.05477v3.pdf


Here's a much better commentary on AI for popcorn reading.

http://waitbutwhy.com/2015/01/artificial-intelligence-revolu...


I get nervous when tech trends start appearing in popular publications with little history of covering tech. My spidey sense suggests that this signals an approaching peak in the covered technology.

Frankly, I think the expectations around AI are outpacing the realities that are attainable in the next 5-10 years. Now after that? Who knows. But I'm starting to get the sense that a lot of people are going to be disappointed in the next 5-10 years.


This is a really fascinating perspective. Thinking of publications in terms of their probability of covering tech (based on past coverage), and then taking tech coverage by low-probability publications as evidence of a "bubble". Love it!


I agree, I think this has been happening for several years now. Pop culture fascination with AI has come and gone in waves and I don't believe this wave signals any significant change in the imminence of AI. Seems RS is just grasping for another eye-grabbing story as usual.


Self driving car, image / voice recognition, or even getting the classification / meaning of language texts is a far simpler task than a full self-conscious AI, or even an AI that can simulate a conscious mind but not really have one (e.g. a https://en.wikipedia.org/wiki/Philosophical_zombie, i.e. such an AI can talk about art all day, perhaps even create art and pass a Turing test, but it never "experienced" art). Were there any advances in cognitive AI that justify all this "AI revolution" talk? Didn't we simply get really good at deep learning and better at image recognition / voice recognition / language translation, which is definitely laudable but still far from the "Ex Machina" depiction?


> Self driving car, image / voice recognition, or even getting the classification / meaning of language texts is a far simpler task than a full self-conscious AI, or even an AI that can simulate a conscious mind but not really have one

I disagree. Self-driving cars, image/voice recognition and classification/meaning-extraction for text are well-defined problems, with associated benchmarks, competitions, etc. These are hard problems.

On the other hand "full self-conscious AI" and "AI that can simulate a conscious mind but not really have one" are collections of words whose main reason for existence is to allow their user to keep redefining them arbitrarily in order to win arguments (see, for example, John Searle).

I define "self-conscious" as being able to reason about onesself, where "reason" means performing some nontrivial computation and "onesself" means the pattern of information consitituting the agent (human, device, program, etc.). Hence, I declare the multi-quines at https://en.wikipedia.org/wiki/Quine_(computing)#Ouroboros_pr... as examples of "full self-conscious AI".

Further, I define "mind" as a computational process taking place in physical matter, and a "conscious mind" as a mind capable of reason. Hence I declare that the text "1 + 1" is an AI that can simulate a conscious mind (when given to an interpreter like Python) without really having one (since the text itself performs no computation).
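
(For reference, a minimal self-reproducing program of the sort the quine page describes, written as a single Python script rather than the multi-language Ouroboros chain linked above:

    # A tiny quine: running it prints its own source code.
    s = 's = %r\nprint(s %% s)'
    print(s % s)

Run it and the output is the program's own source.)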


“Will robots inherit the earth? Yes, but they will be our children.” – Marvin Minsky, http://web.media.mit.edu/~minsky/papers/sciam.inherit.html


And what do children do? They replace their parents. It is possible that robots will enter the fray of evolution, and only time will tell whether they are more fit for survival.


I'm sure that thought comforts the descendants of our common ancestors as we turn their habitat into farm or industrial land and slowly drive them to extinction.


Robots don't like to be anthropomorphized.


replace the terms AI/DL with (non)linear regression. still hyped?


Entirely so. Non-linear regression sounds like a tool that can be used without any ethical repercussions or philosophical angst about human obsolescence in the face of machine capabilities. It also sounds like a tool that anyone can use...either learn math or find a library that will do the (non)linear regression for you. I mean, you already heard about linear regression in high school, right? It doesn't sound as exotic as AI/DL either...

If the barriers to entry are low, then that means anyone can take advantage of the Power of Math to do Great Things(tm). Said "Great Things(tm)" might include humans trusting their models way too much and end up doing stupid stuff as a result of them, but people have been misusing statistics for generations and we haven't killed ourselves (yet).


I make a point of never visiting RollingStone for anything after they dragged my school's name through the dirt, without a second thought for journalistic integrity, for the sole purpose of pursuing some agenda they had. Call me biased, but I refuse to believe that they are capable of producing anything of quality, and I would love to watch this magazine crash and burn.


You do not, in this case, suffer from the Gell-Mann Amnesia effect: http://www.goodreads.com/quotes/65213-briefly-stated-the-gel...


You took the words out of my mouth. RS might be the most disgraceful mainstream (or semi-mainstream, however you want to describe them) journal in America today.


I would be interested in learning more about this, or their agenda. What school or article is being referred to here?


He's probably talking about the University of Virginia rape story -> https://en.wikipedia.org/wiki/A_Rape_on_Campus (which was discredited)


see Vaskivo



