
The cult of 'AI will be totally fine' is a religious belief that brooks no dissent, and unfortunately has its claws in a lot of otherwise smart people.



That's a strawman. People who scoff at the "AI risk" cult don't believe that "AI will be totally fine". They just believe that it doesn't deserve the undue attention it gets, which distracts from other, more immediate and related risks: what Cambridge Analytica did for Trump, fake news spreading online, Facebook's filter bubbles, and so on. The position is to solve the problems we have right now, or will have in 10-20 years, not the problems we might have in 100 years.


I've been personally told by multiple different people who scoff at AI risk that it will be totally fine, so it's not a straw man.

The median expert estimate for when we'll be 10% likely to have human-level AI is ~10 years.

AI risk research didn't receive a penny of funding until the last few years, and is still funded at way lower levels than a lot of things that have dramatically less impact.

In nearly every debate on the topic I've seen (with a few exceptions), the people concerned about AI risk have carefully considered the topic, are aware of the areas where there's still a lot of uncertainty, and make clear, well-hedged arguments that acknowledge that uncertainty. Meanwhile, the people who scoff at it haven't read any of the arguments (not even in popular-book form, i.e. Superintelligence), haven't thought about most of the considerations, and have a general air of "assuming things will probably be fine". That's not a straw man, that's direct observation of the state of the debate. People are doing serious academic work on the topic and have thought about it very deeply; the standard HN middlebrow dismissal is both common and inappropriate.


That number didn't pass my sniff test, so I went looking. It seems to have come from here[1], which aggregates a series of surveys.

I first opened the "FHI Winter Intelligence" report: it's an informal survey of 35 conference participants, of whom only 8 work on AI at all (let alone being experts in AGI).

I then looked at the "Kruel interviews", which the site reports as giving a prediction of "2025" for a 10% chance, yet reading the interviews it's quite clear that many interviewees gave no prediction at all. Also, averaging answers from people ranging from Pat Hayes to PhD students seems suspect.

Is your number based on these reports?

[1] http://aiimpacts.org/ai-timeline-surveys/


Sorry, gave a citation in another comment on the thread but not in this one. I was referencing http://sophia.de/pdf/2014_PT-AI_polls.pdf


Are they actually experts, though? From that paper:

  “Concerning the above questions, how would you describe your own expertise?”
  (0 = none, 9 = expert)
  − Mean 5.85

  “Concerning technical work in artificial intelligence, how would you describe your own expertise?”
  (0 = none, 9 = expert)
  − Mean 6.26
Also, the whole methodology of aggregating the opinions of random conference attendees seems suspect to me. Attending a conference doesn't make you an expert.


You can restrict your attention to the TOP100 group if you prefer.


Yeah, but then you just have 29 responses in total.


The word "just" feels awfully out of place in this context. If you were doing some kind of broad-based polling of public opinion, of course you'd want a bigger sample size, but 29 of the top 100 researchers in a field sounds like a hell of a good sample to me, and well worth listening to.
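
To put rough numbers on that: a minimal back-of-the-envelope sketch, assuming (generously) that the 29 responses were a simple random draw from the 100, so a finite population correction applies. The figures are illustrative, not from the paper.

  # Margin of error for a proportion estimated from n = 29 respondents
  # out of a population of N = 100, assuming simple random sampling.
  # Illustrative only.
  import math

  N, n = 100, 29
  p = 0.5                              # worst-case proportion
  se = math.sqrt(p * (1 - p) / n)      # standard error, uncorrected
  fpc = math.sqrt((N - n) / (N - 1))   # finite population correction
  moe95 = 1.96 * se * fpc
  print(f"95% margin of error: +/- {moe95:.1%}")   # about +/- 15 points

Even under that generous assumption the uncertainty isn't tiny, but it's well within "worth listening to" territory.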


If they had been 29 randomly selected researchers, I'd agree, but since they self-selected, not really. The authors do try to check whether the sample is biased, but it's not convincing.


From the first paragraph of Turing's famous essay on AI:

I propose to consider the question, "Can machines think?" This should begin with definitions of the meaning of the terms "machine" and "think." The definitions might be framed so as to reflect so far as possible the normal use of the words, but this attitude is dangerous. If the meaning of the words "machine" and "think" are to be found by examining how they are commonly used it is difficult to escape the conclusion that the meaning and the answer to the question, "Can machines think?" is to be sought in a statistical survey such as a Gallup poll.


You're piling strawman on top of strawman, while suffering from confirmation bias. You think the AI risk guys are making well-hedged arguments because you already believe their thesis.

Andrew Ng, Yann LeCun, and many other people who have ACTUALLY worked in AI are the ones who scoff at it. They don't need to make arguments, because what do you even say to a young-earth creationist?

All of the arguments in Superintelligence or elsewhere amount to the claim that AI will eventually exist. The only argument that it will come by 2050 is a badly conducted survey of non-experts.

Should we worry about every sort of existential risk that could arrive at some undetermined time in the future?

The whole project is so absurd that it's hard to even begin making counterarguments, because none of the arguments make any sense.


Is this a badly conducted survey of non-experts? http://sophia.de/pdf/2014_PT-AI_polls.pdf

Edit: In particular, the TOP100 subgroup.


Yes. The other three groups were not AI researchers. And the TOP100 group had a response rate of 29%, which adds self-selection to the process: people who are interested in AI risk are more likely to respond to such a survey, which biases the result. Andrew Ng or Yann LeCun or anyone actually working in AI would have declined (and probably did decline) the invitation. Also, the TOP100 group is more likely to consist of GOFAI folks who have little idea about current data-driven, deep-learning-based AI.
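
A toy sketch of that mechanism, with entirely made-up numbers, just to show the direction of the skew when people with shorter timelines are more likely to answer:

  # Toy simulation: 100 researchers with timeline estimates spread
  # uniformly between 10 and 100 years; suppose those predicting
  # under 40 years are three times as likely to answer the survey.
  # All numbers are invented; only the mechanism matters.
  import random, statistics

  random.seed(0)
  population = [random.uniform(10, 100) for _ in range(100)]

  def respond_prob(t):
      return 0.6 if t < 40 else 0.2

  respondents = [t for t in population if random.random() < respond_prob(t)]

  print(len(respondents))               # roughly 30 responses
  print(statistics.median(population))  # "true" median, around 55 years
  print(statistics.median(respondents)) # respondent median, noticeably lower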

Even Stuart Russell, the only CS guy in the AI risk camp, doesn't actually believe that AGI is anywhere near. He works on it simply because he thinks we can solve some of the problems, like learning from demonstrations instead of from (possibly faulty) rewards. That's a core AI research topic, not an AI ethics/values/blahblah topic. Oh, and also because this gives him a differentiated research program, and thus directs any funding in this niche to him.


Seems pretty clear neither of us will convince the other in this venue, so I'm going to leave it here. Thanks for the discussion :)



