Hacker News

I don't think they're just using the money to prevent an apocalypse. As Andrew Ng says, worrying about this is akin to worrying about overpopulation on Mars.

OpenAI will probably use the money to hire world-class researchers and give them the tools necessary to advance AI in a meaningful way that benefits humanity.

The other big labs at Google, Facebook, and MS seem to be driven by gathering as much data as possible from users, learning from it, and presenting clickbait ads to make more money.




I agree with what I take to be the main point of your argument, which is that OpenAI will use the money for a variety of different things, many of which aren't averting the apocalypse, and that those things are good. I think it's important to note, though, that Andrew Ng's perspective is far from a consensus one. For one thing, overpopulation on Mars is something that can easily be dealt with when it happens, whereas an intelligence explosion is much more of a flipped switch. For another, though there is a lot of uncertainty in the expert forecasts, a non-negligible number of experts think there's a reasonable chance of human-level AI within the next 10-20 years, and that's absolutely something we should be worrying about right now. In a 2014 survey of the top 100 most-cited living AI scientists[1], the median estimate of the year by which there was a 10% chance we'd have human-level AI was 2024. Multiply that by whatever chance you think human-level AI leads to an intelligence explosion and potential apocalypse, and then multiply it by the impact of that event… I think for most reasonable inputs you get a pretty huge EV as the result.

[1]: http://sophia.de/pdf/2014_PT-AI_polls.pdf
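To make the back-of-envelope math concrete, here's a minimal sketch of the expected-value calculation the comment describes. Only the 10% survey figure comes from the cited poll; the conditional probability and the impact magnitude are purely illustrative assumptions, not claims.

```python
# Expected-value sketch for the argument above.
# Only p_human_level_ai_by_2024 comes from the 2014 survey; the other
# two numbers are made-up placeholders for illustration.

p_human_level_ai_by_2024 = 0.10  # survey median: 10% chance by 2024
p_explosion_given_ai = 0.10      # assumed: P(intelligence explosion | human-level AI)
impact = 1e9                     # assumed: magnitude of the outcome, arbitrary units

ev = p_human_level_ai_by_2024 * p_explosion_given_ai * impact
print(ev)  # ~1e7: small probabilities times a huge impact still give a large EV
```

The point of the sketch is just that the product stays large for a wide range of plausible inputs, which is what "pretty huge EV" is gesturing at.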


That poll is interesting, but IMHO it shows the implicit meaninglessness of predictions over 10-20-year horizons.

Do you feel we've done 30% of the work towards human-level AI between 2014 and 2017, given a target of 2024?


We went from not even beating good amateurs at Go to beating the best pros in the world. I don't think it's like a progress bar we should expect to fill at a linear pace, but that certainly seems like a strong indicator we've come a long way (and it's one of many examples I could have chosen).


It certainly isn't a linear bar, but the improvements in AI over the last three years don't look at all like getting closer to human-level intelligence.

We've just become better at some tasks that we've known how to automate for a long time, albeit badly. We don't have a clue how human intelligence is supposed to work.


Musk replied to Ng's comment: http://lukemuehlhauser.com/musk-and-gates-on-superintelligen...

Or as someone else quipped, the moment a human steps foot on Mars, Mars has become overpopulated.



