
> That's the feasibility argument. The risk argument is that an independent, runaway intelligent entity significantly more capable than humans would have such devastating consequences for humanity's future that even a small risk merits a significant effort to map out the territory.

While that argument would be sufficient to justify greatly increased attention to AI safety, I don't think it's the one that's being made here. A good overview of the argument by the Machine Intelligence Research Institute is at http://intelligence.org/files/IE-EI.pdf. They don't think the probability is small.



