I never said I am certain; I'd assign both cases about the same probability. I've already stated why I think a malevolent simple AI scenario, such as a paperclip maximizer, is plausible, but you haven't stated why you find it implausible. That is why I asked.

Also, what does the intermediate value theorem have to do with it?


You asked why I was certain, so I asked why you were certain. The onus is on the people putting out apocalyptic theories to justify the apocalyptic scenario; extraordinary claims require extraordinary evidence, a central tenet of the scientific method.

We are going to have a normally intelligent AI before a superintelligent AI.



