Hacker News

Good point; I'd forgotten about mutually exclusive wrong visions of the future. People get the future wrong in every way possible. It will be the same with AI.



Uh, this is like saying "Y2K didn't happen, so we wasted our time preventing it."

Like, dude, it didn't happen because a lot of people worked to prevent it. The same goes for nuclear war.

The fact that something could have tragic consequences is a reason to put great effort into ensuring it doesn't happen.


Not all hypotheticals justify great efforts in prevention.

Unlike AGI, nuclear weapons are not hypothetical.

With Y2K, you can certainly ask whether the prevention efforts were disproportionate to the risks.


Intelligence is not hypothetical; we spend hundreds of billions per year protecting against other human intelligences. Thirty years ago, how much were you spending on computer security? Yet computers are supposed to keep getting more capable, and you expect us to spend only as much as we do now.


AGI is hypothetical. If it turns out to be just like another human intelligence, we already know how to work those problems.


Nuclear war is still an overhanging threat that we should be concerned about, especially in this age of automation and AI.


Nuclear weapons and their dangers are real, unlike AGI, which poses only hypothetical risks.


When did the dangers of nuclear weapons become real?

The Chicago Pile? Trinity? Hiroshima? Nagasaki? RDS-1?


Once you have them, or can make them. If they had remained hypothetical because there was no known way to make them, the world would look quite different.



