
Legislation? International treaties?

An AI-risk maximalist believes AI is a near-term existential threat, carrying the prospect of total human extinction. In that scenario, the final backstop against a rogue country pursuing AI research is the use of nuclear weapons.

This... obviously... would be very bad. If it escalated to a full nuclear war, it would kill billions of people. But it would leave survivors, who would be neither interested in nor able to pursue AI for decades or centuries. Better than the alternative.


