An AI-risk maximalist would believe AI is a near-term existential threat, carrying the prospect of total human extinction. In that scenario, the final backstop against a rogue country pursuing AI research is the use of nuclear weapons.
This... obviously... would be very bad. If it escalated to a full nuclear war, it would kill billions of people. But it would leave survivors, who would be neither interested in nor able to pursue AI for decades or centuries. Better than the alternative.