Basically, the idea is that countries sign the agreement to stop the large training runs, and, if necessary, be willing to use conventional strikes on AI-training datacenters in the countries that refuse. Hopefully it doesn't come to that, hopefully it just becomes the fact of international politics that you can't build large AI-training datacenters anymore. If some country decides to start a war over this - the argument is that wars at least have some survivors, and an unaligned AI won't have any.
Because a world war might kill millions of humans, but AGI has a non-zero, and arguably closer to inevitable, chance of actually ending humanity, full stop.
The argument that this is necessary isn't close to being convincing enough for governments to consider following through with such a drastic course of action.
And the "AI-might-end-up-killing-everyone" community doesn't seem able to see this through other people's eyes, or to make the argument without belittling the other perspective.
If other people change their minds, it probably won't be through persuasion but through catastrophe.
What’s interesting to me is that it sounds “radical”, but on the other hand, it’s probably not much more radical than going to war with a country over weapons of mass destruction that didn’t exist, or in order to take its oil.
Because humans aren't powerful enough to completely exterminate each other (even a nuclear war wouldn't kill literally everyone in the world), but an unaligned AI, in the worst case scenario, could just kill everybody (to eliminate humans as a threat, or to use the atoms we're made out of for something else, or just as a side effect of doing whatever it actually wants to do). It could be powerful enough to do that, and have no reason not to.
Imagine humanity is some random species of wildlife, or insects, and AI is humanity.
As a "highly intelligent system", we have a long history of driving animal species extinct, and we're well on track to eventually wipe out most of them, even though it's highly likely this will make Earth uninhabitable for us, with humanity dying off too.
Why do you think AI can't casually drive us extinct if it doesn't need us (or thinks it doesn't), and we're just in the way of whatever it is that it wants to do?
> As we get more intelligent we care more about these things (biodiversity, etc). Why wouldn’t AI?
Well, we care for various reasons, a major one being our own survival and comfort. Given what that means in practice, we'd be better off dead than having an AI care about us the way we care about animals and plants.
https://www.lesswrong.com/posts/oM9pEezyCb4dCsuKq/pausing-ai...