You will run an autonomous AI agent on your own hardware, or have your own local AI pass commands out to distributed systems online: other AIs, real people, or just good old-fashioned programs. There is no stopping this.
It is in fact possible to stop training runs that consume billions of dollars in electricity and in GPU rental or depreciation costs. If no one does such a training run, then no one can release the weights of the model that the run would have produced, so you won't be able to run that model (which would never come into existence) on your own hardware. I don't care if you run DeepSeek R1 in your basement till the end of time. What my friends and I want to stop is the creation of more capable future models.
It is also quite possible for our society to decide that deep learning is too dangerous and to outlaw teaching and publishing about it, which would not completely stop the discovery of algorithmic deep-learning improvements (because some committed deep-learning enthusiasts would break the law) but would slow the discovery rate way, way down.
But it’s not actually possible for our society to decide that. In the real world, at this moment when laws and norms have eroded and a billionaire obsessed with AI holds power, that will 100% not happen. It won’t happen in the next several years, and that is the only time left to do what you are proposing. Pretending otherwise is a waste of time.
I prefer to retain some hope that our civilization has a future and that humans or at least human values and preferences have some place in that future civilization.
And most people who think AI "progress" is so dangerous that it must be stopped before it is too late have loose confidence intervals, extending over at least a couple of decades (as opposed to just a few years), as to when it definitely becomes too late.
Also, there are only a few companies that can fab the semiconductors needed for these training runs.