this is basically the same logic behind nuclear weapons, and AI could potentially be even more dangerous if it kept advancing at the rate we've seen in the last few years. In theory the massive amount of compute needed to train and run these at scale could be tracked/regulated similarly to how nuclear refinement facilities are
your suggestion is that stopping nuclear proliferation shouldn't have even been attempted, despite the fact it actually worked pretty well
> In theory the massive amount of compute needed to train and run these at scale could be tracked/regulated similarly to how nuclear refinement facilities are
It seems likely there exists a fully distributed training algorithm, and a lot of people are thinking about it. I suspect a coordinated training network, perhaps with a reward system for contributors, could be created. There are lots of GPUs out there; we just need to figure out how to coordinate them and shard the training data across them.
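To make the coordination idea concrete, here's a toy sketch of the core primitive such a network would need: data-parallel training, where each node computes a gradient on its own shard and the results are averaged. Everything here is illustrative (a real volunteer network would also need fault tolerance, verification of untrusted workers, and much more bandwidth than this implies):

```python
# Toy sketch: shard training data across nodes, have each compute a
# local gradient, then average. All names are illustrative.

def make_shards(data, num_workers):
    """Deterministically split the data so every worker knows its slice."""
    return [data[i::num_workers] for i in range(num_workers)]

def local_gradient(shard, w):
    """Each worker fits y = w*x on its shard: gradient of squared error."""
    return sum(2 * (w * x - y) * x for x, y in shard) / len(shard)

def training_step(data, w, num_workers, lr=0.01):
    shards = make_shards(data, num_workers)
    grads = [local_gradient(s, w) for s in shards]  # in parallel, in reality
    avg = sum(grads) / num_workers                  # the coordination step
    return w - lr * avg

# Tiny demo: learn y = 3x from 8 examples split over 4 workers.
data = [(x, 3 * x) for x in range(1, 9)]
w = 0.0
for _ in range(200):
    w = training_step(data, w, num_workers=4)
print(round(w, 2))  # converges to 3.0
```

The averaging step is where the real difficulty lives: in a network of consumer GPUs it has to happen over the public internet, which is why gradient compression and infrequent synchronization are active research areas.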
But that would only buy us maybe 10 years. Eventually that massive amount of compute won't seem very massive anymore compared to what will be available in consumer devices.