I find it baffling that ideas like "govern compute" are even taken seriously. What the hell has happened to the ideals of freedom?! Does the government own us or something?
> I find it baffling that ideas like "govern compute" are even taken seriously.
It's not entirely unreasonable if one truly believes that AI technologies are as dangerous as nuclear weapons. It's a big "if", but it appears that many people across the political spectrum are starting to truly believe it. If one accepts this assumption, then the question simply becomes "how" instead of "why". Depending on one's political position, proposed solutions range from academic ones, such as finding the ultimate mathematical model that guarantees "AI safety", to Cold War-style ones with a level of control similar to nuclear non-proliferation. Even a neo-Luddite solution such as destroying all advanced computing hardware becomes "not unthinkable" (the tech blogger gwern, a well-known figure in AI circles who's generally pro-tech and pro-AI, actually wrote an article years ago on its feasibility via terrorism because he thought it was an interesting hypothetical question).
AI is very different from nuclear weapons: a state can't really use nuclear weapons to oppress its own people, but it absolutely can with AI. So for the average person, "only the government controls AI" is much more dangerous than "only the government controls nukes".
Which is why politicians are going to enforce systematic export regulations to defend the "free world" by stopping "terrorists", and also to stop "rogue states" from using AI to oppress their citizens. /s
I don't think there's any need to be sarcastic about it. That's a very real possibility at this point. For example, look at the US going insane about how dangerous it is for China to have access to powerful GPU hardware. Why do they hate China so much anyway? Just because Trump was buddy-buddy with them for a while?
If AI actually delivers all the capabilities suggested by people who believe in the singularity, it has far more capacity for harm than nuclear weapons.
I think most people who are strongly pro-AI/pro-acceleration - or, at any rate, not anti-AI - believe either that (A) there is no control problem, (B) it will be solved, (C) AI won't become independent and agentic (i.e. it won't face evolutionary pressure towards survival), or (D) AI capabilities will hit a ceiling soon (beyond just not becoming agentic).
If you strongly believe, or take as a prior, one of those things, then it makes sense to push the gas as hard as possible.
If you hold the opposite opinions, then it makes perfect sense to push the brakes as hard as possible, which is why "govern compute" can make sense as an idea.
>If you hold the opposite opinions, then it makes perfect sense to push the brakes as hard as possible, which is why "govern compute" can make sense as an idea.
The people pushing for "govern compute" are not pushing for "limit everyone's compute"; they're pushing for "limit everyone's compute except us". Even if you believe there's going to be AGI, surely it's better to have distributed AGI than to have AGI only in the hands of the elites.
> surely it's better to have distributed AGI than to have AGI only in the hands of the elites.
The argument for doing so is the same as for nuclear non-proliferation: because of its great abuse potential, giving the technology to everyone only causes random bombings of cities instead of creating a system of checks and balances.
I do not necessarily agree with it, but I don't find the reasoning groundless.
But the reason for nuclear non-proliferation is to hold onto power. Abuse potential is a great excuse, but it applies to everyone. Current nuclear states have demonstrated that they are willing to abuse their weapons indirectly (you can't invade Russia, but Russia has no problem invading you as long as you aren't backed by nukes).
The world's superpowers enforce nuclear non-proliferation mainly because it allows them to keep unfair political and military advantages for themselves. At the same time, one cannot deny that centralized weapon ownership has made the use of such weapons more controllable: these nuclear states are powerful enough to establish a somewhat responsible chain of command to prevent unreasonable or accidental use, and so far those efforts have been successful. Also, because they are "too big to fail", they were forced to hire experts to perform detailed analyses of the consequences of nuclear war, and the resulting MAD doctrine discouraged them from starting such wars.
On the other hand, if the same nuclear technologies were available to everyone, the chance of an unreasonable or accidental nuclear war would be higher. If even resourceful superpowers can barely keep their nuclear weapons under safe political and technical control (as shown by multiple incidents and near-misses during the Cold War [0]), surely a less resourceful state or military in possession of equally destructive weapons would have even more difficulty controlling their use.
At least this is how the argument goes (so far, I personally take no position).
Of course, I fully realize that centralized control is not infallible. Months ago, in a previous thread on OpenAI's refusal to publish the technical details of GPT-4, most people believed they were using safety as an excuse to maintain monopolistic control. I argued instead that perhaps OpenAI truly takes the problem of safety seriously right now - but acting responsibly right now is no indication that they will still act responsibly in the future. There's no guarantee that safety considerations won't eventually be overridden in favor of financial gains.
> surely it's better to have distributed AGI than to have AGI only in the hands of the elites
This is not a given. If your threat model includes "Runaway competition that leads to profit-seekers ignoring safety in a winner-takes-all contest", then the more companies are allowed to play with AI, the worse. Non-monopolies are especially bad.
If your threat model doesn't include that, then the same conclusions sound abhorrent and seem nearly guaranteed to lead to awful consequences.
Neither side is necessarily wrong, and chances are good that the people behind the first threat model would agree that it'll lead to awful consequences, just not as bad as the alternative.
No, they really do push for "limit everyone's compute". The people pushing for "limit everyone's compute except us" are allies of convenience who are inevitably going to be backstabbed.
At any rate, if you have like two corps with lots of compute, and something goes wrong, you only have to EMP two datacenters.
The concerning AGI properties include recursive self-improvement to superhuman capability levels and the ability to mass-deploy copies. Those are not on the horizon when it comes to humans.
If, hypothetically, some human acquired such properties, that would be equally concerning.
The government sure thinks they own us, because they claim the right to charge us taxes on our private enterprises, draft us to fight in wars that they start, and put us in jail for walking on the wrong part of the street.
Taxes, conscription and even pedestrian traffic rules make sense at least to some degree. Restricting "AI" because of what some uninformed politician imagines it to be is in a whole different league.
IMO it makes no sense to arrest someone and send them to jail for walking in the street instead of on the sidewalk. Give them a ticket, make them pay a fine, sure, but force them to live in a cage with no access to communications, entertainment, or livelihood? Insane.
Taxes may be necessary, though I can't help but feel that there must be a better way that we have not been smart enough to find yet. Conscription... is a fact of war, where many evil things must be done in the name of survival.
Regardless of our views on the ethical validity or societal value of these laws, I think their very existence shows that the government believes it "owns" us in the sense that it can unilaterally deprive us of life, liberty, and property without our consent. I don't see how this is really different in kind from depriving us of the right to make and own certain kinds of hardware. They regulated cryptography products as munitions (at least for export) back in the 90s. Perhaps they will do the same for AI products in the future. "Common sense" computer control.
I feel a bit like everyone is missing the point here. Regardless of whether law A or law B is ethical and reasonable, the very existence of laws and the state monopoly on violence suggests a privileged position of power. I am attempting to engage with the word "own" from the parent post. I believe the government does in fact believe it "owns" the people in a non-trivial way.