Exactly. I’d imagine this is a major reason why Google hasn’t gone to market with this already.
ChatGPT is amazing, but it shouldn’t be available to the general public. I’d expect a startup like OpenAI to be pumping this, but Microsoft is irresponsible for putting it out in front of the general public.
I anticipate that within the next couple of years AI tech will be subject to tight regulations similar to those governing explosive munitions and SOTA radar systems today, and eventually even to anti-proliferation policies like those for uranium procurement and portable fission/fusion research.
ChatGPT/GPT-3.5 and its weights can fit on a small thumb drive, be copied infinitely, and shared. Tech will improve enough in the next decade to make this accessible to normies. The genie cannot be put back in the bottle.
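A rough sanity check on the thumb-drive claim. The exact parameter count of GPT-3.5 is not published; the figures below assume the publicly reported ~175B parameters of GPT-3 and simply multiply by bytes per parameter at different precisions:

```python
# Back-of-envelope: how big are the weights of a ~175B-parameter model?
# NOTE: 175e9 is the published GPT-3 figure; GPT-3.5's actual size is an assumption here.
params = 175e9
bytes_per_param = {"fp32": 4, "fp16": 2, "int8": 1, "int4": 0.5}

for fmt, nbytes in bytes_per_param.items():
    size_gb = params * nbytes / 1e9
    print(f"{fmt}: ~{size_gb:.0f} GB")
```

At fp16 that is roughly 350 GB, so "small thumb drive" is a stretch, but a 1 TB flash drive holds it easily, and aggressive 4-bit quantization would bring it under 100 GB.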
> ChatGPT/GPT-3.5 and its weights can fit on a small thumb drive, be copied infinitely, and shared.
So can military and nuclear secrets. Anyone with uranium can build a crude gun-type nuke, but the instructions for making a reliable 3 megaton warhead the size of a motorcycle have been successfully kept under wraps for decades. We also make it very hard to obtain uranium in the first place.
> Tech will improve enough in the next decade to make this accessible to normies.
Not if future AI research is controlled the way nuclear weapons research is. You want to write AI code? You'll need a TS/SCI clearance just to begin; the mere act of writing AI software without a license would be a federal felony. Need HPC hardware? You'll need to be part of a project authorized to use the tensor facilities at Langley.
Nvidia's A100 and better accelerators are already export restricted under the dual-use provisions of munitions controls, as of late 2022.
It's also a First Amendment issue, and the tech is already out there. I'm old enough to remember when PGP bypassed export controls by being printed on paper, exported as books, and scanned/typed back in, though.
They can of course restrict publishing of new research, but that won't be enough to stop significant advances just from the ability of private entities worldwide to train larger models and do research on their own.
Sure it can. Missile guidance systems fit on a tiny missile, but you can’t just get one.
The controlled parlor game is there to seed acceptance. Once someone is able to train a similar model with something like the leaked State Department cables or classified information we’ll see the risk and the legislation will follow.
They can try. You will note that nobody except government employees and the guy running the website ever got in trouble for reading the cables or classified information. We have the Pentagon Papers precedent establishing this as a freedom-of-speech issue.
True. In the long run, though, I expect we will either build something dramatically better than these models or lose interest in them. Throw in hardware advances coupled with bit rot, and I would go short on any of the GPT-3 code being available in 2123 (except in something like the Arctic Code Vault, which would be effectively the same as it being unavailable).
They released it because ChatGPT hit 100M active users almost instantly and put a big dent in Google's stock for not having a competitor. The investors don't seem to have noticed that the product isn't reliable.