It’s quite clear that OpenAI has a significant lead over everyone else. Outside the West, the only country that even has a chance of developing something better than GPT-4 soon is China. China also has a fairly cautious culture, so it’s quite possible that a bilateral moratorium could be negotiated with them.
ADDED: Even without considering X-risks, China’s rulers cannot be pleased with the job displacement risks that GPT-4 plus Plugins may cause, not to mention a more powerful model.
China has trained a huge number of college graduates, and even now there are significant unemployment and underemployment problems among them.
ADDED 2: If you think many companies can do it, please identify a single company outside the US/UK/China with the capability to train an equivalent of GPT-3.5 from scratch.
Training an LLM with GPT-4-like capabilities is very hard. Most AI researchers are concentrated in a few countries; at the moment, the vast majority of the expertise is in the US, the UK, and China.
It's not remotely intellectually challenging to replicate GPT-4. It just takes a lot of GPUs, something plenty of people all around the world have access to.
GPT-2 and GPT-3 are the same algorithm, based on the same open-source library. GPT-4 most likely is as well. You can literally fork the repo, and if you have enough VRAM, CUDA cores, and time, you will get GPT-4. High schoolers could do it. Amateurs are already replicating LLaMA, which is more complex than GPT and not even a month old (it's just smaller, so fewer GPUs are required).
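To make the "fork the repo" point concrete, here is a minimal sketch using the open-source Hugging Face transformers library. The hyperparameters are GPT-2-small-sized and purely illustrative; GPT-4's actual architecture and scale have not been published.

```python
# Minimal sketch: a GPT-style (decoder-only transformer) model built
# from the open-source Hugging Face `transformers` library.
# Sizes are GPT-2-small-like and illustrative only; GPT-4's actual
# architecture and scale are not public.
from transformers import GPT2Config, GPT2LMHeadModel

config = GPT2Config(
    vocab_size=50257,  # GPT-2's BPE vocabulary
    n_positions=1024,  # context window
    n_embd=768,        # hidden size
    n_layer=12,        # transformer blocks
    n_head=12,         # attention heads per block
)
model = GPT2LMHeadModel(config)  # randomly initialized, untrained

# ~124M parameters: the recipe is public; what separates this from a
# frontier model is data, compute, and training engineering at scale.
print(f"{sum(p.numel() for p in model.parameters()):,} parameters")
```

Whether scaling that public recipe up by several orders of magnitude is something "high schoolers could do" is exactly what the replies dispute.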
The Attention paper came out five years ago, and since then Google hasn't come up with any groundbreaking idea. Where is their research? All they seem to have are technical descriptions with scarce details, deceptive tactics, parameter fiddling, and an abundance of pointless ethical debates. Can we even call this "research"?
Including DeepMind, Google has published Gato, Chinchilla, PaLM, Imagen, and PaLM-E, among others. These may not be as fundamental as transformers, but they are important nonetheless.
Can you list one or two research organizations, in any field, with more important output over the past five years? Bonus points if they are outside the US/UK/the West, per the context above.
It is not remotely intellectually challenging to go to the moon. It just takes rocket fuel. Newton solved motion hundreds of years ago, and now high schoolers compute it in physics class.
Engineering such a system is a harder challenge than many types of research. Even the mighty Google, the leader in AI research by many metrics, is catching up.
Another example: Meta only finished OPT-175B, a near-equivalent of GPT-3, two years after GPT-3 came out.
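To put a number on "harder challenge": a rough back-of-envelope using the common approximation that training compute is about 6 × parameters × tokens (Kaplan et al., 2020), applied to GPT-3's published figures. The GPU throughput and utilization values below are assumptions for illustration, not measurements.

```python
# Back-of-envelope training cost via C ≈ 6 * N * D (Kaplan et al., 2020).
# N and D come from the GPT-3 paper; the hardware numbers below are
# illustrative assumptions, not measured values.
N = 175e9      # GPT-3 parameters
D = 300e9      # training tokens
C = 6 * N * D  # ≈ 3.15e23 FLOPs

peak_flops = 312e12  # A100 peak BF16 FLOP/s (NVIDIA spec sheet)
utilization = 0.40   # assumed fraction of peak achieved in practice
gpu_days = C / (peak_flops * utilization) / 86400

print(f"total compute: {C:.2e} FLOPs")
print(f"~{gpu_days:,.0f} A100-days, i.e. roughly 1,000 GPUs for a month")
```

And that is only the arithmetic; it ignores the data pipeline, networking, and fault tolerance needed to keep a thousand GPUs busy for a month.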
——
Added to reply:
GPT-4 got much better results on many benchmarks than PaLM, Google's largest published model [1]. PaLM itself is probably quite a bit better than LaMDA on several tasks, according to a chart and a couple of tables here: https://arxiv.org/abs/2204.02311
It's unclear whether Google currently has an internal LLM as good as GPT-4. If they do, they are keeping quiet about it, which seems quite unlikely given the repercussions.
Google was not catching up before GPT-4. That's my point lol. All the SOTA LLMs belonged to Google, via DeepMind and Google Brain/AI, right up to the release of GPT-4: Chinchilla, Flamingo, Flan-PaLM.
GPT-4 was finished in the summer of 2022. Several insiders gave interviews saying they had been using it and building guardrails for it for roughly six months before release.
OpenAI doesn’t publish as much as Google, so we don’t really know how long, or during which periods, they were ahead.
And there’s no organization outside the US/UK/China with the same caliber of AI engineering output as Google.
>It’s quite clear that OpenAI has a significant lead over everyone else
If their lead were significant, they wouldn't have admitted to withholding more info about GPT-4 in their paper for commercial reasons. Whatever secret sauce they have apparently isn't that significant, or they wouldn't be afraid to talk about it.