GPT-4 is still not dangerous. Given the rapid pace of progress, though, GPT-5 and its successors, which may be developed within a few short years, could very well be, especially in the hands of a smart sociopath. (History shows there are many people who could cause real-world harm. Imagine them armed with a 24/7 agent with expertise in dozens of fields.)
See these predictions about AI in 2025 from an OpenAI insider and former DeepMind research engineer:
“I predict that by the end of 2025 neural nets will:
- have human-level situational awareness (understand that they're NNs, how their actions interface with the world, etc)
- beat any human at writing down effective multi-step real-world plans
- do better than most peer reviewers
- autonomously design, code and distribute whole apps (but not the most complex ones)
- beat any human on any computer task a typical white-collar worker can do in 10 minutes
- write award-winning short stories and publishable 50k-word books
- generate coherent 20-min films”
Source: https://twitter.com/RichardMCNgo/status/1640568775018975232
> autonomously design, code and distribute whole apps (but not the most complex ones)
This is a bold claim. Today, LLMs have not been demonstrated to be capable of synthesizing novel code. There was a post just a few days ago about the performance gap between problems that had leaked into the training data and novel problems that had not.
So if we project forward from the current state of the art, it would be more accurate to say "autonomously (re-)design, (re-)code and distribute whole apps". There are two important variables here:
* The size of the context needed to enable that task.
* The ability to synthesize solutions to unseen problems.
While it is possible that "most complex" is carrying a lot of load in that quote, it is worth being clear about what it means.
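To make that contamination gap concrete, here is a minimal sketch in Python of the kind of check such posts describe: score the same model on a benchmark problem verbatim and on a surface-level mutation of it (identifiers renamed). `ask_model` and the problem format are hypothetical stand-ins, not any real benchmark's API; a large gap between the two scores suggests memorization rather than synthesis.

```python
import re

def mutate_problem(statement: str, identifiers: list[str]) -> str:
    """Surface-level mutation: rename identifiers, leave the logic intact.

    A model that truly generalizes should score the same on the original
    and the mutated statement; a memorizing model should not.
    """
    renames = {name: f"var_{i}" for i, name in enumerate(identifiers)}
    for old, new in renames.items():
        statement = re.sub(rf"\b{re.escape(old)}\b", new, statement)
    return statement

def contamination_gap(problems, ask_model, is_correct):
    """Return accuracy(verbatim) - accuracy(mutated) over a problem list.

    ask_model(prompt) -> str is a hypothetical LLM client; is_correct
    scores an answer. The expected answers here contain no identifiers,
    so the same checker works for both conditions.
    """
    verbatim = sum(is_correct(p, ask_model(p["statement"])) for p in problems)
    mutated = sum(
        is_correct(p, ask_model(mutate_problem(p["statement"], p["identifiers"])))
        for p in problems
    )
    return (verbatim - mutated) / len(problems)

if __name__ == "__main__":
    # Toy demo: a fake "model" that only knows the verbatim phrasing.
    memorized = {"What does foo(3) return if foo(x) = x + 1?": "4"}
    fake_model = lambda prompt: memorized.get(prompt, "no idea")
    problems = [{"statement": "What does foo(3) return if foo(x) = x + 1?",
                 "identifiers": ["foo"]}]
    print(contamination_gap(problems, fake_model, lambda p, a: a == "4"))  # 1.0
```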
> Today LLMs have not been demonstrated to be capable of synthesizing novel code.
They are capable of doing that (to some extent). Personally, I've generated plenty of (working) code to solve novel problems, and I'm 100% sure that code wasn't part of the training set.
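"Novel" here usually means a novel combination of well-known pieces rather than a genuinely alien problem. As an illustration (my own contrived spec, not actual model output): "run-length encode a string, but write each run length in balanced ternary" is vanishingly unlikely to appear verbatim in any training set, yet it decomposes into two textbook subproblems. A working reference solution, written by hand here for illustration, fits in a few lines:

```python
def balanced_ternary(n: int) -> str:
    """Encode a positive integer in balanced ternary (digits T=-1, 0, 1)."""
    digits = []
    while n:
        if n % 3 == 2:        # digit -1 ("T"), with a carry of 1
            digits.append("T")
            n += 1
        else:
            digits.append(str(n % 3))
        n //= 3
    return "".join(reversed(digits))

def rle_balanced_ternary(s: str) -> str:
    """Run-length encode s, writing each run length in balanced ternary."""
    out, i = [], 0
    while i < len(s):
        j = i
        while j < len(s) and s[j] == s[i]:
            j += 1                      # extend the current run
        out.append(s[i] + balanced_ternary(j - i))
        i = j
    return "".join(out)

print(rle_balanced_ternary("aaab"))  # "a10b1": 3 -> "10", 1 -> "1"
```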
I’ll second that. A simple example is asking it to write pyplot or tikz code to draw maps and pictures. I got it to draw a correct floor plan of the White House entirely with Python code. It amazes me that it understands spatial layouts from training only on text such that it can draw physically accurate diagrams, and it understands graphics libraries well enough to draw with them. Apparently predicting text about spatial locations requires an internal spatial map. Thinking about the chain of concepts that have to be integrated to accomplish this shows it’s not a simple task.
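For anyone who wants to reproduce this kind of experiment, the pyplot code such a prompt elicits looks roughly like the sketch below. The building, room names, and coordinates are invented here for illustration; this is not the model's actual White House output.

```python
import matplotlib.pyplot as plt
from matplotlib.patches import Rectangle

# Invented floor plan: (room name, x, y, width, height).
ROOMS = [
    ("Lobby",   0, 0, 6, 4),
    ("Office",  6, 0, 4, 4),
    ("Hall",    0, 4, 10, 2),
    ("Library", 0, 6, 5, 4),
    ("Kitchen", 5, 6, 5, 4),
]

fig, ax = plt.subplots(figsize=(6, 6))
for name, x, y, w, h in ROOMS:
    ax.add_patch(Rectangle((x, y), w, h, fill=False, linewidth=1.5))
    ax.text(x + w / 2, y + h / 2, name, ha="center", va="center")
ax.set_xlim(-1, 11)
ax.set_ylim(-1, 11)
ax.set_aspect("equal")   # keep the geometry physically accurate
ax.axis("off")
plt.savefig("floorplan.png", dpi=150)
```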
> It amazes me that it understands spatial layouts from training only on text such that it can draw physically accurate diagrams, and it understands graphics libraries well enough to draw with them.
Is there evidence of this? The White House floor plan is very well known and available online in many different formats and representations. Transforming one of those into a sequence of calls would be easier.
Have you tried this with a textual description of a building that does not have any floor plans available, i.e. something unique?
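For what it's worth, the retrieval-plus-transformation explanation is cheap to demonstrate: once a floor plan exists in any structured form, emitting the drawing calls is pure string templating, with no spatial reasoning involved. A sketch (the record format and coordinates are invented; only the room names are real):

```python
# A floor plan as it might appear in some scraped structured source
# (format and numbers invented for illustration).
plan = [
    {"room": "East Room",  "bbox": (70, 10, 28, 20)},
    {"room": "Green Room", "bbox": (56, 10, 12, 14)},
    {"room": "Blue Room",  "bbox": (42, 10, 12, 14)},
]

# Turning records into pyplot calls is mechanical text generation.
for r in plan:
    x, y, w, h = r["bbox"]
    name = r["room"]
    print(f"ax.add_patch(Rectangle(({x}, {y}), {w}, {h}, fill=False))")
    print(f"ax.text({x + w/2}, {y + h/2}, {name!r}, ha='center', va='center')")
```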
"GPT-4 is still not dangerous" is a bold claim already tbh. It can be easily jailbroken still, can be used to train a local model which can spread and learn, and can be told to take on a persona which can have its own goals and aspirations - some of which can be counter to containment. Already we're walking that tightrope.
Yes. And graphene will change technology, cryptocurrencies will replace fiat money, autonomous cars will be everywhere, and we will use VR for everything. We've been through this several times.
What could an OpenAI insider have said? That ChatGPT is a glorified search engine with a categorization algorithm that copies stuff from several websites and puts it together (without providing the sources for its revolutionary results, which makes it even less useful than Wikipedia).
Interpolating and forming internal abstractions from training data to solve problems are large parts of most knowledge work. Recent language models can do both well enough to help automate many kinds of tasks.
Check out the cases of people using GPT-4 to help automate their coding (on Twitter and elsewhere). It's not ready for harder problems, but we're probably just 1-3 key ideas away from solving those as well.
To solve harder coding problems, one needs to be able to extrapolate properly. When an AI can do that, it's basically AGI and can probably solve any cognitive problem a human is capable of. Combined with its other qualities, like massive communication bandwidth, easy self-replication, and travel at the speed of light, it will be ready to take over the world from humanity if it wants to.
Wikipedia cannot do what even current AI can; see the predictions quoted above.