Not the author, but I work at OpenAI. There's a wide variety of viewpoints, and it's fine for employees to disagree on timelines and impact. I myself published a 100-page paper on why I think transformative AGI by 2043 is quite unlikely (https://arxiv.org/abs/2306.02519). From informal discussion, I think the vast majority of employees don't think that we're mere years from a post-scarcity utopia where we can drink mai tais on the beach all day. But there is a lot of optimism about the rapid progress in AI, and I do think it's harder to forecast the path of a technology that has the potential to improve itself. So much depends on your definition of AGI. In a sense, GPT-4 is already AGI in the literal sense that it's an artificial intelligence with some generality. But in the sense of automating the economy, it's of course not close.
The hype around this tech strongly promotes the narrative that we're close to exponential growth and that AGI is right around the corner; that pretty soon AI will be curing diseases, eradicating poverty, and powering humanoid robots. These scenarios are featured in the AI 2027 predictions.
I'm very skeptical of this based on my own experience with these tools and my rudimentary understanding of how they work. I'm frankly even opposed to labeling them as intelligent in the same sense that we think about human intelligence. There are certainly many potentially useful applications of this technology that are worth exploring, but the current ones are awfully underwhelming, and the hype to make them seem like more than they are is exhausting. Not to mention that their biggest risk, further degrading public discourse and overwhelming all our communication channels with even more spam and disinformation, is largely being ignored. AI companies love to talk about alignment and safety, yet these more immediate threats are never addressed.
Anyway, it's good to know that there are disagreements about the impact and timelines even inside OpenAI. It will be interesting to see how this plays out, if nothing else.
My definition: AGI will be here when you can put it in a robot body in the real world and interact with it like you would a person. Ask it to drive your car or fold your laundry or make a mai tai, and if it doesn't know how to do that, you show it, and then it can.
Maybe I'm biased, but I actually think it's a pretty good definition, as definitions go. All of our narrow measures of human intelligence that we might be tempted to use - win at games, solve math problems, ace academic tests, dominate at programming competitions - are revealed as woefully insufficient as soon as an AI beats them but fails to generalize far beyond. But if you have an AI that can generate lots of revenue doing a wide variety of real work, then you've probably built something smart. Diverse revenue is a great metric.
I also find it interesting that the definition always includes the "outperforms humans" qualifier. Maybe our first AGIs will underperform humans.
Imagine I built a robot dog that behaved just like a biological dog. It bonds with people, can be trained, shows emotion, communicates, likes to play, likes to work, solves problems, understands social cues, and is loyal. IMHO, that would qualify as an AGI even though it isn't writing essays or producing business plans.
> IMHO, that would qualify as an AGI even though it isn't writing essays or producing business plans.
I'm not sure it would, though. The "G" in AGI stands for "General", which a dog obviously can't showcase. The comparison must be done against humans, since the goal is to ultimately have the system perform human tasks.
The definition mentioned by tedsanders seems adequate to me. Most of the terms are fuzzy ("most", "outperform"), but limiting the criteria to economic value narrows it down to a measurable metric. Of course, this could be exploited by building a system that optimizes for financial gain over everything else, but this wouldn't be acceptable.
The actual definition is not that important, IMO. AGI, if it happens, won't appear suddenly from a singular event, but as a gradual process until it becomes widely accepted that we have reached it. The impact on society and our lives would be impossible to ignore at that point. The problem with this is that along the way there will be charlatans and grifters shouting from the rooftops that they've already cracked it, but this is nothing new.
I would say... yes. But with the strong caveat that when used within the context of AGI, the individual/system should be able to showcase that intelligence, and the results should be comparable to those of a neurotypical adult human. Both a dog and a toddler can show signs of intelligence when compared to individuals of a similar nature, but not to an adult human, which is the criterion for AGI.
This is why I don't think that a system that underperforms the average neurotypical adult human in "most" cognitive tasks would constitute AGI. It could certainly be considered a step in that direction, but not strictly AGI.
But again, I don't think that a strict definition of AGI is helpful or necessary. The impact of a system with such capabilities would be impossible to deny, so a clear definition doesn't really matter.
> I don't think that a system that underperforms the average neurotypical adult human in "most" cognitive tasks would constitute AGI
What makes you say that it underperforms? I ask because evidence strongly suggests it's actually the other way around: AI models currently outperform humans in most such tasks.
> Would that be services generally provided by government?
Most services provided by governments are economically valuable, as they provide infrastructure that allows individual actors to perform better, increasing collective economic output. (Though for e.g. high-expenditure infrastructure, it could quite easily be argued that it is not economically profitable.)