I think that definition is useful because it is measurable. It sidesteps the endless "it's just a text prediction engine" / "I dunno, ChatGPT seems pretty smart to me!" discussions. It also sidesteps the "it did well on a test designed to measure human intelligence, so it must be smarter than humans" / "no, the test of human intelligence wasn't designed to measure machine intelligence and tells us very little" debate.
It reduces it to "Can I fire 50% of my workforce? Then it must be AGI."
Now maybe this definition isn't so useful either, because a lot of work requires a body to, say, move physical goods, which has little to do with "intelligence". But I can see the appeal of looking for a more objective measure of whether you have achieved AGI.
> It reduces it to "Can I fire 50% of my workforce? Then it must be AGI."
Well, no, that's job automation, and if it's job-specific then it's narrow AI at best (assuming the job being automated requires intelligence, and it's not just a weaving loom being invented). In other words, specifically not AGI.
It's really pretty absurd that we've now got companies like OpenAI, Meta, and Google (DeepMind) stating that their goal is to build AGI without actually defining what they mean. I guess it lets them declare success whenever they like... Seems like OpenAI ("GPT-5 and AGI will be here soon!") is gearing up to declare GPT-5, or at least GPT-N, as AGI, which is pretty sad.