
> Agents can operate in narrow domains too though, so to fit the G part of AGI the agent needs to be non-domain specific.

"Can", but not "must". The difference between an LLM being harnessed to be a customer service agent, or a code review agent, or a garden planning agent, can be as little as the prompt.

And in any case, the point was that the concept of "completely autonomous agentic intelligence capable of operating on long-term planning horizons" is better described by "agentic AI" than by "AGI".

> It's kind of a simple enough concept... it's really just something that functions on par with how we do.

"On par with us" is binary thinking — humans aren't at the same level as each other.

The problem we have with LLMs is the "I"*, not the "G". The problem we have with AlphaGo and AlphaFold is the "G", not the ultimate performance, which is superhuman (an interesting situation given that AlphaFold is a mix of Transformer and Diffusion models).

For many domains, getting a degree (or passing some equivalent professional exam) is just the first step, and we have a long way to go from there to being trusted to act competently, let alone independently. Someone who started a 3-year degree just before ChatGPT was released will now be sitting their final exams, and quite a lot of LLMs operate as if they have just about scraped through degrees in almost everything, which makes them wildly superhuman with the G.

The G-ness of an LLM only looks bad when compared to all of humanity collectively; LLMs are wildly more general in their capabilities than any single one of us — there are very few humans who can even name as many languages as ChatGPT speaks, let alone speak them.

* they need too many examples; only some of that can be made up for by the speed advantage that lets machines read approximately everything
