
It is a relatively mainstream opinion (e.g. Peter Norvig [1]) that AGI is already here, just in its early stages. He specifically compares it to the first general purpose computers. In any case, I think it's a mistake to treat AGI as a 0/1 situation: there will be an ecology of competing intelligences in the market, not a monolith.

[1] https://www.noemamag.com/artificial-general-intelligence-is-...




Well said. AGI is already here, because intelligence & generality exist on a spectrum rather than as a checkbox. The question isn't "is AGI here, yes or no?" but "how much of it is here?"

LLMs have an astonishingly general set of emergent skills, which they can combine to solve novel problems (intelligence). LLMs smoke humans in many broad intellectual domains, are competitive in others, and fail abysmally in some. Folks try skirting this messy reality, squaring the circle, by conflating AGI with ASI or some techno-godhood outcome. We should evaluate AGI the same way we evaluate human general intelligence, which also manifests with dizzying variety.

This is what the "stochastic parrot" (i.e. 'No True Scotsman') critics miss. Their handwavy claims that LLMs don't "really" understand the world, "only" do simple token prediction, have no autonomy/drive, etc. miss the point: it doesn't matter. The thing communicating with you (on HN or elsewhere!) may display broad intelligence. I think that's why Turing's test requires a text interface -- people can't help judging agents by silly anthropomorphic heuristics (appearance, similarity to themselves, spatial behavior, etc.), so you literally have to wall off those biases.

AGI is here. It just looks inhuman and our GI is "better", for now.


> We should evaluate AGI the same way we evaluate human general intelligence, which also manifests with dizzying variety.

This is key. No single human understands everything, and the way we assess human conceptual understanding is through tests. The implication is that we need a lot more tests for AI.

I’d like to see a big investment in “Machine Psychology.”



