Hacker News

>You cannot build a model that is both good for gaming and meteorology.

This was the prevailing wisdom in AI before generalized transformers as well. We're rapidly moving toward black-box hyperintelligent AGI.




There has been zero motion towards true AGI so far, hyperintelligent or otherwise.


I would argue that humanity has been advancing rather steadily towards "true AGI" since at least the Jacquard loom. Otherwise, if you wait to admit progress until you actually have evidence of having achieved AGI, the curve will just look like a Heaviside step function.
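For reference, the metaphor here is the Heaviside step function: a measure that reads "have we achieved AGI yet?" stays at zero right up until some hypothetical achievement time t*, then jumps to one, hiding all the intermediate progress:

```latex
H(t) =
\begin{cases}
0, & t < t^{*} \quad \text{(no AGI yet: all progress reads as zero)} \\
1, & t \geq t^{*} \quad \text{(AGI achieved: the measure jumps at once)}
\end{cases}
```

(t* is a hypothetical label for illustration, not a prediction.)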


How can you claim that we've been advancing steadily when we don't even know how far the destination is, or if we're moving in the correct direction? There's no basis to claim that it will look anything like a step function; if we do achieve true AGI someday the first one might be equivalent to a really stupid person and then subsequent iterations will gradually improve from there. It seems like a lot of assumptions are being made.


You're really not sure whether we're moving in the right direction? What, then, would count for you as an indication that we are?


Show me a computer that can reason in a generalized way at least as well as a chimpanzee (or whatever). And no, LLMs don't count. They are not generalized in any meaningful way.


Please do clarify why LLMs aren't generalized: other than not being embodied, they seem quite general to me. Is there any particular reasoning task you have in mind that chimpanzees are good at but LLMs are incapable of?


So true! I think a lot of people, even here on HN, get confused by the marketing term "AI".


Authentic Ignorance, I guess…


I was comparing expert systems here, not emergent ones.




