> but if you think they're general intelligence in the same way humans have general intelligence (even disregarding agency, learning, etc.), that's a you problem.

How is it a me problem? The view that these models are intelligent is shared by a large number of researchers and engineers in the field. That much is evident when you ask o1 a completely novel question about a hypothetical scenario and it grasps the implication you're making with it very well.

I feel that simultaneously praising their abilities while claiming they still aren't intelligent "in the way humans are" is just obscure semantic judo meant to stake an unfalsifiable claim. There will always be some difference between large neural networks and human brains, but the significance of that difference is a subjective judgment depending on what you're focusing on. I think it's much more important to focus on "useful, hard things that are unique to intelligent systems and their ability to understand the world" than on "possesses the special kind of intelligence that only humans have".



> I think it's much more important to focus on "useful, hard things that are unique to intelligent systems and their ability to understand the world" than on "possesses the special kind of intelligence that only humans have".

This is a common strawman in these conversations: you reframe my comments as if I'm claiming human intelligence runs on some kind of unfalsifiable magic that a machine could never replicate. Of course, I've suggested no such thing, nor have I suggested that AI systems aren't useful.



