> These models are obviously capable of acts of intelligence.
Except they aren't. They are capable of language and pattern manipulation, and really good at it. But confront them with a novel problem that isn't found anywhere on the internet and they fail absurdly, even at something a kid could figure out.
But when an LLM answers a logic or math question that no seven-year-old could figure out, would you flip the table and say that's evidence that the seven-year-old is not intelligent?
Otherwise, it sounds like circular reasoning, where we simply say "Of course a human being is intelligent because they are intelligent, and an LLM is not because it isn't."
And it also fails at math about 7% of the time, which is abysmal compared to a $7 calculator that fails about 0.00000000001% of the time. That's far more discounting of its intelligence than the times it gets the math right are affirming.
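For scale, here's a minimal sketch that just plugs in the two rates quoted above, taken at face value and converted to fractions; the figures are the commenter's claims, not measured values:

```python
# Illustrative comparison of the two failure rates quoted above.
# Both rates are the commenter's figures, taken at face value.
llm_failure_rate = 0.07    # ~7% of math answers wrong
calc_failure_rate = 1e-13  # 0.00000000001% expressed as a fraction

ops = 1_000_000  # a million arithmetic problems

print(f"Expected LLM errors:        {llm_failure_rate * ops:,.0f}")   # 70,000
print(f"Expected calculator errors: {calc_failure_rate * ops:.0e}")   # 1e-07
print(f"Ratio: ~{llm_failure_rate / calc_failure_rate:.0e}x")         # ~7e+11x
```

Under those assumed rates, the LLM would be roughly twelve orders of magnitude less reliable at arithmetic than the calculator.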
The ELIZA effect strikes again!