Hacker News

> These models are obviously capable of acts of intelligence.

Except they aren't. They are capable of language and pattern manipulation, and really good at it. But if you concoct a novel problem that isn't found anywhere on the internet and confront them with it, they fail absurdly. Even if it's something that a kid could figure out.

The Eliza effect strikes again!




> But if you concoct a novel problem that isn't found anywhere on the internet and confront them with it, they fail absurdly.

Often they do, but sometimes they don't.

> Even if it's something that a kid could figure out.

Intelligent and smart aren't synonyms. Modern LLMs are obviously pretty stupid at times, but even a human with an IQ of 65 has some intelligence.


But when an LLM answers a logic or math question that no seven-year-old would figure out, would you flip the table and say this is evidence that the seven-year-old is not intelligent?

Otherwise, it sounds like circular reasoning, where we simply say "Of course a human being is intelligent because they are intelligent, and an LLM is not because it isn't."


And it also fails at math about 7% of the time, which is abysmal compared to a $7 calculator that fails about 0.00000000001% of the time. That failure rate discounts its intelligence far more than the times it gets the math right affirm it.


> But if you concoct a novel problem that isn't found anywhere on the internet and confront them with it, they fail absurdly.

Can you give an example?


Of course not, because then the AI would get trained on the proper answer! /s



