
> Devise any reasoning test you like that would cleanly separate humans from LLMs. I'll wait.

Well, you don’t have to wait. Just ask basic questions about undergraduate mathematics, perhaps phrased in slightly out-of-distribution ways. The model fails spectacularly almost every time, and it quickly becomes apparent that whatever ‘understanding’ is present is very surface-level and deeply tied to the patterns of the words themselves rather than to the underlying ideas. Which is hardly surprising, and not intended as some sort of insult to the engineers; frankly, it’s a miracle we can do so much with such a relatively primitive system (an architecture originally designed just for machine translation, at that).

The standard response is something like ‘you couldn’t expect the average human to be able to do that, so it’s unfair!’, but for a machine that has digested the world’s entire information output and is held up as ‘intelligent’, this really shouldn’t be a hard task. Also, it’s not ‘fiction’: I (and many others) can answer these questions just fine, and much more robustly, albeit given some time to think. LLM output by comparison seems random and endlessly apologetic. Which, again, is not surprising!

If you mean ‘separate the average human from LLMs’, there probably are examples that will do this (although they tend to get patched quickly once found); take the by-now-classic 9.9 vs 9.11 fiasco. Even if there aren’t, though, you shouldn’t be at all surprised (or impressed) that the sum of pretty much all human knowledge plus hundreds of millions of dollars’ worth of computation can produce something that looks more intelligent than the average bozo. And it doesn’t require reasoning to do so; a (massive) lookup table would pretty much suffice.
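
For anyone who missed that fiasco: models were asked which of 9.9 and 9.11 is larger and confidently answered 9.11. One plausible (though unverified, purely my speculation) account is that the comparison pattern-matches onto version-number semantics, where fields are compared as integers, instead of decimal semantics. A minimal Python sketch of the two readings:

    # Decimal semantics: 9.9 > 9.11, because 0.9 > 0.11.
    print(9.9 > 9.11)  # True

    def version_key(s: str) -> tuple:
        # Version-style semantics: split on '.' and compare each
        # field as an integer, so the 11 in "9.11" beats the 9.
        return tuple(int(part) for part in s.split("."))

    # Under version semantics, 9.11 "wins": the classic wrong answer.
    print(version_key("9.11") > version_key("9.9"))  # True

Both readings are internally consistent; the failure is picking the wrong one from surface cues, which is rather the point.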

> There is nothing philosophical or pseudo-philosophical about saying reasoning is determined by output.

I don’t agree. ‘Reasoning’ in the everyday sense isn’t defined in terms of output; it usually refers to an orderly, sequential manner of thinking whose process can be described separately from the output it produces. Surely you can conceive of a person (or a machine) that produces what sounds like the output of a reasoning process without doing any reasoning at all. Reasoning is an internal process.

Honestly — and I don’t want to sound too rude or flippant — I think all this fuss about LLMs is going to look incredibly silly when in a decade or two we really do have reasoning systems. Then it’ll be clear how primitive and bone-headed the current systems are.


