
Why do you say they are not capable of reasoning?



What a weird question; it really should be reversed, shouldn't it?

But here goes. It's a language model. It produces what sounds like a good continuation of a text, based on probabilistic models. While it sounds like human-generated content, "it" doesn't actually "think". It doesn't have a culture. It doesn't have thoughts. "It" is a model that generates text mimicking what the humans whose text it was trained on would have answered. We humans have a tendency to attribute that output to a sentient thing producing it, but it is not sentient. It is a tree of probabilities with a bit of randomization on top.
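
To make that concrete, here is a minimal sketch of what sampling from a "tree of probabilities with randomization on top" looks like. The vocabulary and probabilities are made up for illustration; a real model derives them from learned weights over a vocabulary of tens of thousands of tokens:

  import random

  # Hypothetical next-token distribution after some prompt; in a real
  # model these probabilities come from a softmax over learned logits.
  next_token_probs = {"mat": 0.55, "floor": 0.20, "couch": 0.15, "roof": 0.10}

  def sample_next_token(probs, temperature=1.0):
      # Temperature reshapes the distribution: values < 1 concentrate
      # mass on the likeliest token, values > 1 flatten it toward uniform.
      weights = [p ** (1.0 / temperature) for p in probs.values()]
      return random.choices(list(probs.keys()), weights=weights)[0]

  print(sample_next_token(next_token_probs, temperature=0.7))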

Ergo, it cannot reason.


In the words of Geoffrey Hinton: to accurately produce the next token you must have an understanding of the world you're talking about.

Sentience? Consciousness? Who knows. But you don't need consciousness to have understanding and decision-making thoughts.


> to accurately produce the next token you must have an understanding of the world you're talking about.

I don't think this is true. It seems to me that you could do this through sheer statistics, and have no understanding of the world you're talking about at all.


> It seems to me that you could do this through sheer statistics, and have no understanding of the world you're talking about at all.

I'm not sure that there is a difference. If there is, what would be an example of true understanding vs just statistics? All of intelligence is ultimately recognizing patterns and layers of patterns of patterns.


Blinded by the implementation, we forget that maybe it's the software (the ideas) on top that matters most. The real magic is in the language, not in the brain or the transformer. Both the brain and the transformer learn language from external sources. There are lots of patterns in language, patterns that both humans and AIs use to act intelligently. These patterns act like self-replicators (memes) under an evolutionary process. That's where the language smarts come from: language-level evolution. Humans are just small cogs in this language oversystem.


A good paper I know of written to directly refute this comment’s line of reasoning, by a British chap named Alan: https://academic.oup.com/mind/article/LIX/236/433/986238


A good article I know of written to directly support this comment's line of reasoning, by an American chap named Noam: https://www.nytimes.com/2023/03/08/opinion/noam-chomsky-chat...


I would say that you are jumping to a lot of conclusions here. Let's dig deeper.

"It doesn't have a culture. It doesn't have thoughts"

These are conclusions. What is your reasoning?

To what degree would you say that human decision making can be explained by this statement:

"It is a tree of probabilities with a bit of a randomization on top of it."


Once again, it seems to me that the burden of proof for claiming that a piece of software is sentient should not rest on the side dismissing that claim.

What is your reasoning to show that it has culture and can reason, that its abilities go beyond mimicking human discourse?



