Brains do not reason: they are neural networks whose outputs are determined by their experiences, memories, and sensory inputs.
I'm tired of reading comments from people who keep repeating that LLMs don't think, don't reason, and aren't intelligent because they are not human. If your definition of those terms boils down to "not human", it's quite useless as a comment. We know LLMs aren't biological human brains. So what?
Define what reasoning is to you. Then tell us why LLMs don't reason and why it matters.
1. Reasoning is the ability to at least carry out proofs in FOL (first-order logic). FOL can simulate Turing machines.
2. LLMs are formally equivalent to only a subset of FOL.
Why is this important? To model human mathematics, you need at least first-order logic (a minimal example of an FOL proof step follows below).
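To make "carry out proofs in FOL" concrete, here is a minimal sketch in Lean (my own illustration; the predicates and names are invented): a universally quantified premise is instantiated and combined with modus ponens, which is exactly the kind of step a first-order proof requires.

    -- Premises: every Human is Mortal, and socrates is Human.
    -- Conclusion: socrates is Mortal.
    -- The proof instantiates the universal statement at `socrates`
    -- (∀-elimination) and applies it to `h2` (modus ponens).
    example (Person : Type) (Human Mortal : Person → Prop)
        (h1 : ∀ x, Human x → Mortal x)
        (socrates : Person) (h2 : Human socrates) : Mortal socrates :=
      h1 socrates h2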
These arguments have been around for decades, e.g., Penrose. I am tired of people bringing up strawman arguments ("Not intelligent because not human!").
"Turing complete as long as you only use a polynomial number of steps in the size of the input" is another way of saying "not Turing complete" (doubly so when you look at the actual polynomials they're using). In fact, that paper just reaffirms that adding context doesn't improve formal expressive power all that much. And this is not equivalent to "well, Turing Machines are fake anyway because of finite space"--I can pretty trivially write programs in Coq, a total and terminating language, that will take far longer than the age of the universe to normalize but can be described in very little input.
The type of 'reasoning' a cat uses is different from the 'reasoning' used in math.
The information a cat uses is incomplete whereas the information used in math and logic is theoretically all accessible.
The reasoning a cat uses allows for inconsistencies with its model because of its use of incomplete information whereas no inconsistencies are permissible in math and logic.
Formally speaking, math uses deductive reasoning whereas the cat uses abductive reasoning.
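To illustrate the deductive/abductive distinction, here is a toy sketch in Python (my own example; the rule set, priors, and function names are invented): deduction guarantees its conclusion, while abduction only picks the most plausible explanation and can be wrong under incomplete information.

    # Invented toy rules: both rain and a sprinkler cause wet grass.
    RULES = {"rain": "wet_grass", "sprinkler": "wet_grass"}

    def deduce(fact):
        """Deduction: if the premise holds, the conclusion is guaranteed."""
        return RULES.get(fact)

    def abduce(observation, priors):
        """Abduction: pick the most plausible cause that explains the
        observation, a best guess that incomplete information can break."""
        candidates = [c for c, effect in RULES.items() if effect == observation]
        return max(candidates, key=lambda c: priors.get(c, 0.0))

    print(deduce("rain"))                                        # wet_grass (certain)
    print(abduce("wet_grass", {"rain": 0.7, "sprinkler": 0.3}))  # rain (likely, not certain)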
They can reason, at times, and do so best when emotion is not involved.
> I'm tired of reading comments from people who keep repeating that LLMs don't think, don't reason, isn't intelligence because it is not human.
LLMs represent a category of algorithm. Quite elegant and useful in some circumstances, but an algorithm nonetheless. A quick search produced this[0] example discussing the same.
Another reference, which may or may not be authoritative depending on whatever the most recent edit of the link produces, is[1]:
    A large language model (LLM) is a computational model capable of language generation or other natural language processing tasks. As language models, LLMs acquire these abilities by learning statistical relationships from vast amounts of text during a self-supervised and semi-supervised training process.
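To make "learning statistical relationships from vast amounts of text" concrete, here is a toy count-based bigram predictor in Python (my own sketch with an invented corpus; real LLMs learn such statistics through billions of neural-network parameters rather than an explicit count table, but the self-supervised objective, predicting the next token of the text itself, is the same idea).

    from collections import Counter, defaultdict

    # Tiny invented corpus; the "labels" are simply the next tokens,
    # which is what makes the objective self-supervised.
    corpus = "the cat sat on the mat . the dog sat on the rug .".split()

    counts = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        counts[prev][nxt] += 1

    def predict(prev):
        """Return the most frequent next token after `prev`
        (ties go to the token seen first)."""
        return counts[prev].most_common(1)[0][0]

    print(predict("the"))  # cat
    print(predict("sat"))  # on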
> Define what reasoning is to you.
Reasoning was the process I went through to formulate this response, doing so with the intent to convey meaning as best I can and to understand, as well as I can, the message to which I am replying.
> Then tell us why LLMs don't reason and why it matters.
LLMs do not possess the ability to perform the process detailed above.
That's not a useful definition for judging whether LLMs reason or not. It's not something we can measure objectively, and it introduces another concept, intent, which is just as vague as reasoning.
Specifically, an LLM can produce a message similar to the one you posted. Everything else about that process is not defined well enough to distinguish you from it.
> > Reasoning was the process I went through to ...
> That's not a useful definition for judging whether LLMs reason or not.
I formulated this portion of my post specifically to induce contemplation as to what it is when we, as humans, reason. My hope was that this exercise could provide perspective regarding the difference between our existence and really cool, emergent, novel mathematical algorithms.
Ok, that's cool, now ask GPT to solve any programming or logic problem at all and maybe you'll start to understand why you can reason and why it can't.
It can solve some problems and not others. But that would again be a different definition of reasoning, not what the parent wrote. And it would exclude animals and humans from reasoning, because they can't solve all logic problems.
Artificial intelligence is still intelligence, even if it is just a shallow copy of human intelligence.
What irritates me when I see comments like yours is that precise knowledge of the weaknesses of LLMs is necessary to improve them, yet most of the people who claim LLMs already reason, or are already AGI, basically deny that there is anything left to improve, since the models are already perfect. Research into the limitations of the current generation of AI becomes unwanted, and by extension so does the next generation of AI.