Hacker News

> LLMs absolutely 100% can reason, if we take the dictionary definition; it’s trivial to show their ability to answer non-memorized questions, and the only way to do that is some sort of reasoning.

Um... What? That is a huge leap to make.

'Reasoning' is a specific type of thought process, and humans regularly make complicated decisions without doing it. We use hunches and intuition and gut feelings. We make all kinds of snap assessments that we don't have time to reason through. As such, answering novel questions doesn't necessarily show a system is capable of reasoning.

I see absolutely nothing resembling an argument for humans having an "ineffable calculator soul"; I think that might be you projecting. There is no 'categorical prohibition', only an analysis of the current flaws of specific models.

Personally, my skepticism about imminent AGI has to do with believing we may be underestimating the complexity of the software running on our brains. We've reached the point where we can create digital "brains", or at least portions of them. We may be missing some other pieces of a digital brain, or we may just not have the right software to run on it yet. I suspect it is both, but that we'll have fully functional digital brains well before we figure out the software to run on them.




All well said, and I agree on many of your final points! But you beautifully highlighted my issue at the top:

  'Reasoning' is a specific type of thought process 
If so, what exactly is it? I don’t need a universally justified definition, I’m just looking for an objective, scientific one. A definition that would help us say for sure that a particular cognition is or isn’t a product of reason.

I personally have lots of thoughts on the topic and look to Kant and Hegel for their definitions of reason as the final faculty of human cognition (after sensibility, understanding, and judgement), and I even think there’s good reason (heh) to think that LLMs are not a great tool for that on their own. But my point is that none of the LLM critics have a definition anywhere close to that level of specificity.

Usually, “reason” is used to mean “good cognition”, so “LLMs can’t reason” is just a variety of cope/setting up new goalposts. We all know LLMs aren’t flawless or infinite in their capabilities, but I just don’t find this kind of critique specific enough to have any sort of scientific validity. IMHO


> I don’t need a universally justified definition, I’m just looking for an objective, scientific one. A definition that would help us say for sure that a particular cognition is or isn’t a product of reason.

Unfortunately, you won't get one. We simply don't know enough about cognition to create rigorous definitions of the type you are looking for.

Instead, this paper, and the community in general, are trying to perform practical capability assessments. The claim that GSM8K measures "mathematical reasoning" or "logical reasoning" didn't come from the skeptics.

Alan Turing didn't try to define intelligence; he created a practical test that he thought would be a good benchmark. These days we believe we have better ones.

> I just don’t find this kind of critique specific enough to have any sort of scientific validity. IMHO

"Good cognition" seems like a dismissal of a definition, but this is exactly the definition that the people working on this care about. They are not philosophers; they are engineers who are trying to make a system "better", so "good cognition" is exactly what they want.

The paper digs into finding out more about what types of changes impact performance on established metrics. The "NoOp" result is pretty interesting, since "relevancy detection" isn't something we commonly think of as key to "good cognition", but rather as a consequence of it.
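To make that "NoOp" idea concrete: the perturbation amounts to appending a clause that mentions a number but has no bearing on the answer, then checking whether the model's answer changes. Here's a rough sketch with a made-up example (the helper function and the problem text are illustrative assumptions, not the paper's actual templates):

```python
# Rough sketch of a "NoOp"-style perturbation (illustrative only; the
# real GSM-NoOp templates and examples come from the paper itself).

def add_noop_clause(problem: str, clause: str) -> str:
    """Insert an irrelevant clause just before the final question sentence."""
    head, sep, question = problem.rpartition(". ")
    if not sep:
        # No sentence boundary found; fall back to prepending the clause.
        return f"{clause} {problem}"
    return f"{head}. {clause} {question}"

base = ("A farmer picks 44 kiwis on Friday and 58 kiwis on Saturday. "
        "How many kiwis does the farmer have?")
print(add_noop_clause(base, "Five of the kiwis are a bit smaller than average."))
```

The correct answer (44 + 58 = 102) is unchanged by the extra clause; the reported finding is that clauses like this often drag the irrelevant number into the model's arithmetic, which is why relevancy detection turns out to matter.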


I feel you are putting too much emphasis on the importance and primacy of having a definition of words like 'reasoning'.

As humanity has struggled to understand the world, it has frequently given names to concepts that seem to matter, well before it is capable of explaining with any sort of precision what these things are, and what makes them matter - take the word 'energy', for example.

It seems clear to me that one must have these vague concepts before one can begin to understand them, and also that it would be bizarre not to give them a name at that point - and so, at that point, we have a word without a locked-down definition. To insist that we should have the definition locked down before we begin to investigate the phenomenon or concept is precisely the wrong way to go about understanding it: we refine and rewrite the definitions as a consequence of what our investigations have discovered. Again, 'energy' provides a useful case study for how this happens.

A third point about the word 'energy' is that it has become well-defined within physics, and yet retains much of its original vagueness in everyday usage, where, in addition, it is often used metaphorically. This is not a problem, except when someone makes the lexicographical fallacy of thinking that one can freely substitute the physics definition into everyday speech (or vice-versa) without changing the meaning.

With many concepts about the mental, including 'reasoning', we are still in the learning-and-writing-the-definition stage. For example, let's take the definition you bring up: reasoning as good cognition. This just moves us on to the questions of what 'cognition' means, and what distinguishes good cognition from bad cognition (for example, is a valid logical argument predicated on what turns out to be a false assumption an example of reasoning-as-good-cognition?) We are not going to settle the matter by leafing through a dictionary, any more than Pedro Carolino could write a phrase book just from a Portuguese-English dictionary (and you are probably aware that looking up definitions-of-definitions recursively in a dictionary often ends up in a loop.)

A lot of people want to jump the gun on this, and say definitively either that LLMs have achieved reasoning (or general intelligence or a theory of mind or even consciousness, for that matter) or that they have not (or cannot.) What we should be doing, IMHO, is to put aside these questions until we have learned enough to say more precisely what these terms denote, by studying humans, other animals, and what I consider to be the surprising effectiveness of LLMs - and that is what the interviewee in the article we are nominally discussing here is doing.

You entered this thread by saying (about the paper underlying an article in Ars Technica [1]): "I’ll pop in with a friendly 'that research is definitely wrong'. If they want to prove that LLMs can’t reason...", but I do not think there is anything like that claim in the paper itself (one should not simply trust what some person on HN says about a paper. That, of course, goes as much for what I say about it as what the original poster said.) To me, this looks like the sort of careful, specific and objective work that will lead us to a better understanding of our concepts of the mental.

[1] https://arxiv.org/pdf/2410.05229


This is one of my favorite comments I've ever read on HN.

The first three paragraphs you wrote succinctly summarize what I see as a fundamental flaw of our modern science: that it can't make leaps at all.

There is no leap of faith in science, but there is science that requires such leaps.

We are stuck because those most capable of comprehending concepts that are not yet understood or explainable won't allow themselves to even develop a vague understanding of such concepts. The scientific method is their trusty hammer, and their faith in it renders everything that isn't a nail unscientific.

Admitting that they don't know enough would amount to surrendering their current position as the deciders of what is or isn't true, so I don't expect them to withhold their conclusions until they are better able to draw them.

They are the "priest class" now ;)

I agree with your humble opinion - there is much more we could learn if that were our intent, and considering the potential here, I think we absolutely ought to do everything in our power to attain the best possible outcomes from these current and future developments.

Transparent and honest collaboration for the betterment of humanity is the only right path to an AGI god - to oversimplify a little.

Very astute, well-formulated position, presented in accessible language, and with humility even!

Well done.



