
> at least a simulacrum that is superficially indistinguishable from metacognition

That's where we disagree I guess. I can very much distinguish between a human reflecting on e.g. how they don't know something, and an AI blindly saying "oh you're right I was mistaken, <profuse apology>, it was actually <additional bullshit>".

Reasoning models didn't really improve on that much, IMO. A stochastic representation of metacognition is no more metacognition than a stochastic representation of an answer is an answer, i.e. it isn't. LLMs are just very good at giving the impression of metacognition, the same way they're good at giving the impression of an answer.

It might help bias the answers toward different local minima--partly because it resembles the way people externalize metacognition, and partly because it dumps more information into the context instead of committing to the first bullshit it chose statistically--but it's still nowhere near the higher-order thinking and other metacognitive phenomena that humans are capable of.




> an AI blindly saying "oh you're right I was mistaken, <profuse apology>, it was actually <additional bullshit>".

I've seen that in humans too. For example, after grading an exam, a student may come and explain why they made a mistake, what they actually intended to do, and why we should increase the grade. Most of the time the new explanation is as bad as the original one.


Yes, humans can bullshit too. Is that the standard to which we want to hold our AI?

The question is not whether humans bullshit like LLMs; it's whether AI can think like humans.


I don't think the question is whether AI can think like humans. The question is whether AI can perform tasks like humans. We don't even know how humans think. Even asking whether AI can think like humans is, at this point, pretty nonsensical.


Right, to express it another way: The real-world LLM just makes documents longer. It is being run against a document which resembles a movie script, where "User says X, Computer says" is inserted whenever I type X, and then the LLM just makes it a bit longer to complete the line for the "Computer" character.

These models marketed as "reasoning" are just changing the style of the script to film noir detective, where the protagonist "Computer" has extra observations and commentary that aren't "spoken" to other characters.

While that change may help keep the story on track, it does not affect the fundamental nature of the "thinking" going on: It's still just growing a document and acting out whatever story emerges.
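To make that concrete, here's a minimal sketch of the loop (Python; the complete() stub stands in for any text-completion model and is purely illustrative, not any real API):

    # A minimal sketch of "the LLM just makes the document longer".
    transcript = "A conversation between User and Computer.\n"

    def complete(text: str) -> str:
        # Stand-in for any text-completion model: given the document so far,
        # return a continuation. Canned output here just so the sketch runs.
        return " I have made the document a bit longer.\nUser:"

    def chat_turn(user_input: str) -> str:
        global transcript
        # Insert "User says X, Computer says" into the script ...
        transcript += f"User: {user_input}\nComputer:"
        # ... and let the model grow the document from there.
        continuation = complete(transcript)
        # Keep only the "Computer" character's line; cut at the next "User:" cue.
        reply = continuation.split("\nUser:")[0].strip()
        transcript += " " + reply + "\n"
        return reply

    print(chat_turn("Hello?"))  # -> I have made the document a bit longer.

    # A "reasoning" model is the same loop, except the cue is something like
    # "Computer (thinking):" and those lines are never shown to the user.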


That’s a very good analogy, which works astonishingly well in giving people a reasonable expectation.

The question I’m having, though, is: how different are we from that? The nature of a Markov chain is that you can describe just about anything with it.

The following description is correct, isn’t it? Our “thoughts” are generated as some function of our integrated past sensory input (scare quotes because I don’t want to talk about what exactly a thought is).
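Put in (deliberately vague) notation, that claim is just a generic state-update rule; the whole point is that f and g are left unspecified, which is why it can describe almost anything. A sketch, not a model of a brain:

    # The "thoughts as some function of integrated past input" claim, written
    # as a generic update rule. f and g are placeholders, nothing more.
    def step(state, sensory_input, f, g):
        state = f(state, sensory_input)   # fold the new input into the state
        thought = g(state)                # a "thought" is some readout of that state
        return state, thought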


> The question I’m having, though, is: how different are we from that?

It's obvious we are different, but if we could answer exactly how with sufficient rigor, we would already have better AI and be asking different questions. :p

I readily admit that LLMs are an exciting potential piece of a much bigger puzzle, but this could easily be like trying to parse HTML using only regular expressions: it works great on trivial input, but no amount of minor tweaking would let it truly solve the problem, because it lacks some higher level of organization or meaning.
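For anyone who hasn't run into the regex-vs-HTML thing before, here's a toy illustration in Python of "works great on trivial input": the pattern is deliberately naive, and it falls over as soon as tags nest.

    import re

    # A naive "parser": grab whatever sits between <div> and </div>.
    tag = re.compile(r"<div>(.*?)</div>")

    print(tag.findall("<div>hello</div>"))           # ['hello'] -- flat input, fine
    print(tag.findall("<div>a<div>b</div>c</div>"))  # ['a<div>b'] -- nesting breaks it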


I too thought that it was obvious. But then I spent a lot of time in neuroscience discussions… and… it got less obvious.


I feel it’s very rare that people openly and consciously reflect on what they do or do not know. And even then, I find it questionable whether we are really capable of identifying the difference reliably. However, most of the time I feel we just make it up - especially in non-trivial areas. Just think of the average business meeting.



