> incapable of any kind of reasoning

If this were true, the debate would be a hell of a lot easier. Unfortunately, it is not.




In fact, comments like the one you're responding to are the most effective way to respond to 'it hallucinates'.


There is no reasoning, which is why it will be impossible to move LLMs past certain kinds of tasks.

They are 'next word prediction models' which elicit some kinds of reasoning embedded in our language, but it's a crude approximation at best.

The AGI metaphors are Ayahuasca Kool-Aid, like a magician duped by his own magic trick.

There will be no AGI, especially because there will be no 'automaton', aka distinct entity, that elicits those behaviours.

Imagine if someone proposed 'Siri' were 'conscious' - well nobody would say that, because we know it's just a voice-based interface onto other things.

Well, Siri is about to appear much smarter thanks to LLMs, and be able to 'pass the bar exam' - but ultimately nothing has fundamentally changed.

Whereas each automaton in the human world has its own distinct 'context', the AI world will not have that at all. Context will be as fleeting as memory in RAM, and it will be spread across the various systems that we use daily.

It's just tech, that's it.


> There is no reasoning

> elicit some kinds of reasoning

I know it's hard, but you have to choose here. Are they reasoning or are they not reasoning?

> next word prediction models

238478903 + 348934803809 = ?

Predict the next word. What process do you propose we use here? "Approximately" reason? That's one hell of a concept you conjured up there. Very interesting one. How does one "approximately" reason, and what makes it so that the approximation will forever fail to arrive at its desired destination?
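
To make that concrete, here is a minimal Python sketch of grade-school addition (the function name is mine, purely illustrative): the only way to reliably emit the right token after the "=" is to actually execute a carry procedure like this one, not to recall a memorized string.

    # Grade-school addition: walk the digits right to left, carrying.
    def add_digit_by_digit(a: str, b: str) -> str:
        width = max(len(a), len(b))
        a, b = a.zfill(width), b.zfill(width)  # pad to equal length
        carry, digits = 0, []
        for da, db in zip(reversed(a), reversed(b)):
            total = int(da) + int(db) + carry
            digits.append(str(total % 10))
            carry = total // 10
        if carry:
            digits.append(str(carry))
        return "".join(reversed(digits))

    print(add_digit_by_digit("238478903", "348934803809"))  # 349173282712

There are roughly 10^21 such nine-by-twelve-digit pairs, so no lookup table covers them; a model that reliably answers these sums must be implementing some version of this procedure internally. Whether you want to call that "reasoning" is exactly the question.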

> Whereas each automaton in the human world has its own distinct 'context', the AI world will not have that at all. Context will be as fleeting as memory in RAM, and it will be spread across the various systems that we use daily.

Human context is fleeting as well. Time, dementia and ultimately death can attest to that. Even in life, identity is complicated and multifaceted, without a singular 'I'. For all intents and purposes, we too are composed of massive amounts of loosely linked subsystems vaguely resembling some sort of unity. I agree with you on that one. General intelligence IMO probably requires some form of cooperation between disparate systems.

But you see some sort of fundamental difference here between "biology" and "tech" that I just can't. If RAM were implemented biologically, would it cease to be RAM? I fail to see what's so special about the biological substrate.

To be clear, I'm not saying LLMs are AGI, but I have a hard time dismissing the notion that some combination of systems - of which LLMs might be one - will result in something we just have to call generally intelligent. Biology just beat us to it, like it did with so many things.

> It's just tech, that's it.

The human version is: it's just biology, that's it. What's the purpose of stating that?



