> elicit some kinds of reasoning
I know it's hard, but you have to choose here. Are they reasoning or are they not reasoning?
> next word prediction models
238478903 + 348934803809 = ?
Predict the next word. What process do you propose we use here? "Approximately" reason? That's one hell of a concept you've conjured up there. Very interesting. How does one "approximately" reason, and what makes it so that the approximation will forever fail to arrive at its desired destination?
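For what it's worth, the sum itself is a deterministic computation; a minimal sketch in Python, using nothing beyond the two numbers quoted above:

    # Exact integer addition: deterministic, carry by carry.
    a = 238478903
    b = 348934803809
    print(a + b)  # 349173282712

Whatever process emits "349173282712" as the next word has to get every carry right; there is no neighborhood of almost-correct sums to land in.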
> Whereas each automaton in the human world had its own distinct 'context' - the AI world will not have that at all. Context will be as fleeting as memory in RAM, and it will be across various systems that we use daily.
Human context is fleeting as well; time, dementia, and ultimately death can attest to that. Even in life, identity is complicated and multifaceted, with no singular "I". For all intents and purposes, we too are composed of massive numbers of loosely linked subsystems vaguely resembling some sort of unity. I agree with you on that one: general intelligence, IMO, probably requires some form of cooperation between disparate systems.
But you see some sort of fundamental difference here between "biology" and "tech" that I just cannot see. If RAM were implemented biologically, would it cease to be RAM? I fail to see what's so special about the biological substrate.
To be clear, I'm not saying LLMs are AGI, but I have a hard time dismissing the notion that some combination of systems - of which LLMs might be one - will result in something we just have to call generally intelligent. Biology just beat us to it, like it did with so many things.
> It's just tech, that's it.
The human version is: it's just biology, that's it. What's the purpose of stating that?