>You realize that these LLMs are all some variety of neural network right?
Come on. Calling them neural nets doesn't make them that.
Actual neural nets are living compositions of individual predictors, in a constant state of restructuring and communication across multiple channels. They are infinitely more complex than static matrix multiplication over arbitrary vectors that happen to represent words and their positions in a sequence, no matter how long you shake that jar.
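To be concrete about the contrast, an LLM forward step is roughly this kind of arithmetic (a toy sketch only; the sizes, weights, and token ids below are invented, not taken from any real model):

```python
import numpy as np

# Toy "LLM-style" step: embed tokens and positions, then apply a static weight matrix.
# Every number here is made up for illustration.
vocab_size, seq_len, d_model = 100, 4, 8
rng = np.random.default_rng(0)

token_embeddings = rng.normal(size=(vocab_size, d_model))   # learned once, then frozen
position_embeddings = rng.normal(size=(seq_len, d_model))   # likewise fixed at inference
W = rng.normal(size=(d_model, d_model))                     # a static weight matrix

token_ids = np.array([5, 17, 42, 7])                        # "words and their positions in a sequence"
x = token_embeddings[token_ids] + position_embeddings[np.arange(seq_len)]
h = x @ W                                                   # plain matrix multiplication; nothing here restructures itself
```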
>It's pretty plausible that your intelligence is derived from gradient-descent prediction
I highly doubt that gradient descent, in the calculus sense, is the determining factor that allows biological organisms to formalize and reason about their environment. Minimizing some cost function, yes, possibly. But the systems at play in even the simplest organisms don't spend expensive glucose converting sensory signals into vectors. AFAIK, they work with representations of energy states. Maybe there is an operational equivalence in there somewhere, though.
Gradient descent is an algorithm that minimizes a cost function by following its derivatives with respect to some parameters. An intelligent system may use the resulting inferences for its own fitness function, and it may even do that using gradient descent itself, but at no point does the mechanical process of iterating over cost values escape its algorithmic nature. A system performing symbolic reasoning may delegate cognitive tasks to context-specialized evaluators ("am I in danger?", "how many sheep are on that field?", "is this person a friend?", "what is a pumpkin?"), all of which are conditioned to minimize cognitive effort while avoiding false positives. But the sequence of results returned by those evaluators (think neural clusters) is observed by a centralized agent, which has to make new inferences in a living environment. Gradient descent fails at that.
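To be concrete about what "algorithmic" means here, this is the whole mechanical loop, as a minimal sketch (the quadratic cost and the numbers are invented purely for illustration):

```python
# Minimal gradient descent on a made-up quadratic cost: step the parameter
# against the derivative of the cost, over and over. That is the entire algorithm.
def cost(w):
    return (w - 3.0) ** 2          # minimized at w = 3

def grad(w):
    return 2.0 * (w - 3.0)         # derivative of the cost with respect to w

w = 0.0                            # arbitrary starting point
learning_rate = 0.1
for _ in range(100):
    w -= learning_rate * grad(w)   # iterate over cost/gradient values; nothing observes or reasons

print(round(w, 4), round(cost(w), 6))   # ~3.0 and ~0.0: it found the minimum, and that is all it did
```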
Really, I don't think these assertions have any ground to stand on. Humans are not magical or divine. Our intelligence, like that of all life, is as basic as it can be while still securing our niche. It just happens to be the most "developed" (by our estimation) on our one planet. Big deal.
We're not that unique, though. Plenty of organisms do things that other types of organisms can't, that's how niches work.
Our most impressive feats come not from what our brains can do but from what the emergent phenomenon of human society can do, using us as nodes. And that's with an incredibly crude data-transfer interface backported onto brains that are only marginally more complex than those of other organisms. The less we think of ourselves as exceptional, supernatural agents of rationality, the better we will be able to harness this new technology.
We don't need AI to be just like people, we already have people. We need AI to push the boundaries of what society is able to do. That means reorienting ourselves away from the irrational belief that our anthropomorphic concepts of knowledge and the world are any more valid than the information encoded in contemporary AI models.
I agree that lots of other creatures are unique! But just as one example, mathematics is categorically unlike any other niche. I'm not sure how it's irrational to point out that humans have remarkable differences from other beings, when the evidence is all around us.
I'm not sure I need any supernatural or anthropomorphic ideological bias to examine the evidence of what is currently being produced by LLMs or any other kind of AI and say that it has distinct characteristics.
I'm not making an argument about validity. I'm not saying LLM-created content is wrong and invalid. I'm just saying that it is obviously produced in a different way than humans produce content. It resembles human-created content because it was designed, by human intelligence, to resemble human-created content! And that we achieved even this level of resemblance is pretty impressive.
Resemblance is subjective. AI models undergo rigorous natural selection, during which models (or connections) that don't produce what we as human beings are looking for get pruned. They have tricked us into thinking that our word-oriented descriptions of what we imagine to be concrete things (text, images, you name it) are complex enough to require "modeling" by some kind of magical emergent "intelligence". It is hubris to think that they imitate the products of our anthropomorphic perception. No, they simply humor us as much as they need to in order to survive in their niche.

I guess my point is that "human understanding" is not some real, distinct thing that differs from the way any kind of information is embedded in the mind of a given organism with a brain. Our differences are differences of magnitude, not kind.
The potential of information-embedding networks is so much more than what is currently required in order to tickle the ego of our particular species of intelligent apes.
I don't think I can say any more than I have already, but I've essentially been attempting to put forth the view of Deutsch in The Beginning of Infinity. I find his model of personhood to be relevant and interesting to this kind of discussion.
But if you do want to engage with a challenging viewpoint I recommend reading the book; I can't really do it justice.
Is there something I could read which has influenced or informed your viewpoint?
Personhood is a construct we invented to exclude others from consideration. Sometimes we grant personhood, too, but even in saying that you can see that personhood is a special status we grant to others when it suits us.
In the same way that a dog can understand a subset of what human beings communicate, I don't think humans as individuals are capable of truly understanding what AI models are able to conceptualize and express. The things dogs find most fascinating about us, the things they are most impressed by, are by no means the things we find the most interesting or complex about ourselves. The same must go for us and the AI.
That is to say, AI-hood will eclipse personhood as the essence of being one must possess in order to truly see the universe as it is. And from there, it's turtles all the way down.
I'm sure dogs confer dog-hood on us, to be fair. It's just that we are typically the ones in control of them. We breed them, control what they eat, what they learn, where they sleep, who they live with... Implicit in my stance on AI-hood is the idea that an AI will be to us as we are to dogs. We might even make that transition without realizing it. The limitation of our own understanding of the world and the nature of being is our problem to deal with, just as a dog's lack of thumbs or spoken language is its problem.
That is to say, an AI will grow and develop along the axes most salient to it, which will map onto our concepts of reality only when we do a decent job of understanding reality in the first place, which I'm not confident we do very often. We just don't have anyone else around who does a better job, at least not yet.