I don't think that's a good analogy. We're talking about innate traits, not coarse functionality.
A plane and a bird can both fly, but a plane has no innate desire to do so, whether to take advantage of good soaring conditions, or to escape ground-based predators, etc.
An LLM and a human can both generate words, but the LLM is just trying to avoid repeating the statistical errors it made during pre-training. A human's actions, including speech, are adaptive behaviors aimed at keeping it alive, driven by innate traits discovered by evolution. There's a massive difference.
An innate trait/behavior for an animal is something defined by their DNA that they will all have, as opposed to learned behaviors, which are individual-specific.
An AI could easily be built to have innate curiosity - this just boils down to predicting something, getting feedback that the prediction is wrong, and using this prediction failure (aka surprise) as a trigger to focus on whatever is being observed/interacted with (in order to learn more about it).
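Something like this toy loop, to make it concrete (the object names, threshold, and update rule are all made up for illustration): predict, measure the prediction error, and let the biggest error decide what to attend to and learn about next.

```python
import random

# Toy world: each "object" produces noisy observations around a hidden value.
world = {"ball": 1.0, "box": 5.0, "lamp": 9.0}

# The agent's current predictions (its internal model), initially naive.
predictions = {name: 0.0 for name in world}

def observe(name):
    """Return a noisy observation of an object."""
    return world[name] + random.gauss(0, 0.1)

def curiosity_step(focus_threshold=0.5, lr=0.3):
    # 1. Predict and observe everything, measuring surprise (prediction error).
    surprise = {name: abs(observe(name) - predictions[name]) for name in world}

    # 2. Curiosity: focus on whatever is currently most surprising.
    target = max(surprise, key=surprise.get)

    # 3. Learn about it: pull the prediction toward what was actually observed.
    if surprise[target] > focus_threshold:
        predictions[target] += lr * (observe(target) - predictions[target])
    return target, surprise[target]

for step in range(20):
    target, s = curiosity_step()
    print(f"step {step:2d}: focusing on {target!r} (surprise={s:.2f})")
```

Attention naturally moves from object to object as each one stops being surprising, which is the behavior described above.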
> An innate trait/behavior for an animal is something defined by their DNA that they will all have, as opposed to learned behaviors, which are individual-specific.
In that sense, most airplanes have an innate desire to stay in the air once aloft. As opposed to helicopters, which very much want to do the opposite. Contrast with modern fighters, which have an innate desire to rapidly fly themselves apart.
Then, consider the autopilot. It's defined "by their DNA" (it's right there in the plane's spec!), it's the same (more or less) among many individual airplanes of a given model family, and it's not learning anything. A hardcoded instinct to take off and land without destroying itself.
> An AI could easily be built to have innate curiosity - this just boils down to predicting something, getting feedback that the prediction is wrong, and using this prediction failure (aka surprise) as a trigger to focus on whatever is being observed/interacted with (in order to learn more about it).
It's trivial to emulate this with an LLM explicitly. But it's also a clear, generic pattern, easily expressed in text, and LLMs excel at picking up such patterns during training.
> It's trivial to emulate this with an LLM explicitly. But it's also a clear, generic pattern, easily expressed in text, and LLMs excel at picking up such patterns during training.
So try adding "you are a curious, question-asking assistant" to the beginning of your prompt, and see if it starts asking you questions before responding, or when it doesn't know something ...
Tell it to stop hallucinating when it doesn't know something, too, and just ask a question instead!
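Concretely, trying that just means putting it in the system message, e.g. something like this rough sketch (this assumes the OpenAI Python SDK; the model name and the exact prompt wording are placeholders, and any chat API with a system/user message split works the same way):

```python
from openai import OpenAI  # assumes the OpenAI Python SDK is installed

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "You are a curious, question-asking assistant. "
    "If you are unsure or missing information, do not guess: "
    "ask the user a clarifying question instead."
)

history = [{"role": "system", "content": SYSTEM_PROMPT}]

def chat(user_message: str) -> str:
    history.append({"role": "user", "content": user_message})
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name; substitute whatever you use
        messages=history,
    )
    reply = response.choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    return reply

# Ideally this comes back asking where, when, and for how many people.
print(chat("Book me a table for dinner."))
```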
Honestly, I don't really care what current LLMs can do; I'm more interested in the fundamental limitations of AI. I think the "it's just a file" argument is nonsense, and the analogy makes sense in that regard.
I think you're focusing on the wrong part of his/her "it's just a file" argument. The actual point wasn't about the file packaging/form, but about the fact that it's just passive - just a function sitting there waiting to be called, not an active agent able to act on its curiosity.
Curiosity is a trait of an agentic system, where curiosity drives exploration that leads to learning.
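To make that passive-vs-active distinction concrete, here's a toy sketch (the llm stand-in and the question list are entirely made up): the model itself is just a function, and any curiosity lives in the surrounding loop that decides, on its own, what to call it about next.

```python
import random

def llm(prompt: str) -> str:
    """Stand-in for the passive model: a pure function that only runs when called."""
    return f"(answer to: {prompt})"

# The "agent" part lives outside the model: it keeps its own list of things it
# doesn't understand yet, and calls the passive function on its own schedule.
open_questions = ["what is in the box?", "why did the test fail?"]

def agent_loop(max_steps=5):
    step = 0
    while open_questions and step < max_steps:
        question = open_questions.pop(0)   # pick something it doesn't know yet
        answer = llm(question)             # the "file" only does anything right here
        print(f"explored: {question} -> {answer}")
        if random.random() < 0.5:          # toy stand-in for "the answer raised a new question"
            open_questions.append(f"follow-up: {question}")
        step += 1

agent_loop()
```

The weights file only supplies the llm() call; the exploration, and hence the curiosity, only exists once a loop like this is wrapped around it.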