Generally speaking, anyone can learn to use any tool. This isn't true of generative AI systems, which can only learn through specialized training on meticulously curated data sets.
People who are physically unable to use a tool can't learn to use it. This isn't necessarily my view, but one could make a pretty easy argument that the LLMs we have today can't drive a car only because they aren't physically able to control the car.
> but one could make a pretty easy argument that the LLMs we have today can't drive a car only because they aren't physically able to control the car.
Of course they can. We already have computer-controlled car systems; the reason LLMs aren't used to drive them is that AI systems specialized in text are a poor choice for driving: specialized driving models will always outperform them for a variety of technical reasons.
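To give a feel for one of those technical reasons, here's a toy sketch of the real-time constraint. The latency numbers are illustrative assumptions, not measurements, but the orders of magnitude are roughly right: control loops run at tens to hundreds of hertz, while large text models take hundreds of milliseconds per response.

```python
# Toy sketch of one technical reason: real-time control budgets.
# All latencies below are illustrative assumptions, not measurements.
CONTROL_TICK_S = 0.010         # assume a ~100 Hz control loop (10 ms per tick)
LLM_LATENCY_S = 0.500          # assumed per-query latency of a large text model
DRIVING_NET_LATENCY_S = 0.002  # assumed latency of a small specialized driving net

def ticks_missed(inference_latency_s: float) -> int:
    """Control ticks that elapse while a single inference runs."""
    return round(inference_latency_s / CONTROL_TICK_S)

print(f"Text model: misses ~{ticks_missed(LLM_LATENCY_S)} ticks per decision")
print(f"Driving net: misses ~{ticks_missed(DRIVING_NET_LATENCY_S)} ticks per decision")
```

A model that thinks in half-second turns blows through dozens of control ticks per decision; a small specialized net fits inside the budget. That gap is about architecture, not access.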
We have computer-controlled automobiles, not LLM-controlled automobiles.
That was my whole point. Maybe in theory an LLM could learn to drive a car, but it can't today because it doesn't physically have access to a car it could try to drive, just like a person can't learn to use a tool they're physically unable to use.
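To make "access" concrete, here's a purely hypothetical sketch of what wiring an LLM to a car's controls might look like. `query_llm` and the command format are made up for illustration; nothing like this exists as a real API.

```python
import json

def query_llm(prompt: str) -> str:
    # Stand-in for a real model call; hypothetical, not a real endpoint.
    raise NotImplementedError("hypothetical model endpoint")

def control_step(sensors: dict) -> dict:
    # Serialize sensor state to text, ask the model for actuator commands,
    # and parse its reply. Every name and format here is illustrative.
    prompt = (
        "You are driving a car. Current sensor readings:\n"
        + json.dumps(sensors)
        + '\nReply with JSON only: {"steering": float, "throttle": float}'
    )
    return json.loads(query_llm(prompt))
```

The point is just that "access" would mean an interface like this, and today no such interface is hooked up to a real car.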
This isn't true. A curated data set can greatly increase learning efficiency in some cases, but it's not strictly necessary and represents only a fraction of how people learn. Additionally, all curated data sets were created by humans in the first place, something language models could never achieve unless we programmed them to do so.
I think we need to separate the thinking part of intelligence from tool usage. Not everyone can use every tool at a high level of expertise.