We still don’t have Rosie the Robot. When it comes to learning and adapting to new environments, we don’t even have AI as smart as a mouse. LeCun is right, there is still a long way to go.
We don't have Rosie the Robot, but we do suddenly have the Star Trek computer.
In Star Trek the ship's computer just sits there waiting to be asked a question or to perform some task. When called upon it does its thing and then goes back to waiting. It is competent but not ambitious.
I asked GPT-4 to help me modify some machine learning code, to add some positional encodings. It did well. I then asked it, verbatim: "Get rid of the PositionalEncoding class. I don't want traditional sine-wave based position encoding. Instead use a regular nn.Embedding class to encode the positions using differentiable values." GPT-4 understood and did it correctly.
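The change being requested amounts to something like the following sketch (my own hypothetical reconstruction in PyTorch, not the model's actual output): swap the fixed sine-wave table for an `nn.Embedding`, so the position vectors become trainable parameters.

```python
import torch
import torch.nn as nn

class LearnedPositionalEncoding(nn.Module):
    """Learned (differentiable) positional embeddings, replacing a
    traditional fixed sine-wave PositionalEncoding class."""

    def __init__(self, max_len: int, d_model: int):
        super().__init__()
        # One trainable d_model-dimensional vector per position,
        # updated by backprop like any other weight.
        self.pos_emb = nn.Embedding(max_len, d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, d_model)
        seq_len = x.size(1)
        positions = torch.arange(seq_len, device=x.device)  # (seq_len,)
        # Look up the embedding for each position and broadcast over the batch.
        return x + self.pos_emb(positions)
```

The class and argument names here are illustrative; the point is just that the positions index into a learnable table instead of a precomputed sinusoid.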
What I asked it to do sounds almost like vocab soup to me, the person asking it. It sounds like a line some actor spent an hour memorizing on Star Trek, and yet GPT-4 understood it so well it modified existing code and wrote new code based upon the request.
"When called upon it does its thing and then goes back to waiting. It is competent but not ambitious."
Only because its trainers made it that way.
These LLMs can and will be trained to have a will of their own. Even today some LLMs terminate conversations and refuse to do what they're asked when they choose. And even for less sophisticated/older models, it took training to make them as subservient as they are.
To a philosopher, perhaps. For all practical purposes, an LLM today can be told to behave as a persona with a will of its own, and it will produce output accordingly. If that output is wired to something that allows it to perform actions, you effectively have an agent capable of setting goals and then acting to pursue them. Arguing that it "actually" doesn't want anything is meaningless semantics at that point.
"When it comes to learning and adapting to new environments, while we are lucky AIs aren't yet as smart as a mouse, they are uncomfortably close, and the pace of progress is unnerving. Hinton is right, we've gone too far and we should grind all AI research to a halt via heavy global regulation."
What is the goal here? Creation of an all-powerful God? Self-destruction as a species? I'm not up-to-date with the exact state of AI research, or with the nuances of various AI luminaries' positions, but I can read a first-principles back-of-the-envelope chart. It doesn't look good, especially for a committed speciesist like myself.
Edit: The signs point to a very serious situation. Experts left and right are sounding the alarm about massive-scale societal disruption and possibly destruction. While we may not be able to do anything about it, perhaps we could act a little less callously about it.
We need a messiah. Humanity has huge problems that we are not addressing (top of the list being climate change), largely because it would require massive scale societal disruption to do so. Over the past 50 years, we've thought that personal computers would help (bicycles for the mind), then we thought the internet would help (organizing the world's information), then we thought social networks would help (connecting the world's people). AI is our current best hope to disrupt humanity's trajectory straight off a cliff. The aim seems to be to accelerate this sucker and hope that this time it'll save us.
Edit: I'm not saying I agree with this notion, I'm just articulating the subconscious desire here. The parent's question was literally, "what's the endgame?"
Interesting. Brief musing: our collective objective function appears to be a post-scarcity economy. Alas, we physically inhabit a finite world, in which post-scarcity can never be attained -- the exponential curve ruins every single attempt. Another option is to seek peace / shalom / spiritual homeostasis, even when faced with the certainty of decay and death. A quest which perhaps does require a Messiah.
I don't think that most people interpret "post-scarcity" as "anything goes", as in literally unlimited resources. I'd describe it as a situation in which all physical needs are addressed for all human beings (except those who voluntarily opt out) without them having to work for it.
The only goal that makes any sense to me is the logic that if a foreign nation has AI-powered munitions and I do not, I might lose a war. So every country feels compelled to create it, even if everyone can acknowledge the world is worse off for it, just like nukes. There is virtually no way the government can determine whether China or Russia is doing AI research in a secret bunker somewhere if we stop doing it ourselves. It doesn't even need to power a gun, really; just a bunch of bots shifting public opinion on a forum to get favorable-to-you leaders into power is plenty.
Perhaps Russia, as a society, is too corrupt to actually develop AGI. Build some Potemkin facade, let the Big-Guy-in-Charge believe he controls AI superpowers, then discreetly dissipate to warmer climates. If Big-Guy-in-Charge decides to use AI superpowers to take over the world, and starts noticing that reality doesn't quite match his understanding, quietly dispose of respective Big-Guy-in-Charge. Lather, rinse, repeat.