It's very Hackernews to throw LLMs in there, but I agree. LLMs don't have experience, though. They have training data and probabilistic output.
Designing things has two goals:
- Make old things seem new
- Make new things seem old and familiar
Both need a lot of knowledge about how humans work and how we have made sense of the world up until now. Design can't be done in a vacuum, without input.
Edit: To expand: An LLM would never have come up with touch input. It would have regurgitated the existing ideas of using a pen or a mouse to point at things on a screen. Coming up with touch input was a huge feat of human engineering, a combination of design (making touch input obvious for any human, old or young) and engineering (making that interaction actually work).
They might not have had experience 2 years ago, but in the meantime they have assisted 100s of millions of people in many billions of tasks. Many of those are experiences you can't find in a book. They contain on-topic feedback and even real-world outcomes of LLM ideas. Deployed LLMs create experiences, get exposed to things outside their training distribution, search the solution space and discover things. As with AlphaZero, I think search and real-world interaction are the key ingredients. For AZ the world was just a game board with an opponent, but one rich enough to discover novel strategies in.
This sounds like an ad. What is "assisted 100s of millions of people in many billions of tasks"? Any real world data? If it's generating new random clip art for presentations, sure. If it's making new flavor text based on generic input, sure.
If my question is "what is the circumference of the Earth" and I run a model with a temperature of 100, will it give me a good result? Will it always give me a good result at a temperature of 0? I don't think so. It's a huge probabilistic model, not an oracle. It can be useful for fuzzy tasks, sure, but not for being smart. You might think it's clever because it generated code for you, but that's probably because you asked it to make something 500 people have already made and published on GitHub.
Edit: Just to clarify, I don't want to step on people's toes. I just feel like we're at the top of a new dotcom/crypto/NFT hype boom. I've seen it so many times since the beginning of the 2000s. Don't go blind on the technology; research what it actually is. An LLM is a "next word weighted dice toss machine".
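To make the "weighted dice toss" concrete, here's a toy sketch of temperature sampling (made-up four-token vocabulary and logits, not a real model), the mechanism behind both my examples above:

```python
import numpy as np

def sample_next_token(logits, temperature, rng=np.random.default_rng(0)):
    """Pick a 'next word' index from unnormalized scores (logits)."""
    if temperature == 0:
        return int(np.argmax(logits))  # always the single most likely token
    scaled = (logits - logits.max()) / temperature  # subtract max for stability
    probs = np.exp(scaled)
    probs /= probs.sum()  # softmax: turns scores into a weighted die
    return int(rng.choice(len(logits), p=probs))

# Toy distribution for the next word after
# "What is the circumference of the Earth? It is ..."
vocab = ["40,075 km", "a circle", "banana", "unknown"]
logits = np.array([5.0, 2.0, -1.0, 0.5])

for t in [0, 1.0, 100.0]:
    picks = [vocab[sample_next_token(logits, t)] for _ in range(1000)]
    counts = {w: picks.count(w) for w in vocab}
    print(f"temperature {t}: {counts}")
```

At temperature 0 it answers "40,075 km" every time; at 100 the scaled logits are nearly uniform and "banana" comes up almost as often as the right answer. There's no oracle in there, just a die whose weights you can flatten.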
> You might think it's clever because it's generated code for you, but that's probably because you asked it to make something 500 people already made and published on GitHub.
An LLM has no problem coming up with novel solutions within my own esoteric physics framework that only exists on my computer, using my patterns and taking all the nuances into account. It's definitely not just spitting out something it has seen before.
I think the point being made is that there is very little room for creativity there... There are tons of examples of physics engines written in a multitude of languages, with a full range of implementation quality.
Now, if the LLM had known to look for and found dark matter or gravitational waves, all while sitting there on your computer comparing micro-changes between CPU cycles, maybe you would have a point. To my knowledge, most physics engines don't even emulate Newtonian physics, never mind more modern variants.
I'm not talking about the physics calculations. I'm talking about it navigating, adapting and coding using my own patterns, coding style and structure within a context that is completely custom. It understands the framework I've built and what functions to use when, and writes code that looks and works as if it were my own.
> It understands the framework I've built and what functions to use when, and writes code that looks and works as if it were my own.
Yes, that's what LLMs do. They build statistical models based on their training data and context, and then produce the statistically most likely output.
There's nothing creative about it; it's all statistics based on the inputs. At best they can extrapolate, but they cannot fundamentally move outside their inputs.
> What is "assisted 100s of millions of people in many billions of tasks"?
Let's assume 180M users for ChatGPT, each using the model just 5 times a month. That gets you roughly the 1B tasks. If each task uses up about 1,000 tokens, you have the trillion interactive tokens. It's all based on public data and guesstimation.
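Spelled out (every number here is a guess, not a measurement):

```python
users = 180_000_000       # assumed ChatGPT user count
tasks_per_user_month = 5  # deliberately low-ball usage
tokens_per_task = 1_000   # rough average per interaction

tasks_per_month = users * tasks_per_user_month        # 900,000,000 (~1B)
tokens_per_month = tasks_per_month * tokens_per_task  # 900,000,000,000 (~1T)
print(f"{tasks_per_month:,} tasks, {tokens_per_month:,} tokens per month")
```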
And to expand on myself: "experience" means something very specific for humans. It means you have done something for a long time, you have failed at it, and you have learned from those failures. By that definition, LLMs don't have any experience at all. They are trained into a fresh "brain", and then they make the same mistakes over and over and over, until they either get a new prompt that corrects them or are trained from scratch all over again.
Sounds like the LLM facilitated a human gaining experience, by making mistakes for the human and then correcting those mistakes, likely in an incorrect way as well. LLMs are effectively very, very bad teachers.
An LLM given the same inputs tomorrow is likely to return similar responses. If a human did that, they would likely be considered to have some sort of medical condition...
I don't know if I buy that an AI wouldn't have been able to come up with the touchscreen. It knows people touch screens with pens, and it knows people point at and touch things that aren't screens. It could put those ideas together; that's how people came up with it.
It's not an AI; it's a word probability model trained on tons of text. It's not smart at all. The only reason it might have "figured that out" is that the idea was already present in sci-fi texts from the '60s. The other reason would be that you increased the temperature to something like 100 and then thought you saw genius in the hallucinations, among the otherwise unreadable text.