Hacker News

> current LLMs can only summarize, digest, and restate. There is no non-transient value add.

No, you're wrong. LLMs create new experiences after deployment, either by assisting humans or by solving tasks they can validate, such as code or game play. In fact, any deployed LLM is embedded in a larger system (a chat room, a code-execution environment, a game, a simulation, a robot, or a company) and can learn from iterative tasks, because each successive iteration carries some form of real-world feedback.

Besides that, LLMs trivially learn new concepts and even new skills from a short explanation or demonstration; they can be pulled out of their training distribution and collect experience doing new things. If OpenAI has 100M users and they each consume 10K tokens/month, that makes for 1 trillion tokens per month of human-AI interaction rich with new experiences and feedback.
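The trillion-token figure is simple back-of-envelope arithmetic; a quick sketch (the user and per-user token counts are the assumptions stated above, not measured data):

```python
# Back-of-envelope check of the interaction-volume estimate.
# Both inputs are assumptions from the comment, not measured figures.
users = 100_000_000              # assumed 100M users
tokens_per_user_month = 10_000   # assumed 10K tokens per user per month

total_tokens_per_month = users * tokens_per_user_month
print(f"{total_tokens_per_month:.0e} tokens/month")  # prints "1e+12 tokens/month"
```

One trillion (10^12) tokens per month, under those assumptions.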

In the text modality, LLMs have already consumed most of the high-quality human text; that is why all SOTA models are roughly on par: they trained on the same data. The easy phase is over, and AI has caught up with all human language data. From now on, AI models need to create experiences of their own, because learning from your own mistakes is much faster. The more they get used, the more feedback and new information they collect. The environment is the teacher; not everything is written in books.

And all that text, the trillions of tokens they are going to speak to us, in turn contributes to scientific discoveries and progress, and percolates back into the next training set. LLMs have a massive impact on people at the level of language, and by extension on the physical world and culture. They have already influenced language and the arts.

LLMs can create new experiences, learn new skills, and have a significant impact through widespread deployment and interaction. There is "value add" if you look at the bigger picture.




https://twitter.com/itsandrewgao/status/1689634145717379074?...

Yeah this is really having a positive impact on scientific discovery.


Your link only shows what unscrupulous people would do.

Here is an LLM with search doing competitive-level coding:

https://deepmind.google/discover/blog/competitive-programmin...

and in general, applying evolutionary methods on top:

https://scholar.google.com/scholar?cites=1264402983985539857...

The explanation is simple: either learn from past experience, which for now is human text, or learn from present-time experience, which comes from the environment. The environment is where LLMs can do novel things.



