There was a much more elaborate standard called P3P, recommended by the W3C in 2002. It apparently defined a way for businesses to describe how they use personal data.
But apparently it was considered too complex and "lacking enforcement".
Now maybe if it had survived until GDPR it could have gotten its enforcement, but Mozilla yanked support before that...
Well, it doesn't make sense to use this exact model - this is just a demonstration that a world model can be learned from pixels.
An obvious next step towards a more playable game is to add a state vector to the inputs of the model: it is easier to learn to render the world from pixels + state vectors than from pixels alone.
Then it depends on what we want to do. If we want normal Counter-Strike gameplay but with new graphics, we can keep the existing CS game server and train only the rendering part.
If you want to make Dream-Counter-Strike, where the rules are more bendable, then you might want to train a state-update model as well...
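To make the "keep the game server, learn only the rendering" idea concrete, here is a minimal PyTorch sketch. Everything in it is an assumption for illustration (the `NeuralRenderer` name, the 64x64 resolution, the `state_dim=32` state layout), not the architecture used in the actual demo: the point is just that the renderer is conditioned on recent frames *and* the server's state vector.

```python
# Hypothetical sketch: the real CS server keeps running the game logic, and a neural
# renderer predicts the next frame from recent frames plus the server's state vector
# (player position, view angles, health, ...). Shapes and state layout are made up.
import torch
import torch.nn as nn

class NeuralRenderer(nn.Module):
    def __init__(self, n_frames=4, state_dim=32):
        super().__init__()
        # Encode the last few RGB frames (64x64 here for brevity).
        self.frame_enc = nn.Sequential(
            nn.Conv2d(3 * n_frames, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(128, 256, 4, stride=2, padding=1), nn.ReLU(),
        )
        # Encode the game-state vector supplied by the server.
        self.state_enc = nn.Sequential(nn.Linear(state_dim, 256), nn.ReLU())
        # Decode the fused representation back into the next frame.
        self.dec = nn.Sequential(
            nn.ConvTranspose2d(256 + 256, 128, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, frames, state):
        # frames: (B, n_frames*3, 64, 64), state: (B, state_dim)
        f = self.frame_enc(frames)                   # (B, 256, 8, 8)
        s = self.state_enc(state)                    # (B, 256)
        s = s[:, :, None, None].expand(-1, -1, f.shape[2], f.shape[3])
        return self.dec(torch.cat([f, s], dim=1))    # next frame, (B, 3, 64, 64)

# Usage: predict the next frame from 4 past frames and the current server state.
frames = torch.rand(1, 12, 64, 64)
state = torch.rand(1, 32)
next_frame = NeuralRenderer()(frames, state)
```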
It's very illustrative to look into the history of the discovery of the laws of motion, as it's quite well documented.
People have an intuitive understanding of motion - we see it literally every day, we throw objects, etc.
And yet it took literally thousands of years after the discovery of mathematics (geometry, etc.) to formulate the concepts of force, momentum, etc.
Ancient Greek mathematicians could do integration, so they were not lacking mathematical sophistication. And yet their understanding of motion was very primitive.
People started to understand the conservation of the quantity of motion only in the 17th century.
So we have two possibilities:
* everyone until 17th century was dumb af (despite being able to do quite impressive calculations)
* scientific discovery is really a heuristic-driven search process where people try various things until they find a good fit
I.e. millions of people were somehow failing to understand motion for literally thousands of years, until they collected enough assertions about motion to be able to formulate the rule of conservation, test it, and confirm it fits. And only then did it become understanding.
You can literally see conservation of momentum on a billiard table: you "violently" hit one ball, it hits other balls and they start to move, but slower, etc. So you really transfer something from one ball to the rest. And yet people could not see it for thousands of years.
What this shows is that there's nothing fundamental about understanding: it's just a sense of familiarity, a sense that your model fits well. Under the hood it's all prediction and curve fitting.
We literally have prediction hardware in our brains: the cerebellum has specialized cells which can predict, e.g., motion. So people with a damaged cerebellum have impaired movement: they can still move, but their movements are not precise. When do you think we'll find specialized understanding cells in the human brain?
It seems to me that your evidence supports the exact opposite of your conclusion. Familiarity was only enough to find ad-hoc heuristics for specific situations. It let us discover intuitive methods to throw stones, drive carts, play ball games, etc. but never discovered the general principle behind them. A skilled archer does not automatically know that the same rules can be used to aim a mortar.
Ad-hoc heuristics are not the same thing as understanding. It took formal reasoning for humans to actually understand motion, of a type that modern AI does not use. There is something fundamental about understanding that no amount of familiarity can substitute for. Modern AI can gain enormous amounts of familiarity but still fail to understand, e.g. this Counter-Strike simulator not knowing what happens when the player walks into a wall.
People found that `m * v` is the quantity which is conserved.
There's no understanding. It's just a formula which matches the observations. It also matches our intuition (a heavier object is harder to move, etc.), and you feel this connection as understanding.
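Here's what "a formula which matches the observations" means in the simplest case, as a tiny Python check (the numbers are invented): in a head-on, perfectly inelastic collision the sum of `m * v` is the same before and after, while another plausible candidate like `m * v**2` is not.

```python
# Toy check: two balls collide and stick together (perfectly inelastic collision).
# The sum of m*v is unchanged; the sum of m*v**2 is not.
m1, v1 = 2.0, 3.0   # heavy ball moving right
m2, v2 = 1.0, -1.0  # light ball moving left

# After sticking together, both move with the common velocity v.
v = (m1 * v1 + m2 * v2) / (m1 + m2)

print(m1 * v1 + m2 * v2, (m1 + m2) * v)            # 5.0 5.0   -> m*v is conserved
print(m1 * v1**2 + m2 * v2**2, (m1 + m2) * v**2)   # 19.0 8.33 -> m*v**2 is not
```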
Centuries later people found that conservation laws are linked to symmetries. But again, it's not some fundamental truth, it's just a link between two concepts.
LLM can link two concepts too. So why do you believe that LLM cannot understand?
In middle school I did extremely well in physics classes - I could solve complex problems which my classmates couldn't, because I could visualize the physical process (e.g. the motion of an object) and link that to formulas. This means I understood it, right?
Years later I thought: "But what *is* motion, fundamentally?" I grabbed the Landau-Lifshitz mechanics textbook. How do they define motion? Apparently, bodies move in a way that minimizes some integral. They can derive the rest from it. But it doesn't explain what motion is. Some of the best physicists in the world cannot define it.
So I don't think there's anything to understanding except a feeling of connection between different things. "X is like Y except for Z".
Understanding is finding the simplest general solution. Newton's laws are understanding. Catching the ball is not. LLMs take billions of parameters to do anything and don't even generalize well. That's obviously not understanding.
You're confusing two meanings of the word "understanding":
1. Finding a comprehensive explanation
2. Having a comprehensive explanation which is usable
99.999% of people on Earth do not discover any new laws, so I don't think you can mean #1 when you call it a fundamental deficiency of LLMs.
And nobody is saying that just training a LLM produces understanding of new phenomena. That's a strawman.
The thesis is that a more powerful LLM together with more software, more models, etc., can potentially discover something new. That's not observed yet. But I'd say it would be weird if an LLM could match the capabilities of the average person but never match Newton. It's not like Newton's brain was fundamentally different.
Also worth noting that formulas can be discovered by enumeration. E.g. `m * v` should not be particularly hard to discover. And the fact that it took people centuries implies that that's what happened: people tried different formulas until they found one which works. It doesn't have to be some fancy Newton magic.
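As a toy illustration of the discovery-by-enumeration point, here's a sketch (the setup, the candidate family `m**a * v**b`, and the simulated collisions are all my own invention, not a claim about how it historically happened): brute-force over simple candidate quantities and keep the ones whose total is unchanged by collisions.

```python
# Generate a few random 1-D perfectly elastic collisions, then enumerate candidate
# quantities of the form m**a * v**b and test which ones are conserved.
import itertools
import random

def elastic_collision(m1, v1, m2, v2):
    """Standard 1-D elastic collision formulas for the outgoing velocities."""
    u1 = ((m1 - m2) * v1 + 2 * m2 * v2) / (m1 + m2)
    u2 = ((m2 - m1) * v2 + 2 * m1 * v1) / (m1 + m2)
    return u1, u2

# Random "experiments": (masses, velocities before, velocities after).
experiments = []
for _ in range(50):
    m1, m2 = random.uniform(1, 5), random.uniform(1, 5)
    v1, v2 = random.uniform(-3, 3), random.uniform(-3, 3)
    experiments.append((m1, v1, m2, v2, *elastic_collision(m1, v1, m2, v2)))

# Enumerate candidate formulas m**a * v**b and test conservation on all experiments.
for a, b in itertools.product([0, 1, 2], [1, 2]):
    conserved = all(
        abs((m1**a * v1**b + m2**a * v2**b) - (m1**a * u1**b + m2**a * u2**b)) < 1e-9
        for m1, v1, m2, v2, u1, u2 in experiments
    )
    if conserved:
        # Expect momentum m*v, plus m*v**2 (kinetic energy) since these collisions
        # are elastic.
        print(f"m**{a} * v**{b} is conserved")
```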
I'm certain that people did not spend centuries trying different formulas for the laws of motion before finding one that worked. The crucial insight was applying any formula at all. Once you have that then the rest is relatively easy. I don't see LLMs making that kind of discovery.
Number theory and algebraic geometry were developed for their own sake (i.e. "it is cool"), but later people found practical applications in cryptography.
So "useful math must be motivated by practice" is empirically false
2. You might need only several hundred examples for fine-tuning. (OpenAI's minimum is 10 examples; see the sketch after this list.)
3. I don't think research into fine-tuning efficiency has exhausted its possibilities. Fine-tuning is just not a very hot topic, given that general models work so well. In image generation, where it matters, they quickly got to the point where 1-2 examples are enough. So I won't be surprised if doc-to-model becomes a thing.
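For concreteness, here's a minimal Python sketch of what such a "doc-to-model" fine-tuning dataset could look like, assuming the chat-style JSONL format that OpenAI's fine-tuning API accepts (one JSON object with a "messages" list per line). The example content and `FooLib` are made up.

```python
# Build a tiny fine-tuning file: a few hundred Q/A pairs extracted from the docs,
# written as one chat example per line in train.jsonl.
import json

examples = [
    {"messages": [
        {"role": "system", "content": "Answer questions about the FooLib API."},
        {"role": "user", "content": "How do I open a connection?"},
        {"role": "assistant", "content": "Call foolib.connect(url) and check .ok."},
    ]},
    # ... a few hundred more pairs extracted from the docs ...
]

with open("train.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")
```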
GPT-3 paper announcement got 200 comments on HN back in 2020.
It doesn't matter when marketing started, people were already discussing it in 2019-2020.
Stochastic parrot: The term was coined by Emily M. Bender[2][3] in the 2021 artificial intelligence research paper "On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? 🦜" by Bender, Timnit Gebru, Angelina McMillan-Major, and Margaret Mitchell.[4]
This paper basically just adds a cache? Not really novel as we already have Codex, Code Interpreter, etc.