Gato, a Decision Transformer on steroids, is pretty much what you would expect, with the expected RL scaling curves†, if you've been following ML scaling research for the past 2 years. It is, however, still mind-blowing to see it in reality.
And note that it's only as small (and thus, weak) as it is because they want to run it directly on robots ("We focus our training at the operating point of model scale that allows real-time control of real-world robots, currently around 1.2B parameters").
Hi Gwern, I'm the submitter of the other thread. It was quite coincidental to wake up to this announcement this morning, because the last thing I read before bed was your "Clippy" story: https://www.gwern.net/fiction/Clippy
They use a 51-game subset of ALE instead of the full 57, so I assume not. (Because Montezuma's Revenge is pretty much purely about exploration, and given demonstrations of a successful agent wouldn't be hard, there's not much benefit to training on it here. Gato would probably get a good score, but no one would care. The hard exploration games in ALE are often left out for that reason.)
>and given demonstrations of a successful agent wouldn't be hard
Last I checked, the only team that has shown good performance on that game is Uber, and from what I recall they used a controversial hack that would be unlikely to generalize to other environments.
Yes, the hack they used was for the exploration part: providing a state summary to explicitly decide if a state was new or not, and, in the initial Go-Explore, essentially letting the agent teleport to arbitrary states to begin exploring from there.
However, once the exploring was done, they could train an agent on the trajectories of the exploring agent to solve MR with no problem. That's why I say that MR is an exploration problem, and that training on demonstrations from a player that has already solved MR would obviously work - because it does. So it doesn't show anything interesting about Gato, because Gato would be solving the part of MR that everyone agrees is basically trivial.
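For context on why the imitation part is the easy bit, here's a very rough Python sketch of the Go-Explore exploration phase. The env.get_state()/restore_state() interface, the cell() summary, and the random cell selection are my own placeholders, not Uber's actual code (the real algorithm uses downscaled frames as cells and weighted cell selection):

    import random

    def cell(obs):
        # Placeholder state summary; real Go-Explore downsamples the frame
        # so that visually similar states map to the same cell.
        return obs.tobytes()

    def explore(env, iterations=10_000, rollout_len=100):
        # archive maps cell -> (saved emulator state, action trajectory, score)
        archive = {}
        obs = env.reset()
        archive[cell(obs)] = (env.get_state(), [], 0.0)

        for _ in range(iterations):
            # "Teleport": restore a previously reached state from the archive
            # instead of re-reaching it through a policy.
            c = random.choice(list(archive))
            state, traj, score = archive[c]
            env.restore_state(state)

            for _ in range(rollout_len):
                action = env.action_space.sample()
                obs, reward, done, _ = env.step(action)
                traj, score = traj + [action], score + reward
                new_c = cell(obs)
                # Keep a cell if it's new, or if it was reached with a better score.
                if new_c not in archive or score > archive[new_c][2]:
                    archive[new_c] = (env.get_state(), traj, score)
                if done:
                    break

        # The highest-scoring trajectory becomes the demonstration that the
        # later imitation ("robustification") phase trains on.
        return max(archive.values(), key=lambda e: e[2])[1]

Once you have that demonstration trajectory, the remaining imitation step is the straightforward part, which is the point above.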
This work has two really interesting contributions, in my opinion.
1. Creating a few data points (three) for scaling laws (Figure 8). These behave similarly to language models, as gwern puts it [1], but with only three data points it's tough to draw a power-law conclusion (eyeballing the figure, they increase params 4.5x and then 3.2x and see roughly a 20% relative performance improvement from each jump; see the sketch after this list).
2. What I find more interesting than the scaling is the out-of-distribution (OOD) generalization results (Figure 9). They test the performance of the agent on a completely unseen task (though possibly from within the same domain, i.e., they might train on a fixed physics engine from the DeepMind Control Suite [2] but never let the agent look at the cartpole task). They compare this to various ablations: from-scratch training with the same architecture, pretraining only on same-domain data, and pretraining only on non-control data (presumably unsupervised, contrastive-learning-style data).
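On point (1): a quick sketch of what that eyeball fit amounts to, with the model sizes back-derived from the 4.5x/3.2x jumps and the ~1.2B figure quoted above, and with made-up placeholder scores (not the paper's numbers):

    import numpy as np

    # Model sizes back-derived from the ratios above: ~1.2B, /3.2, /(3.2*4.5).
    params = np.array([83e6, 375e6, 1.2e9])
    scores = np.array([0.50, 0.60, 0.72])   # hypothetical normalized scores, NOT from the paper

    # A power law score ~ a * params^b is a straight line in log-log space.
    b, log_a = np.polyfit(np.log(params), np.log(scores), 1)
    print(f"fitted exponent b ~ {b:.3f}, prefactor a ~ {np.exp(log_a):.3g}")

    # With only three points, a two-parameter fit leaves one residual degree of
    # freedom, so it can't really distinguish a power law from other monotone curves.

That last caveat is why I'd hesitate to call Figure 8 a scaling law rather than a scaling trend.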
The results from (1) are impressive; the results from (2) are mixed (but no less interesting as a contribution!) in terms of whether the additional training data actually helps with generalization. The reason OOD generalization is the most interesting is that it really tests whether control-based pretraining helps the agent in a truly new situation. And certainly, there are a couple of tasks on which zero-shot performance improves over the ablations (but there are others where it hurts).
What I'd find exciting to see in future research is further investigation into variants of Figure 9.
- How does scaling affect the impact of control-data pretraining vs non-control data pretraining?
- The authors used a custom fine-tuning schedule for the few-shot evaluation on unseen tasks. It's possible the schedule needs to be changed for the ablated versions of the agents to give them their best performance, too. What would Figure 9 look like with the "best" training setup for each ablation individually? I.e., can we tease apart how much, if at all, it's a matter of low-level modality-specific features helping adaptation vs. some kind of truly generalized "control pretraining"?
† https://storage.googleapis.com/deepmind-media/A%20Generalist... looks just like any scaling curve from a text or vision paper...
Also submitted at https://news.ycombinator.com/item?id=31355657