By now these researchers could show me a deep learning model that accurately predicts the future, and I'd shrug my shoulders and say "so what?".
As a mortal, there's not much to learn from these insanely big models anymore, which makes me kinda sad. Training them is prohibitively expensive, the data and code are often inaccessible, and I highly suspect that the learning rate schedules needed to get these to converge are black magic-ish...
There is public code and data available to train similar models (text generation, image generation, whatever you like). Training details are also often available. The learning rate schedule is actually nothing special.
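For what it's worth, the published recipes are usually just linear warmup followed by cosine decay. A minimal sketch of that shape (the hyperparameter values here are illustrative, not taken from any particular model's training setup):

```python
import math

def lr_schedule(step, max_lr=6e-4, min_lr=6e-5,
                warmup_steps=2_000, total_steps=300_000):
    """Linear warmup then cosine decay -- the common large-transformer
    recipe. All constants are illustrative placeholders."""
    if step < warmup_steps:
        # Ramp linearly from ~0 up to max_lr over the warmup phase.
        return max_lr * (step + 1) / warmup_steps
    # Cosine-decay from max_lr down to min_lr over the remaining steps.
    progress = (step - warmup_steps) / (total_steps - warmup_steps)
    return min_lr + 0.5 * (max_lr - min_lr) * (1 + math.cos(math.pi * progress))
```

No black magic, just two well-known pieces glued together.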
However, you are fully right that the computation costs are very high.
One thing we can learn is: it really works. It scales up and gets better without anyone doing anything special, which was unexpected to most people and is really interesting. Most people expected there to be some limit where performance would level out, but so far that does not seem to be the case. It rather looks like you could keep scaling up as much as you want and get better and better performance without any limit.
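To make "no leveling out" concrete: the scaling-law papers fit test loss as a power law in compute, and a power law never plateaus on a log-log plot. A toy illustration in the spirit of that work (the constants here are made up, not fitted to real data):

```python
import numpy as np

# Hypothetical scaling curve: loss falls as L(C) = a * C**(-b).
# Coefficients are illustrative placeholders, not published fits.
a, b = 2.57, 0.05

compute = np.logspace(0, 8, 9)   # compute budget, arbitrary units
loss = a * compute ** (-b)

# Each 10x in compute buys a constant *fraction* of loss reduction,
# so the curve keeps improving instead of flattening out.
for c, l in zip(compute, loss):
    print(f"compute={c:>12,.0f}  loss={l:.3f}")
```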
So, what to do now with this knowledge?
Maybe we should focus research on reducing the computation costs, e.g. with better hardware (maybe neuromorphic) or more computationally efficient models.