> I see no prospect of it happening in the next 10 years
It's happening right now: the field is being flooded by data-kids who reason in different terms. They are statisticians first and programmers later. Their models can do stuff that bit-pushers like me will always struggle with. 10-15 years from now they'll run everything, and you won't be able to code a helloworld without specifying an ML model.
I'm dabbling in that world right now, and... no. They're doing a different thing. You won't have ML models writing UI code (and I mean actual UI code to draw widgets, not just assembling UIs) or handling any number of other tasks. They're doing new things. And we have plenty of time to learn, just like I haven't been left behind by any number of other things. It won't be like the web.
(In fact I've been generally unimpressed by the people I've seen so far. The ML math is complicated, but there's a whole lot of "just fire this at that without understanding why" among the actual practitioners. Rather often I've found I have a better understanding than they seem to of what is and is not possible, and why. That comes back to my extended education background, though; my training is old but less obsolete than you may think. A lot of modern ML isn't some totally new thing just invented yesterday, but a couple of slight tweaks to established stuff that worked really well, combined with finally having the CPU firepower to use it.)
That has to be said again. Basically all of deep learning, which is what people mainly mean today when they say "ML", is at least 20 years old. Other data science techniques? Even older. The big spurt seen recently is mainly the result of a) faster hardware and b) larger datasets [1]. (The sketch after the footnotes shows how little of the core machinery has changed.) At some point the "low-hanging fruit" achievable by scaling up and crunching down MOAR DATA will all be taken, and then someone will need to do all the hard work of figuring out new techniques, or at least building bigger computers. Who's going to do that? All the people who are "passionate about machine learning" and are experts in TensorFlow and PyTorch? I wouldn't bet my money on that. Thirty years from now the people leading the field will be people with thirty years of experience in thinking real hard about real hard problems. As they usually are [2].
__________________
[1] My standard reference for this is a guy called Geoff Hinton. Some people in machine learning may have heard the name. Quoting:
AMT: Ok, so you have been working on neural networks for decades but it has only exploded in its application potential in the last few years, why is that?
Geoffrey Hinton: I think it’s mainly because of the amount of computation and the amount of data now around but it’s also partly because there have been some technical improvements in the algorithms. Particularly in the algorithms for doing unsupervised learning where you’re not told what the right answer is but the main thing is the computation and the amount of data.
[2] Check out the birth dates of the current generation of deep learning luminaries. Geoff Hinton: 1947. Yoshua Bengio: 1964. Yann LeCun: 1960. Jurgen Schmidhuber: 1963. It goes on. Go ahead and tell those guys they're greybeards who don't get what the kids today are doing.
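To make the point concrete, here's a minimal sketch of the core of "modern" deep learning in plain NumPy: a two-layer network trained with backpropagation, the algorithm popularized in 1986. The synthetic data, layer sizes, and learning rate are all made up for illustration; what today's frameworks add on top is mostly autodiff convenience, a few tweaks (ReLU, Adam, and friends), and the hardware to scale it.

    import numpy as np

    rng = np.random.default_rng(1)
    X = rng.normal(size=(200, 2))
    y = (X[:, 0] * X[:, 1] > 0).astype(float).reshape(-1, 1)  # XOR-of-signs target

    W1 = rng.normal(0, 0.5, (2, 8)); b1 = np.zeros(8)  # hidden layer, 8 units
    W2 = rng.normal(0, 0.5, (8, 1)); b2 = np.zeros(1)  # output layer

    for step in range(2000):
        h = np.tanh(X @ W1 + b1)              # forward pass
        p = 1 / (1 + np.exp(-(h @ W2 + b2)))  # sigmoid output probability
        dp = (p - y) / len(X)                 # cross-entropy gradient at the logit
        dW2 = h.T @ dp; db2 = dp.sum(0)       # backprop: chain rule by hand
        dh = dp @ W2.T * (1 - h ** 2)         # tanh derivative
        dW1 = X.T @ dh; db1 = dh.sum(0)
        W1 -= 1.0 * dW1; b1 -= 1.0 * db1      # plain gradient descent
        W2 -= 1.0 * dW2; b2 -= 1.0 * db2

    print("train accuracy:", ((p > 0.5) == y).mean())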
Anyone who has gone through the extreme pixel-level iteration process of honing a UI at a company that really cares about UX knows we are very, very far off from letting AI do it.
> you won't be able to code a helloworld without specifying an ML model.
I think the current ML world will have an overfitting reckoning. ML can optimize situations that are very normative, but it struggles to produce good solutions when the best outcomes are fairly distributed (yet still call for specific best actions in certain scenarios).
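For anyone who wants the failure mode in miniature, here's a sketch with made-up data: fit polynomials of increasing degree to ten noisy samples of a sine wave. The degree-9 fit passes through every training point exactly and falls apart between them, which is overfitting in a nutshell. The noise level and degrees are arbitrary choices for the demo.

    import numpy as np

    rng = np.random.default_rng(0)
    x = np.linspace(0, 1, 10)
    y = np.sin(2 * np.pi * x) + rng.normal(0, 0.2, size=x.shape)  # 10 noisy samples

    x_test = np.linspace(0, 1, 200)
    y_test = np.sin(2 * np.pi * x_test)  # the true signal

    for degree in (1, 3, 9):
        coeffs = np.polyfit(x, y, degree)  # least-squares polynomial fit
        train_mse = np.mean((np.polyval(coeffs, x) - y) ** 2)
        test_mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
        print(f"degree {degree}: train MSE {train_mse:.4f}, test MSE {test_mse:.4f}")

Degree 1 underfits, degree 3 generalizes, degree 9 memorizes: train error goes to zero while test error blows up.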
ML cannot tell you what data or choices even matter in life; it can only help optimize those things once we've decided that. This is where age comes in, since there's a correlation between age and wisdom, and perhaps a causal one.
> I think the current ML world will have an overfitting reckoning.
It seems like current ML (at least large language models and transformers) will actually run out of publicly accessible data to train on. The current models have already been trained on the majority of the useful text and image data on the public Internet, and it's not clear that we'll gain access to orders of magnitude more, which according to the Chinchilla paper is the bottleneck on transformer performance: https://news.ycombinator.com/item?id=32321522
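The back-of-envelope math is easy to sanity-check. A commonly cited rule of thumb from the Chinchilla paper is roughly 20 training tokens per model parameter for compute-optimal training (an approximation, not an exact constant; the parameter counts below are just illustrative):

    def chinchilla_tokens(n_params: float, tokens_per_param: float = 20.0) -> float:
        # Rough Chinchilla rule of thumb: ~20 tokens per parameter.
        return n_params * tokens_per_param

    for n_params in (7e9, 70e9, 500e9):
        tokens = chinchilla_tokens(n_params)
        print(f"{n_params / 1e9:>4.0f}B params -> ~{tokens / 1e12:.1f}T tokens")

So a 500B-parameter model already wants ~10T tokens, and going another order of magnitude up means finding data on a scale the public web may simply not have.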
In my experience, many of those "statisticians first and programmers later" "data-kids" aren't great at programming. Yes, they can build models, but when they need to build code around those models, the result is often pretty awful; I've seen it firsthand working with some of them. They still need us oldsters to help them out of the messes they make.
That is not a good thing. Nor do I think it’s completely true anyway. I have the advantage of a strong analytical background (PhD physics) and over a decade of experience in software development. The “data kids” struggle with the parts that actually make their jobs possible (engineering), and that part is not undergoing any kind of paradigm shift at this time.