There's another, completely different theory. I'm not saying it's strictly true, but it's worth thinking about.
Moore's law ended around 2016 or so. As progress slows, CPU technology is commoditizing: successive Intel CPU models aren't much faster than the previous ones anymore. So all the manufacturers will end up in the same place and compete only on price.
We will end up running everything on very efficient cores, massively parallelized. The software frameworks for that will be very different from current ones.
Software changes very slowly: the multiprocessing shift in hardware is a 20-year-old phenomenon, and yet outside of gaming only niche apps are parallelized. Maybe in another 20 or 40 years.
I think it'll take a lot more progress in programming language technology than Rust or the current fragmented, janky set of mostly proprietary GPU languages.
The tricky thing about parallelizing workflow apps is that most workflow happens in a pipeline, which fits single cores really well.
It's tricky to find problems that fit into grids.
Pictures are grids, and for computationally intensive tasks that are isolated, GPUs work great. But you still have to find a way to make your sums fit a grid if they don't.
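To make the pipeline-versus-grid distinction concrete, here's a minimal Rust sketch (the parse/validate/summarize stages and the toy image are invented for illustration): the pipeline stages each depend on the previous stage's output, while the per-pixel work splits across cores with no coordination at all.

```rust
use std::thread;

fn main() {
    // Pipeline-shaped work: each stage needs the previous stage's whole
    // output, so for one piece of data the stages run strictly in sequence.
    let parsed = parse("42,7,13");
    let validated = validate(parsed);
    let total = summarize(validated);
    println!("pipeline result: {total}");

    // Grid-shaped work: every pixel is independent, so chunks of the image
    // can be handed to separate cores with no communication in between.
    let mut image = vec![10u8; 1_000_000]; // stand-in for a grayscale image
    thread::scope(|s| {
        for chunk in image.chunks_mut(250_000) {
            s.spawn(move || {
                for px in chunk {
                    *px = px.saturating_mul(2); // brighten each pixel in isolation
                }
            });
        }
    });
    println!("first pixel after brightening: {}", image[0]);
}

fn parse(input: &str) -> Vec<i64> {
    input.split(',').filter_map(|s| s.parse().ok()).collect()
}

fn validate(values: Vec<i64>) -> Vec<i64> {
    values.into_iter().filter(|v| *v >= 0).collect()
}

fn summarize(values: Vec<i64>) -> i64 {
    values.iter().sum()
}
```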
Last time I was writing OpenCL, threads could only really talk to each other at the entry and exit points, which made fitting some problems onto the parallel architecture very hard.
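That constraint is easy to mimic on the CPU. Below is a sketch in plain Rust threads (not OpenCL) of the same shape: each worker gets its slice when it launches and only hands back a partial result when it finishes, so anything that needs mid-computation chatter has to be broken into multiple passes like this.

```rust
use std::thread;

// "Communicate only at entry and exit": workers receive their input up
// front, compute in isolation, and their partial sums are combined only
// after every worker has finished.
fn parallel_sum(data: &[u64], workers: usize) -> u64 {
    let chunk_len = ((data.len() + workers - 1) / workers).max(1);
    let partials: Vec<u64> = thread::scope(|s| {
        let handles: Vec<_> = data
            .chunks(chunk_len)
            .map(|chunk| s.spawn(move || chunk.iter().sum::<u64>()))
            .collect();
        handles.into_iter().map(|h| h.join().unwrap()).collect()
    });
    // The "exit point": results are only merged once all workers are done.
    partials.into_iter().sum()
}

fn main() {
    let data: Vec<u64> = (1..=1_000).collect();
    println!("sum = {}", parallel_sum(&data, 4)); // prints 500500
}
```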
I think most of the trouble today comes from the low-level, imperative, error-prone way we express computation; it's just prohibitively hard to ensure correctness with today's languages.
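As one small data point for the language angle, here's a Rust sketch: the naive low-level version, where every thread bumps a plain shared integer, is a data race and the compiler rejects it outright, so the sharing has to be made explicit (an atomic here, or a channel).

```rust
use std::sync::atomic::{AtomicU64, Ordering};
use std::thread;

fn main() {
    // The "obvious" imperative version with a plain shared counter is a
    // data race and won't compile in Rust; an atomic makes the sharing
    // explicit and keeps the count correct.
    let counter = AtomicU64::new(0);
    thread::scope(|s| {
        for _ in 0..4 {
            s.spawn(|| {
                for _ in 0..1_000 {
                    counter.fetch_add(1, Ordering::Relaxed);
                }
            });
        }
    });
    // Every increment is accounted for: this always prints 4000.
    println!("count = {}", counter.load(Ordering::Relaxed));
}
```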
If we continue with the AAA games example, we can see that on consoles programmers do manage to extract parallelism even from CPU-side work. It just requires a lot of effort, debugging grit, tools, talent, and budget, along with a specific application with limited parameters.
We just need to make this kind of work easier, and PLT seems to me to be the solution.
Here's a polemic version of the Moore's-law argument from 2014, about how the cost of transistors stopped decreasing: https://www.youtube.com/watch?v=IBrEx-FINEI