>Today there is talk of a big clock rate bump (to 200 GHz or so) if they go to a different semiconductor, but at that point you probably need a fiber optic or terahertz-wave link to memory to keep the pipeline full.
You talk as if there couldn't possibly be a benefit to an increase in speed without a corresponding increase in memory bandwidth. Whilst it wouldn't be an optimally efficient system, if we /could/ bump to 9GHz (or 200GHz), wouldn't it be worth doing so for at least some kinds of calculations, even if the memory can't keep up?
edit: Both responses were super-interesting. Don't wanna reply to both, but thanks all :)
There's a word for this: computational intensity, i.e. the ratio of useful compute operations per memory load in an app.
Are there apps that have high computational intensity? Sure, matrix multiply is one of them. That's one of the reasons why dense linear algebra serves as the standard benchmark for ranking the top 500 supercomputers in the world.
But even in HPC (high performance computing), many if not most apps actually have relatively low computational intensity (i.e. in the range of one or so compute operations per word of memory loaded). In this regime, it really doesn't make sense to grow compute out of proportion with memory bandwidth because you'll just be idling the processors.
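To make the ratio concrete, here's a back-of-envelope sketch (my own illustration, not from the thread) comparing the computational intensity of a naive dense matrix multiply against a simple vector update, counting FLOPs per 8-byte word moved to or from memory:

```python
# Rough computational (arithmetic) intensity in FLOPs per word of memory
# traffic, ignoring caches, for two common kernels.

def matmul_intensity(n):
    flops = 2 * n**3   # n^2 dot products of length n, each a multiply + add
    words = 3 * n**2   # read A and B, write C
    return flops / words

def daxpy_intensity(n):
    flops = 2 * n      # one multiply and one add per element of y = a*x + y
    words = 3 * n      # read x and y, write y
    return flops / words

print(matmul_intensity(4096))  # thousands of FLOPs per word: compute-bound
print(daxpy_intensity(4096))   # under one FLOP per word: memory-bound
```

Matmul's intensity grows with problem size (2n/3), so a faster clock keeps paying off; the vector update is stuck below one operation per word, so past a point extra GHz just means more idle cycles.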
And while I have no proof, I'd expect HPC applications to generally be more computationally intensive than general consumer computing tasks. So I'd expect that computational intensity mostly goes down from here.
Maybe you could do cryptography more quickly, but for general-purpose computing, and even most specialized tasks, memory latency and bandwidth are critical. For instance, look at the use of GDDR5 and HBM with GPUs.
Most of the market is for things that are generalizable; maybe you could make some kind of hyper-DSP for millimeter wave base stations or something like that, but then you'd have to spread the development cost across a small number of units.