Do you have any insights into why x86 chips consume so much more power than their ARM equivalents?
Btw, I tend to agree with you, and think Intel's engineers are just being lazy wrt power. I expect this Nvidia chip to get crushed technically when Intel's engineers gear up and really work on power (like they did when Transmeta's Crusoe came out).
There's probably quite a bit of truth to that, but in recent years Intel has had two dramatic under-performers (Atom can't come close to ARM, and the on-chip graphics in their mid-range processors have yet to fill the gap left when Intel forced NVidia to stop making integrated graphics chipsets) and one complete failure (Larrabee) - stumbles that seem like they shouldn't have happened given Intel's dominance.
Why is Intel having trouble broadening their horizons? Is it an institutional thing, that the teams not working on the main (and most profitable) CPU product lines just can't get the resources needed to catch up with the competition? Intel always has the latest and greatest fabs, but it seems like the designs they're producing for these new product lines are completely squandering that advantage and then some.
There really aren't equivalent x86 and ARM processors yet; IIRC even the slowest Atom is faster than the fastest Cortex A8 (I'm still waiting on A9 benchmarks).
I don't think Intel is being lazy, but Atom is certainly an immature design, and the mainstream Intel cores target a much higher level of performance. Brainiac cores are fundamentally inefficient because power efficiency decreases as performance increases; in other words, each marginal increase in performance costs more in power than the previous one.
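As a back-of-the-envelope sketch of that marginal-cost argument, here's a toy calculation using the textbook dynamic-power formula P = C * V^2 * f plus the simplifying assumption that supply voltage has to rise roughly linearly with clock speed; all the constants are made up purely for illustration:

    /* Toy illustration: with P = C * V^2 * f and the assumed V ~ f
     * relationship, power grows roughly with f^3, so each 0.5 GHz step
     * costs more additional watts than the one before it.  Both the
     * capacitance constant and the volts-per-GHz slope are arbitrary. */
    #include <stdio.h>

    int main(void) {
        const double cap = 1.0;           /* effective switched capacitance (arbitrary units) */
        const double volts_per_ghz = 0.5; /* assumed V/f slope, illustrative only */
        double prev_power = 0.0;

        for (double f = 1.0; f <= 3.0; f += 0.5) {   /* clock in GHz */
            double v = volts_per_ghz * f;            /* assumed supply voltage */
            double power = cap * v * v * f;          /* P = C * V^2 * f */
            printf("f = %.1f GHz  P = %5.2f  marginal cost = %5.2f\n",
                   f, power, power - prev_power);
            prev_power = power;
        }
        return 0;
    }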
I suspect that the inability to bring down power isn't just "laziness" on Intel's part. Decreasing power consumption is (more or less) equivalent to increasing performance per watt - an area where Nvidia and ARM designs may simply be better.
Many ARM chips support both a RISC ISA (the original ARM) and a more CISC-y ISA (Thumb2). They consume less power and perform better with the latter - so well, in fact, that some chips don't even bother with the legacy RISC mode.
Thumb and Thumb2 really aren't any more 'CISC'-y than ARM; about the most CISC-like aspect is that Thumb2 supports two different instruction lengths (2 and 4 bytes), whereas ARM supports only 4-byte instructions.
That said, I've heard/read that Thumb2 tends to hit the optimal code-size/speed trade-off, but that's not because it's somehow more 'CISC'.
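If you want to see the code-size effect for yourself, one rough way is to compile the same C function once for the classic 4-byte ARM encoding and once for Thumb2 and compare the object sizes. The commands in the comment assume an arm-none-eabi-gcc cross toolchain is installed, and the -mcpu value is just an example:

    /* Build the same function for both encodings and compare .text sizes:
     *
     *   arm-none-eabi-gcc -O2 -mcpu=cortex-a8 -marm   -c sum.c -o sum_arm.o
     *   arm-none-eabi-gcc -O2 -mcpu=cortex-a8 -mthumb -c sum.c -o sum_thumb2.o
     *   arm-none-eabi-size sum_arm.o sum_thumb2.o
     *
     * The Thumb2 object will typically be noticeably smaller, since the
     * compiler can use the 2-byte forms where they fit. */
    int sum(const int *a, int n) {
        int total = 0;
        for (int i = 0; i < n; i++)
            total += a[i];
        return total;
    }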
If this were another Transmeta, I'd agree, but the situation is quite different: Intel is very early in its learning curve on graphics, while nVidia is quite far along in graphics and high-performance computing. They actually have a decent shot at competing with Intel in the low end of the desktop market by making up with the GPU what they end up lacking in the CPU.