
To be fair, most of that is them catching up to where Intel already was. No one seems to actually be fabbing meaningfully smaller feature sizes (or higher layer counts) than what Intel is stagnating at.



> To be fair, most of that is them catching up to where Intel already was. No one seems to actually be fabbing meaningfully smaller feature sizes (or higher layer counts) than what Intel is stagnating at.

Eh, Intel's 14nm is 37M transistors/mm2. TSMC and Samsung are both up to 52M/mm2 at 10nm, and 92M/mm2 at 7nm. Both Apple's and AMD's latest-gen stuff is on TSMC's 7nm process _today_. Yes, Intel's 10nm is at 101M/mm2, but until they can get mass production on that, they're falling substantially behind.
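For scale, a quick back-of-the-envelope on those figures (same numbers as above, treated as rough vendor-quoted densities):

    # Quoted logic densities, in Mtransistors/mm^2.
    densities = {
        "Intel 14nm": 37,
        "TSMC/Samsung 10nm": 52,
        "TSMC/Samsung 7nm": 92,
        "Intel 10nm (not yet in volume)": 101,
    }
    base = densities["Intel 14nm"]
    for process, mtr in densities.items():
        print(f"{process}: {mtr} Mtr/mm^2 = {mtr / base:.1f}x Intel 14nm")

i.e. the 7nm parts actually shipping pack roughly 2.5x the transistors per mm^2 of the node Intel is mass-producing.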


Ah, thank you, and I'm glad I used "seems" there; the last I had heard, they were all still trying to get reliable production at 10/7nm.

(I stand by "most", but it's nice to hear that someone's making actual progress again.)


If you look at long-term trends, transistor density has kept pace (it has slowed consistently but not dramatically over the years); the big difference is that it no longer gives you as much of a performance boost as it used to.
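Back-of-the-envelope on that, using the density figures quoted upthread (the ship years, roughly 2014 for Intel 14nm and 2018 for TSMC 7nm, are my assumption):

    import math

    # Assumed (year, Mtr/mm^2) data points.
    y0, d0 = 2014, 37   # Intel 14nm
    y1, d1 = 2018, 92   # TSMC 7nm

    # Solve d1 = d0 * 2**((y1 - y0) / T) for the doubling period T.
    T = (y1 - y0) / math.log2(d1 / d0)
    print(f"implied density doubling period: {T:.1f} years")  # ~3.0

So density is still doubling, just on a ~3-year cadence rather than the classic ~2.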


The difference is that ARM has been able to deliver desktop-grade performance at power levels that are suitable for use in an iPad.

Intel and AMD might be able to deliver somewhat higher performance by throwing a whole bunch of cores at the problem, but they do so at a much higher cost in power requirements. And it would be easy enough to design ARM machines with an equal number of cores (or even way more), and still have much lower power requirements.

Intel stagnated, and has high power requirements. ARM has caught up, and has much lower power requirements.

The reasoning seems pretty simple to me.


> The difference is that ARM has been able to deliver desktop-grade performance at power levels that are suitable for use in an iPad.

ARM hasn't. Apple has. Apple's implementations of the ISA since the A6 have been unrelated to ARM's own designs.


Sure, but that's them having a competent (well, less incompetent) ISA and microarchitecture; it's that they've made better use of the transistors available, not that they've achieved better feature density than would be expected from where they are on the Moore's law curve.

Also, Intel and AMD have not delivered higher performance via more cores; they delivered a lower price for (say) 64 cores' worth of performance by putting them all on the same chunk of silicon (edit: or at least in the same package). (There are some slight improvements in inter-processor-interrupt and cache-forwarding latency, but if that's a performance bottleneck, the problem is bad parallelization at the code level.)
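To put a number on that last point, a quick Amdahl's-law sketch (the 95% parallel fraction is just an illustrative assumption):

    def amdahl_speedup(p: float, n: int) -> float:
        # Upper bound on speedup when a fraction p of the work parallelizes.
        return 1.0 / ((1.0 - p) + p / n)

    # Even with 95% of the work parallel, 64 cores buy nowhere near 64x,
    # and shaving inter-core latency barely moves that ceiling.
    for cores in (8, 64, 1024):
        print(cores, round(amdahl_speedup(0.95, cores), 1))
    # 8 -> 5.9x, 64 -> 15.4x, 1024 -> 19.6x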


> Sure, but that's them having a competent (well, less incompetent) ISA

Have you looked at the encoding of Thumb-2 (T32), and particularly of A64 (their newly designed 64-bit instruction set)? Their instruction encoding is, in my opinion, much more convoluted than x86's.
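For a taste of that, here's a Python sketch of DecodeBitMasks() from the ARM ARM pseudocode - the dance A64 logical-immediate instructions do to turn their 13-bit N:immr:imms field into a 64-bit constant:

    def decode_bitmask_immediate(n: int, immr: int, imms: int) -> int:
        # Element size comes from the position of the highest set bit
        # of the 7-bit value N:NOT(imms).
        combined = (n << 6) | (~imms & 0x3F)
        length = combined.bit_length() - 1
        if length < 1:
            raise ValueError("reserved encoding")
        esize = 1 << length
        levels = esize - 1
        s, r = imms & levels, immr & levels
        if s == levels:
            raise ValueError("all-ones element is not encodable")
        # Build a run of s+1 ones, rotate it right by r within the element,
        welem = (1 << (s + 1)) - 1
        elem_mask = (1 << esize) - 1
        rotated = ((welem >> r) | (welem << (esize - r))) & elem_mask
        # then replicate that element across all 64 bits.
        mask = 0
        for i in range(64 // esize):
            mask |= rotated << (i * esize)
        return mask

    print(hex(decode_bitmask_immediate(0, 0, 0b111100)))  # 0x5555555555555555

Elegantly dense for constants like alternating bit patterns, but a long way from "load this immediate".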


> they delivered a lower price for (say) 64 cores' worth of performance by putting them all on the same chunk of silicon.

Arguably AMD did the exact opposite - lower prices via splitting a processor into multiple pieces of silicon. (Chip prices scale exponentially with area at the high end.)
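Toy version of that yield math (the defect density and wafer cost are made-up illustrative numbers; real yield models like Murphy's are fancier):

    import math

    WAFER_AREA = 70000   # mm^2, ~300mm wafer, ignoring edge loss
    WAFER_COST = 10000   # dollars, illustrative
    DEFECTS = 0.001      # defects per mm^2, illustrative

    def cost_per_good_mm2(die_area: float) -> float:
        # Simple Poisson yield model: yield = exp(-defects * area).
        dies = WAFER_AREA / die_area
        good_dies = dies * math.exp(-DEFECTS * die_area)
        return WAFER_COST / (good_dies * die_area)

    # One 800 mm^2 die vs eight 100 mm^2 chiplets of the same total area:
    print(cost_per_good_mm2(800) / cost_per_good_mm2(100))  # ~2.0x

So the same total silicon costs roughly half as much when cut into chiplets.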


Well, my point there was that a 64-core CPU is not (significantly) higher performance than 64 single-core CPUs, so multi-core is - if an improvement at all - a price improvement, not a performance improvement. But fair point about the price-vs-area scaling.


> Sure, but that's them having a competent (well, less incompetent) ISA and microarchitecture

Hasn't x86 been basically RISC-like internally since the Pentium Pro/AMD K6? Is the translation layer that big of a hindrance?


ISA doesn't matter nearly as much as people think, unless it's something really different (like VLIW/EPIC/...).



