Names with nm are just so-called commercial names. They don't match from company to company.
Transistor density, in millions of transistors per square millimeter, is more relevant. For example, Intel 10nm is 101 MTr/mm² and TSMC 7nm Mobile is 97 MTr/mm², so they are very similar.
When comparing across generations, you'll get the most accurate picture if you stick with the same kind of chip (e.g. desktop-class GPUs) and the same vendor, so that they're more likely to count transistors the same way from one year to the next.
Because "nm" doesn't mean nanometer anymore. Not in the context of CPUs, anyway. Some time back, around the 34nm era, CPU components stopped getting materially smaller.
Transistor count plateaued. Moore's law died.
To avoid upsetting and confusing consumers with this new reality, chip makers agreed to stop delineating their chips by the size of their components, and to instead group them into generations by the time that they were made.
Helpfully, in another move to avoid confusion, the chip makers devised a new naming convention, where each new generation uses "nm" naming as if Moore's law continued.
Say, for example, in 2004 you had chips with 34nm NAND, and your next-gen chips in 2006 are 32nm. Then all you do is calculate what the smallest nm would have been if chip density had doubled, and you use that size for marketing this generation. So you advertise 17nm instead of 32nm.
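The doubling arithmetic described above can be sketched in a few lines (tongue-in-cheek, taking the comment's 34nm-in-2004 starting point at face value):

```python
# Tongue-in-cheek sketch of the naming scheme described above:
# start from the comment's 34nm/2004 example and halve the advertised
# "nm" every two-year generation, regardless of physical feature size.

def marketing_nm(year, base_nm=34, base_year=2004, years_per_gen=2):
    """Return the advertised 'nm' if the label halves each generation."""
    generations = (year - base_year) // years_per_gen
    return base_nm / (2 ** generations)

print(marketing_nm(2006))  # 17.0 -- advertise 17nm for a 32nm part
```

Decoupled from physics, the label shrinks forever, which is exactly the joke.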
Using this new naming scheme also makes it super easy to get to 1.4nm and beyond. In fact, because it's decoupled from anything physical, you can even get to sub-Planck scale, which would be impossible on the old scheme.
Edit: Some comments mention that transistor count and performance are still increasing. While that is technically true, I did the sums: the Intel P4 3.4 GHz came out in 2004, and if Moore's law had continued, we would have 3482 GHz, or 3.48 TERAHERTZ, by now.
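For what it's worth, those "sums" do check out under a doubling-every-two-years assumption (taking "by now" to mean roughly 2024, which is an assumption, and noting the thread's later point that Moore's law was never about clock speed):

```python
# Reproducing the comment's back-of-the-envelope arithmetic: if clock
# speed had doubled every two years since the 3.4 GHz Pentium 4 (2004),
# where would we be? ("now" = 2024 is an assumption.)
base_ghz, base_year, now = 3.4, 2004, 2024
doublings = (now - base_year) / 2
extrapolated_ghz = base_ghz * 2 ** doublings
print(f"{extrapolated_ghz:.0f} GHz")  # 3482 GHz, i.e. ~3.48 THz
```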
> Some time back, around the 34nm era, CPU components stopped getting materially smaller.
> Transistor count plateaued.
No. Transistor count has continued to increase. The "nm" numbers still correlate with overall transistor density. The change is that transistor density is no longer a function purely of the narrowest line width that the lithography can produce. Transistors have been changing shape and aren't just optical shrinks of the previous node.
The fact that frequencies stopped going up was the breakdown of Dennard Scaling[1], not a breakdown of Moore's Law[2]. We're still finding ways to pack transistors closer together and make them use less power even if frequency scaling has stalled.
You appear to be confounding a few different issues here.
1) Transistor density has continued to increase. The original naming convention was created when we only used planar transistors. That is no longer the case: more modern processes build three-dimensional structures that condense the footprint of packs of transistors. Moore's law didn't die. It just slowed.
2) Clock speed is not correlated with transistor size. The fundamentals of physics block increases in clock speed. Light travels only ~30cm in one billionth of a second (one cycle at 1GHz), and a signal in a conductor moves at only 50%-99% of the speed of light depending on the material. What's the point of having a 1THz clock when you will just waste most of those clock cycles propagating signals across the chip or waiting on data moving to/from memory? Increasing clock speed also increases the cost of use because it requires more power, so at some point a trade-off decision must be made.
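A rough sketch of that propagation arithmetic, assuming signals in a conductor travel at 50-99% of the vacuum speed of light:

```python
# How far can a signal travel in one clock cycle? At 1 THz, even an
# ideal vacuum-speed signal covers well under a millimeter per cycle.
C = 299_792_458  # speed of light, m/s

def cm_per_cycle(freq_hz, fraction_of_c=1.0):
    """Distance (cm) a signal covers in one clock period."""
    return fraction_of_c * C / freq_hz * 100

print(f"{cm_per_cycle(1e9):.1f} cm per cycle at 1 GHz (vacuum)")
print(f"{cm_per_cycle(1e9, 0.5):.1f} cm per cycle at 1 GHz (50% of c)")
print(f"{cm_per_cycle(1e12, 0.5) * 10:.2f} mm per cycle at 1 THz (50% of c)")
```

At 1 THz and half the speed of light, a signal crosses only ~0.15 mm per cycle, far less than a typical die dimension.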
You're incorrect: transistor count has not plateaued. [1] Furthermore, Moore's law is about transistor count, NOT clock speeds. The fact that we are not at X.XX THz has nothing to do with Moore's law.
In recent years the node names don't correspond to any physical dimensions of the transistors anymore. But since density improvements are still being made, they just continued the naming scheme.
Because the naming is based on the characteristics as measured against a “hypothetical” single-layer planar CMOS process at that feature size. This isn’t new; the nm scale stopped corresponding to physical feature size a long time ago.
"Recent technology nodes such as 22 nm, 16 nm, 14 nm, and 10 nm refer purely to a specific generation of chips made in a particular technology. It does not correspond to any gate length or half pitch. Nevertheless, the name convention has stuck and it's what the leading foundries call their nodes"
..."At the 45 nm process, Intel reached a gate length of 25 nm on a traditional planar transistor. At that node the gate length scaling effectively stalled; any further scaling to the gate length would produce less desirable results. Following the 32 nm process node, while other aspects of the transistor shrunk, the gate length was actually increased"
That's some pretty bullshit quote-mining there. You stopped right before the important part:
"With the introduction of FinFET by Intel in their 22 nm process, the transistor density continued to increase all while the gate length remained more or less a constant."
I'll repeat it for you since you seem to keep missing it: transistor density continued to increase
This isn't marketing fraud because you aren't being sold transistors like you buy lumber at Home Depot.
Instead, you buy working chips with certain properties whose process has a name like "10 nm" or "7 nm". Intel et al. have rationalizations for why certain process nodes are named in certain ways; that's enough.
>However, even the dimensions for finished lumber of a given nominal size have changed over time. In 1910, a typical finished 1-inch (25 mm) board was 13⁄16 in (21 mm). In 1928, that was reduced by 4%, and yet again by 4% in 1956. In 1961, at a meeting in Scottsdale, Arizona, the Committee on Grade Simplification and Standardization agreed to what is now the current U.S. standard: in part, the dressed size of a 1-inch (nominal) board was fixed at 3⁄4 inch; while the dressed size of 2 inch (nominal) lumber was reduced from 1 5⁄8 inch to the current 1 1⁄2 inch.[11]
Despite the change from unfinished rough cut to more dimensionally stable, dried and finished lumber, the sizes are at least standardized by NIST. Still a funny observation!
So the theory is that Intel and others do this for marketing purposes. In other words, they predict that they will sell more parts if they name them this way instead of quoting the physical dimensions. There is no other reason to do this than for marketing purposes.
That must mean that this marketing works to some degree, and therefore that it cannot be common knowledge amongst everyone who buys PC parts. Or it might be somewhat known but still affect their shopping choices. If it were truly common knowledge, there would be no incentive to keep naming them this way?
>I did the sums, the Intel P4 3.4 GHz came out in 2004, if Moore's law continued, we would have 3482 GHz or 3.48 TERAHERTZ by now.
Comparing raw CPU clock speed is a bad metric. A new i5 clocked at 3.1 GHz will absolutely wipe the floor with a 3.4 GHz Pentium 4, even for single-threaded workloads.
Another name for this in a properly regulated industry would be fraud. It's like if your 4 cylinder car engine was still called and labeled a V8 because "it has the power of a V8."
the "nm" doesn't mean what you think it means, since some time ago it's become unrelated to actual physical dimensions, and now it just means "a better process".
This comment is the first time I've seen the why of this number explained, thanks. Like, makes sense, it must be tied to some relative scale that's vaguely comparable across companies otherwise it's just kind of silly. I obviously can understand the 'marketing speak' argument but past a certain point it becomes literally nonsense if people are just using arbitrary numbers.
One extra complicating factor is that each process node will have several different transistor libraries that make different trade-offs between density and power/frequency. So a smartphone SoC will tend to have a higher MTr/mm² number than a server CPU or GPU on the same node.
Yeah, the difference is more or less a factor of 3. The node gives you a smallest transistor size but making some transistors bigger than others lets you reach higher frequencies.
yeah, it's probably just the precision of the process, but you might need 2-3nm actual layers and features to operate. That said, people were dubious of 7nm and said it would never happen. Now we're talking about 2nm and beyond .. so who knows
People were dubious of 7nm back in 2005 because back then we had '90nm' devices where the 90nm referred to the gate width.
Because making a 7nm gate width is not only impractical (even the most advanced EUV lithography can't do it) but would also make the transistors work terribly--the fact that everyone from 2005 was referring to--the industry was forced to innovate. Their clever solution was to change their naming convention: instead of naming each technology node after the actual gate width, they just assign it an arbitrary number which follows Moore's law. [1]
The actual gate width for a '7nm' process is somewhat ill-defined (the transistors look nothing like a textbook transistor), but depending on how you measure it, the number comes in somewhere between 30-60nm. [2] Note that there is a number in the 7nm dimensional chart that comes in at 6nm, but that is the gate oxide thickness, which is actually getting _thicker_ over time; back in 90nm it was 1-2nm thick.
That said, those skeptical of us ever producing a '7nm' transistor back in 2005 were right--by the naming convention used in 2005 we are still at ~40nm. I am sure that you will be able to buy a '2nm' processor according to the roadmap, but the actual transistors are still going to have a gate width closer to 30nm and their electrical performance is going to be within a factor of 2 of our current '7nm', and honestly probably going to be clocked slower.
Measuring gate width also stopped being relevant. Densities continue to increase, and significantly so, despite gate-width staying relatively constant.
A 45nm process:
i7-880, 45nm
774 million transistors
296 mm²

A "14nm" process:
i7-6700K, 14nm
1.75 billion transistors
122 mm²
That's still a huge increase in density. It no longer means what it used to, but the spirit of the definition is still very much alive.
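Quick check of the density jump implied by those two chips, using only the figures quoted above:

```python
# Transistor density (MTr/mm^2) for the two chips cited above.
chips = {
    "i7-880 (45nm)":   (774e6,  296),  # (transistors, die area in mm^2)
    "i7-6700K (14nm)": (1.75e9, 122),
}

densities = {}
for name, (transistors, area_mm2) in chips.items():
    densities[name] = transistors / area_mm2 / 1e6  # MTr/mm^2
    print(f"{name}: {densities[name]:.1f} MTr/mm^2")

ratio = densities["i7-6700K (14nm)"] / densities["i7-880 (45nm)"]
print(f"density improvement: {ratio:.1f}x")
```

That works out to roughly 2.6 vs 14.3 MTr/mm², about a 5.5x improvement between those two parts.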
It is a significant increase in density, but it falls well short of the expectation. Density is somewhat hard to compare because it depends on how the chip is laid out and the ratio of cache to logic, but if we go by the size of a single SRAM cell (containing 4 transistors) we can make a relatively fair comparison. In 90nm an SRAM cell was 1um^2, and in 7nm it is 0.027um^2: an increase in density of 37x.
The expected scaling is that transistor density should have scaled with gate length squared, since the structures are laid out in a 2-D grid (for example, the 0.8um process used in the 8088 had an SRAM cell size of 120um^2, compared to 1um^2 for 90nm: a factor of 120x for a roughly 10-times-smaller process). So one would have expected a 165x improvement moving from 90nm to 7nm.
Unsurprisingly, the missing factor of ~5 is the same as the factor between the process node name ('7nm') and the actual gate length (~35nm).
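That scaling argument can be reproduced numerically from the SRAM cell sizes quoted above:

```python
# SRAM-cell scaling: observed density gain vs. the ideal
# gate-length-squared scaling the node names would imply.
sram_90nm = 1.0    # um^2 per cell at 90nm
sram_7nm = 0.027   # um^2 per cell at '7nm'

actual = sram_90nm / sram_7nm    # observed density improvement
expected = (90 / 7) ** 2         # ideal scaling if '7nm' were literal
missing = expected / actual

print(f"actual:   {actual:.0f}x")        # ~37x
print(f"expected: {expected:.0f}x")      # ~165x
print(f"missing factor: {missing:.1f}")  # ~4.5, close to 35nm / 7nm = 5
```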