Hacker News

How is it possible to have 1.4nm transistors?



Names with "nm" in them are just commercial names. They don't match from company to company.

Transistor density in millions of transistors per square millimeter is more relevant. For example: Intel 10nm is 101 MTr/mm² and TSMC 7nm Mobile is 97 MTr/mm², so they are very similar.


* TSMC’s 7nm+ EUV is 115.8 MTr/mm²

* TSMC’s 5nm EUV is 171.3 MTr/mm²

Source: https://www.techcenturion.com/7nm-10nm-14nm-fabrication

I'd also like to find historical data on MTr/mm², just to see if there's a high correlation with the nm names.
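Using only the numbers quoted above (vendor-reported, approximate), a quick sketch shows the names don't line up across companies: if density really scaled as 1/nm², then density × nm² would be roughly constant.

```python
# Node name vs. reported density (MTr/mm^2), figures as quoted in this thread.
nodes = [
    ("Intel 10nm",      10, 101.0),
    ("TSMC 7nm Mobile",  7,  97.0),
    ("TSMC 7nm+ EUV",    7, 115.8),
    ("TSMC 5nm EUV",     5, 171.3),
]

# If density scaled ideally as 1/nm^2, density * nm^2 would be constant.
for name, nm, d in nodes:
    print(f"{name}: density * nm^2 = {d * nm**2:.0f}")
# Intel's "10nm" figure lands ~2x above the TSMC "7nm" figures,
# i.e. the nm names are not comparable across vendors.
```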


Wikipedia has tons of transistor count and die size data for CPUs and GPUs. See eg. https://en.wikipedia.org/wiki/List_of_Nvidia_graphics_proces...

When comparing across generations, you'll get the most accurate picture if you stick with the same kind of chip (eg. desktop-class GPUs) and same vendor so that they're more likely to count transistors the same way from one year to the next.


Because "nm" doesn't mean nanometer anymore. Not in the context of CPUs, anyway. Some time back, around the 34nm era, CPU components stopped getting materially smaller.

Transistor count plateaued. Moore's law died.

To avoid upsetting and confusing consumers with this new reality, chip makers agreed to stop delineating their chips by the size of their components, and to instead group them into generations by the time that they were made.

Helpfully, in another move to avoid confusion, the chip makers devised a new naming convention where each new generation uses "nm" naming as if Moore's law had continued. Say, for example, in 2004 you had chips with 34nm NAND, and your next-gen chips in 2006 are 32nm; then all you do is calculate what the smallest nm would have been if chip density had doubled, and you use that size for marketing this generation. So you advertise 17nm instead of 32nm.
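The renaming scheme described above, as a toy function (purely illustrative; real node names are picked per vendor, not computed like this):

```python
def next_marketing_nm(prev_nm):
    # Advertise the size the features *would* be if density had kept
    # doubling, i.e. halve the number each generation regardless of
    # what physically shrank.
    return prev_nm / 2

print(next_marketing_nm(34))  # 17.0, as in the example above
```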

Using this new naming scheme also makes it super easy to get to 1.4nm and beyond. In fact, because it's decoupled from anything physical, you can even get to sub-Planck scale, which would be impossible under the old scheme.

Edit: Some comments mention that transistor count and performance are still increasing. While that is technically true, I did the sums: the Intel P4 3.4 GHz came out in 2004, and if Moore's law had continued, we would have 3482 GHz or 3.48 TERAHERTZ by now.
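The extrapolation in that edit works out like this (doubling every 2 years for 20 years, i.e. 10 doublings):

```python
# 3.4 GHz in 2004, doubled every 2 years through 2024.
base_ghz = 3.4
projected_ghz = base_ghz * 2 ** (20 / 2)   # 3.4 * 1024
print(f"{projected_ghz:.0f} GHz")          # 3482 GHz ~= 3.48 THz
```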


> Some time back, around the 34nm era, CPU components stopped getting materially smaller.

> Transistor count plateaued.

No. Transistor count has continued to increase. The "nm" numbers still correlate with overall transistor density. The change is that transistor density is no longer a function purely of the narrowest line width that the lithography can produce. Transistors have been changing shape and aren't just optical shrinks of the previous node.


The fact that frequencies stopped going up was the breakdown of Dennard Scaling[1], not a breakdown of Moore's Law[2]. We're still finding ways to pack transistors closer together and make them use less power even if frequency scaling has stalled.

[1]https://en.wikipedia.org/wiki/Dennard_scaling

[2] Though Dennard's paper came out in 1974 and the term "Moore's Law" was coined in 1975 so they've always been a bit confused.


You appear to be conflating a few different issues here.

1) Transistor density has continued to increase. The original naming convention was created when we only used planar transistors. That is no longer the case. More modern processes build three-dimensional structures that condense the footprint of packs of transistors. Moore's law didn't die; it just slowed.

2) Clock speed is not correlated to transistor size. The fundamentals of physics block increases in clock speed. Light can only travel ~30cm in 1 billionth of a second (one cycle at 1GHz), and electrical signals move at only 50%-99% of the speed of light, depending on the conductor. What's the point of having a 1THz clock when you will just be wasting most of those clock cycles propagating signals across the chip or waiting on data moving to/from memory? Increasing clock speed also increases the cost of use because it requires more power, so at some point a trade-off decision must be made.
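The light-travel figure is easy to check directly (vacuum speed of light; real on-chip signals are slower):

```python
C = 299_792_458  # speed of light in vacuum, m/s

def cm_per_cycle(freq_hz):
    # Distance light covers in one clock period, in centimeters.
    return C / freq_hz * 100

print(round(cm_per_cycle(1e9), 1))   # ~30.0 cm per cycle at 1 GHz
print(round(cm_per_cycle(1e12), 3))  # ~0.03 cm (0.3 mm) at 1 THz
```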


You're incorrect: transistor count has not plateaued. [1] Furthermore, Moore's law is about transistor count, NOT clock speeds. The fact that we are not at X.XX THz has nothing to do with Moore's law.

[1]: https://en.wikipedia.org/wiki/Moore%27s_law#/media/File:Moor...


Do you have any sources for this? Where could one learn about this more carefully? I mean isn't this a major marketing fraud?


In recent years the node names don't correspond to any physical dimensions of the transistors anymore. But since density improvements are still being made, they just continued the naming scheme.

https://en.wikichip.org/wiki/technology_node#Meaning_lost


Because the naming is based on the characteristics as measured against a "hypothetical" single-layer plain CMOS process at that feature size. This isn't new; the nm scale stopped corresponding to physical feature size a long time ago.


Plenty of info here. Enjoy https://en.wikichip.org/wiki/technology_node

"Recent technology nodes such as 22 nm, 16 nm, 14 nm, and 10 nm refer purely to a specific generation of chips made in a particular technology. It does not correspond to any gate length or half pitch. Nevertheless, the name convention has stuck and it's what the leading foundries call their nodes"

..."At the 45 nm process, Intel reached a gate length of 25 nm on a traditional planar transistor. At that node the gate length scaling effectively stalled; any further scaling to the gate length would produce less desirable results. Following the 32 nm process node, while other aspects of the transistor shrunk, the gate length was actually increased"


That's some pretty bullshit quote-mining there. You stopped right before the important part:

"With the introduction of FinFET by Intel in their 22 nm process, the transistor density continued to increase all while the gate length remained more or less a constant."

I'll repeat it for you, since you seem to keep missing it: transistor density continued to increase.


> I mean isn't this a major marketing fraud?

This isn't marketing fraud because you aren't being sold transistors like you buy lumber at Home Depot.

Instead, you buy working chips with certain properties whose process has a name like "10 nm" or "7 nm". Intel et al. have rationalizations for why certain process nodes are named in certain ways; that's enough.


>This isn't marketing fraud because you aren't being sold transistors like you buy lumber at Home Depot.

Funny you say that, because "two by fours" used to be 2" x 4", but became progressively thinner as manufacturing processes improved.


I thought it was because they planed the wood for you, a 2x4 is 2x4 before kiln drying and planing. https://www.thesprucecrafts.com/why-isnt-a-2x4-a-2x4-3970461

That said, I'm not sure why they don't sell it by its actual size.


>However, even the dimensions for finished lumber of a given nominal size have changed over time. In 1910, a typical finished 1-inch (25 mm) board was 13⁄16 in (21 mm). In 1928, that was reduced by 4%, and yet again by 4% in 1956. In 1961, at a meeting in Scottsdale, Arizona, the Committee on Grade Simplification and Standardization agreed to what is now the current U.S. standard: in part, the dressed size of a 1-inch (nominal) board was fixed at 3⁄4 inch; while the dressed size of 2 inch (nominal) lumber was reduced from 1 5⁄8 inch to the current 1 1⁄2 inch.[11]

https://en.wikipedia.org/wiki/Lumber#Dimensional_lumber


Despite the change from unfinished rough cut to more dimensionally stable, dried and finished lumber, the sizes are at least standardized by NIST. Still a funny observation!


They make use of "innovative" ways of calculating the gate length.


It is common knowledge even among consumers who pick PC parts.


So the theory is that Intel and others do this for marketing purposes. In other words, they predict that they will sell more parts if they name them this way instead of quoting the physical dimensions. There is no other reason to do this than for marketing purposes.

That must mean that this marketing works to some degree. Therefore, it cannot be common knowledge among everyone who buys PC parts. Or it might be somewhat known but still affect their shopping choices. If it were truly common knowledge, there would be no incentive to keep naming them this way.


It does not matter. Those that know use it as the ID for a process and it is fine. Those that don't, don't really need to know.

There is much, much worse marketing out there to tackle first.


>I did the sums: the Intel P4 3.4 GHz came out in 2004, and if Moore's law had continued, we would have 3482 GHz or 3.48 TERAHERTZ by now.

Comparing raw CPU clock speed seems like a bad metric. A new i5 clocked at 3.1 GHz will absolutely wipe the floor with a 3.4 GHz Pentium 4, even for single-threaded workloads.

https://cpu.userbenchmark.com/Compare/Intel-Pentium-4-340GHz...


Sure. But how would it compare to a CPU clocked at 3.5 THz?


Ok, so according to this I have a 50 cylinder car because it has that much horsepower. (based on 1910-era horsepower-to-cylinder ratio)


And James Watt based horsepower itself on the 1700s horse-to-mining pony ratio.


Correct.


Another name for this in a properly regulated industry would be fraud. It's like if your 4 cylinder car engine was still called and labeled a V8 because "it has the power of a V8."

"Anyone seen the new V12s this year?"


the "nm" doesn't mean what you think it means; some time ago it became unrelated to actual physical dimensions, and now it just means "a better process".

EDIT: for something to read https://en.wikipedia.org/wiki/10_nanometer


So it’s marketing speak.


Not exactly, it means something like "it has the same transistor density as if we shrunk transistors from about 20 years ago to 1.4 nanometers"


This comment is the first time I've seen the why of this number explained, thanks. Like, makes sense, it must be tied to some relative scale that's vaguely comparable across companies otherwise it's just kind of silly. I obviously can understand the 'marketing speak' argument but past a certain point it becomes literally nonsense if people are just using arbitrary numbers.


Never knew... OK, so instead of nm, it's really being used as a density measure. Shouldn't they just be using transistors per mm², or maybe per cubic micron, µm³?

I guess it's the same as LEDs: watts vs. lumens.


Apparently it's easier to do some hocus-pocus and use nm than to transition to transistors per square micrometer. See this chart of how they relate:

https://en.wikichip.org/wiki/File:5nm_densities.svg


One extra complicating factor is that each process node will have several different transistor libraries that make different tradeoffs between density and power/frequency. So a smartphone SoC will tend to have a higher MT/mm^2 number than a server CPU or GPU.


Yeah, the difference is more or less a factor of 3. The node gives you a smallest transistor size but making some transistors bigger than others lets you reach higher frequencies.


Thanks for this - great explanation!


A Louis Rossmann meme that spawned a coffee mug:

They're not lying, it's commercial real estate.

https://youtu.be/7Tzz7-aOKHU?t=2m04s


Shrinkage!


yeah, it's probably just the precision of the process, but you might need 2-3nm of actual layers and features to operate. That said, people were dubious of 7nm and said it would never happen. Now we're talking about sub-2nm... so who knows.


People were dubious of 7nm back in 2005 because back then we had '90nm' devices where the 90nm referred to the gate width.

Due to the fact that making a 7nm gate width is not only impractical (even the most advanced EUV lithography can't do it) but would also make the transistors work terribly--the fact that everyone from 2005 was referring to--the industry was forced to innovate. Their clever solution was to change the naming convention: instead of naming each technology node after the actual gate width, they just assign it an arbitrary number which follows Moore's law. [1]

The actual gate width for a '7nm' process is somewhat ill-defined (the transistors look nothing like a textbook transistor), but depending on how you measure it, the number comes in somewhere between 30-60nm. [2] Note that there is a number in the 7nm dimensional chart that comes in at 6nm, but that is the gate oxide thickness, which is actually getting _thicker_ over time. For example, back at 90nm it was 1-2nm thick.

That said, those skeptical of us ever producing a '7nm' transistor back in 2005 were right: by the naming convention used in 2005, we are still at ~40nm. I am sure that you will be able to buy a '2nm' processor according to the roadmap, but the actual transistors are still going to have a gate width closer to 30nm, their electrical performance is going to be within a factor of 2 of our current '7nm', and honestly they'll probably be clocked slower.

[1] https://en.wikichip.org/wiki/technology_node

[2] https://en.wikichip.org/wiki/7_nm_lithography_process


I think I've read some knowledgeable silicon guy saying he wouldn't bet a dime on 7nm, quite a while after 2005 (~2012). But I forget the details.


Measuring gate width also stopped being relevant. Densities continue to increase, and significantly so, despite gate-width staying relatively constant.

A 45nm process:

    i7-880, 45nm
    774 million transistors
    296 mm2
A "14nm" process:

    i7-6700k, 14nm
    1.75 billion transistors
    122 mm²
That's still a huge increase in density. It no longer means what it used to, but the spirit of the definition is still very much alive.
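The densities implied by those figures (transistor count divided by die area, counts as quoted above):

```python
# Density from the quoted figures: transistors / die area.
chips = {
    "i7-880 (45nm)":   (774e6, 296.0),   # transistor count, die area mm^2
    "i7-6700K (14nm)": (1.75e9, 122.0),
}
for name, (count, area) in chips.items():
    print(f"{name}: {count / area / 1e6:.1f} MTr/mm^2")
# ~2.6 vs ~14.3 MTr/mm^2: roughly a 5.5x density increase
```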


It is a significant increase in density, but it falls well short of the expectation. Density is somewhat hard to compare because it depends on the way the chip is laid out and the amount of cache vs. logic, but if we go by the size of a single SRAM cell (containing 6 transistors) we can make a relatively fair comparison. At 90nm an SRAM cell was 1um^2, and at 7nm a cell is 0.027um^2: an increase in density of 37x.

The expected scaling is that transistor density should have scaled with gate length squared, since the structures are laid out in a 2-D grid. (For example, the 0.8um process used in the 8088 had an SRAM cell area of 120um^2, compared to 1um^2 at 90nm: a factor of 120x for a roughly 10-times-smaller process.) So one would have expected a 165x improvement moving from 90nm to 7nm.

Unsurprisingly, the missing factor of 5 is the same factor between the process node name ('7nm') and actual gate length (~35nm).
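Spelling that arithmetic out (a quick sketch using the cell areas quoted above; the ~35nm gate length is the approximate figure from earlier in the thread):

```python
# SRAM cell areas as quoted: 90nm node vs "7nm" node.
area_90nm = 1.0     # um^2 per cell at 90nm
area_7nm = 0.027    # um^2 per cell at "7nm"

actual = area_90nm / area_7nm    # ~37x density improvement
expected = (90 / 7) ** 2         # ~165x if area scaled as length^2
missing = expected / actual      # ~4.5, close to 35nm / 7nm = 5

print(round(actual), round(expected), round(missing, 1))  # 37 165 4.5
```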


Think of it as 1400 pm transistors.



