
Disclaimer: Qualcomm employee with no inside knowledge of the terms of the deal, opinions are my own.

Generally, fabless semiconductor companies use a variety of foundries and processes for their products. The choice largely depends on the technical requirements of the particular project as well as the cost of the node. In the early stages of a product development cycle, the team will choose a foundry/process and stick with it to the end. In rare cases the process will switch mid-stream, but this is costly because the design essentially has to start over. (For digital, this is true once the RTL is frozen; for analog/RF, a change at any point means starting over.)

Sometimes, second sources are added after the first source is in production. These tend to be lower-cost fabs like SMIC or UMC, which in turn design their processes to be drop-in replacements for top fabs like TSMC and Samsung, making it easy to switch over.

Given the above, and that Intel's process is not aimed at copying TSMC/SEC, I would guess that you'll slowly see a handful of non-critical products rolling out on "Intel 7". If the process turns out to be performant enough, you'll see more, possibly even higher-end, more critical product lines. Don't forget that Qualcomm isn't just about Snapdragon; there is a boatload of RF, power management, IoT and WiFi as well.




Would you mind sharing a bit about what a process implies for the design? I.e., why does switching processes require massive design changes?

thank you


It doesn't necessarily require a massive design change. The problem is that nearly every aspect of two different processes is different, even when there is supposed to be an "easy" migration path, such as when the new process is just an optical shrink of another process.

Specifically, the transistor properties are different, and so are the properties of the tiny metal wires connecting the transistors together. Once you get to the design phase where you're dealing with individual transistors (for analog this is right off the bat; for digital it's once you get to place & route), you spend thousands of hours simulating the behavior of the analog circuits, or the effects of the particular arrangement of digital logic gates, to make sure everything works. When the transistor and wire properties change, you at a minimum have to repeat all of that verification.
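
To make the wire side of that concrete, here is a toy Python sketch of the kind of number that shifts between processes. The resistance/capacitance values are invented, not from any real PDK, and real timing signoff uses extracted parasitics and far more accurate models than a one-line Elmore formula:

    def wire_delay_ps(length_um, r_ohm_per_um, c_ff_per_um):
        """Distributed-RC (Elmore-style) wire delay, in picoseconds."""
        r_total = r_ohm_per_um * length_um          # ohms
        c_total = c_ff_per_um * length_um * 1e-15   # farads
        return 0.38 * r_total * c_total * 1e12      # seconds -> picoseconds

    # Hypothetical metal-2 parameters for two different processes (invented numbers)
    process_a = {"r_ohm_per_um": 2.0, "c_ff_per_um": 0.20}
    process_b = {"r_ohm_per_um": 3.1, "c_ff_per_um": 0.17}

    for name, p in (("A", process_a), ("B", process_b)):
        print(f"Process {name}: 500 um wire ~ {wire_delay_ps(500, **p):.1f} ps")

Same wire, same logic, noticeably different delay, so every timing path that depends on it has to be checked again.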

Sometimes, the new process is different enough that an old design no longer works, and it takes a lot of engineering effort to fix it. Other times, a few small tweaks are enough. In some cases, particularly for digital-only ICs, the foundry will validate the migration between two processes, essentially saying, "we guarantee the IC you already designed will just work in this new process". Smaller companies might just go with it and accept any yield fallout, but bigger companies like Qualcomm would probably do their own independent verification.


so if I can summarize -- what we think of as a logical transistor starts to have all sorts of physical properties, some dictated by the material, the physical layout, and probably what's near it. Designers spend piles of time simulating this, and a process change requires that work to be redone.

Thanks again!


You got it right. Translating from a logical design to a physical design requires following rules laid out by a foundry for a given process node (5nm, 7nm etc.). To check whether a physical layout conforms to a node, Design Rule Checking [1] is performed as part of the Physical Verification step in EDA [2]. The simulation/checking is highly complex and takes hours (sometimes days) even with sufficient parallelization. Once a design passes the checks, it is deemed ready for manufacture by the foundry. (A toy spacing-check sketch follows the references below.)

[1] https://en.wikipedia.org/wiki/Design_rule_checking

[2] https://en.wikipedia.org/wiki/Electronic_design_automation#A...
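
For intuition only, here is a toy spacing check in Python. The rule value and shapes are made up; real foundry DRC decks contain thousands of rules and run in dedicated signoff tools on full-chip layouts:

    from itertools import combinations

    MIN_SPACING_NM = 40  # hypothetical metal-1 spacing rule (invented)

    def spacing(a, b):
        """Edge-to-edge distance between two axis-aligned rectangles (x1, y1, x2, y2)."""
        dx = max(b[0] - a[2], a[0] - b[2], 0)
        dy = max(b[1] - a[3], a[1] - b[3], 0)
        return (dx * dx + dy * dy) ** 0.5

    shapes = [(0, 0, 100, 20), (125, 0, 225, 20), (0, 80, 100, 100)]  # nm

    for a, b in combinations(shapes, 2):
        s = spacing(a, b)
        if 0 < s < MIN_SPACING_NM:
            print(f"VIOLATION: {a} vs {b}: spacing {s:.0f} nm < {MIN_SPACING_NM} nm")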


ty!


I have been doing semiconductor digital physical design for over 20 years.

The Verilog RTL for the digital portions of the design is often identical for the same design in two different foundry processes.

The logic cells (AND/OR gates, registers, etc.) are made out of transistors, and since the transistors are different, all of the logic cells are different.

The foundry gives you a library with these cells. You use a synthesis program to "compile" your Verilog into those logic gates.

One foundry may have 16 different types of AND gates with various sizes, drive strengths, and power usage. The other foundry may have only 14 types, and their 14X cell may be stronger than the other foundry's 16X cell. There is no direct mapping between them.
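
As a made-up illustration (these libraries and numbers are invented, not any foundry's real cells), this is roughly why the tools have to re-pick cells rather than substitute them one for one:

    # name -> (relative drive strength, relative leakage); everything here is invented
    foundry_a_and2 = {"AND2_X1": (1, 1.0), "AND2_X2": (2, 1.8), "AND2_X4": (4, 3.5),
                      "AND2_X8": (8, 6.9), "AND2_X16": (16, 13.0)}
    foundry_b_and2 = {"AND2_D1": (1, 0.9), "AND2_D3": (3, 2.6), "AND2_D6": (6, 5.1),
                      "AND2_D14": (14, 11.0)}

    def pick_cell(library, needed_drive):
        """Pick the weakest cell that still meets the required drive, else the strongest."""
        ok = [(d, n) for n, (d, _) in library.items() if d >= needed_drive]
        if ok:
            return min(ok)[1]
        return max((d, n) for n, (d, _) in library.items())[1]

    for drive in (2, 5, 16):
        print(f"need {drive}x drive: foundry A -> {pick_cell(foundry_a_and2, drive)}, "
              f"foundry B -> {pick_cell(foundry_b_and2, drive)}")

Different cells get picked for the same logical requirement, so timing and power have to be re-verified.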

One foundry may make all standard cells 200 nm in height. They are then placed in uniform rows by the automatic placer tool (Cadence Innovus or Synopsys IC Compiler).

The other foundry made their cells 220 nm in height. The block sizes in your floorplan are now going to be different.
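
Back-of-envelope, with invented numbers: the same number of cell rows gives a different block height, so the floorplan has to be reworked.

    # Everything invented: same logical block, two hypothetical cell heights
    rows = 1000          # rows of standard cells in one block
    height_a_nm = 200    # foundry A cell height
    height_b_nm = 220    # foundry B cell height

    print("block height, foundry A:", rows * height_a_nm / 1000, "um")  # 200.0 um
    print("block height, foundry B:", rows * height_b_nm / 1000, "um")  # 220.0 um, ~10% taller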

One foundry made their process for their internal mobile phone use so they focused on small cells for low power. Now you try to make your $3,000 GPU in this process and it won't work as well.

One foundry made metal layers 1-5 thin, 6-10 thicker, and 11-15 really big. The other foundry made metal layers 1-6 thin. In your old chip you decided to use metal 6 for a certain power layer. Now it is thin, and it isn't good for carrying power (though thin wire is better for signals). More stuff to change.
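
A rough sketch with invented sheet-resistance numbers: the same power strap drops a lot more voltage when that metal layer is thin.

    def ir_drop_mv(current_ma, length_um, width_um, sheet_res_mohm_per_sq):
        """IR drop along a uniform metal strap (very rough: lumped, DC only)."""
        squares = length_um / width_um
        resistance_ohm = squares * sheet_res_mohm_per_sq / 1000.0
        return current_ma * resistance_ohm   # mA * ohm = mV

    # Hypothetical metal-6 sheet resistances (invented numbers)
    thin_m6 = 90.0    # milliohms per square, "thin" signal-style metal
    thick_m6 = 20.0   # milliohms per square, "thick" power-style metal

    for name, rs in (("thin M6", thin_m6), ("thick M6", thick_m6)):
        print(f"{name}: {ir_drop_mv(5, 200, 2, rs):.0f} mV drop on a 200 um strap")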

It gets even worse for custom analog stuff like a PCIe/USB SerDes or PLL. The rules are even more different. You may need a high-voltage area with larger spacing around that section, while the other foundry let you put it closer together.

In my experience TSMC is the easiest to work with. They have a huge infrastructure to help external customers.

Samsung foundries are mainly used by Samsung so they aren't great to deal with as an external customer.

Intel has only had external customers for strategic things like Altera, which they ended up buying two years later. Intel optimizes their DRC rules for making one chip a billion times, so the rules are very restrictive to maximize yield. Meeting them takes more people and time.

TSMC is optimized for a thousand customers making a thousand different chips. They don't care if your yield is 1% lower and you probably don't either if you can get to market 3 months earlier with 10% fewer employees.

For Intel that extra 1% over a billion chips is worth hiring more people to meet those extra DRC rules.
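
Rough, made-up math on why that tradeoff flips at that kind of volume (none of these numbers are Intel's actual costs):

    # All numbers assumed for illustration only
    chips_per_year = 1_000_000_000
    cost_per_chip = 20.0      # assumed manufacturing cost per good die, in dollars
    yield_gain = 0.01         # one extra percentage point of yield

    print(f"rough value of +1% yield: ${chips_per_year * cost_per_chip * yield_gain:,.0f}/year")

At that scale, a percentage point of yield pays for a lot of extra engineering; at typical fabless volumes it usually doesn't.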


Given that almost all previous Intel foundry customers have failed, why should we expect a different result this time? Achronix is perhaps the most telling as FPGAs are incredibly regular and should have been a natural fit for Intel's manufacturing capabilities, but even Achronix ended up moving to TSMC.

Right now Altera FPGAs seem to have the same supply-constraint issues as Xilinx (my favorite check is to look at stock levels on Digi-Key), so I'm getting the impression that Intel is unable to crank them out any better than other foundries. Furthermore, if Intel really were determined, I'd have expected more than just a handful of Altera's SKUs to be on Intel's process 5.5 years after the acquisition.

My faith in Intel is not strong on this front. Does anyone else have reason to believe this time will be different?


thank you -- I built a (tiny) processor for class with Mentor Graphics, but laying out logical gates was as far down as I went. This is really fascinating and thanks again for sharing.


For analog/RF, Qualcomm would need a larger node than 7nm, as I doubt Intel has ready-made blocks for 7nm RF/analog.

The Qualcomm chip's digital part would be manufactured by Intel, and the analog/RF part by TSMC? I don't understand how that would work...



