Nvidia deeply unhappy with TSMC, claims 22nm essentially worthless (extremetech.com)
149 points by evo_9 on March 23, 2012 | 76 comments



Very interesting. Those really are pretty stunning words for a business relationship as big as this. If I understand correctly, Nvidia has nowhere else to go since TSMC is so far ahead of other independent foundries (I'm not sure how independent GF really is - is it conceivable for Nvidia to work with GF?) and it can't very well go to Intel given the amount of competition they are engaged in. Maybe Samsung?

Also, I'm not clear on the arguments presented in the slides. The transistor-normalized price curves for 20nm and 14nm do eventually cross over the preceding curves, unlike the 40nm curve. And even putting the same transistor count on a smaller process node can result in lower power consumption once the leakage problems are mitigated, so the value is increased.

Edit: For historical background, here's a fascinating conversation from 5 years ago between Morris Chang (founder and chairman, TSMC) and Jen-Hsun Huang (CEO, Nvidia):

http://www.youtube.com/watch?v=u-x7PdnvCyI


it can't very well go to Intel given the amount of competition they are engaged in

That's not the reason. Intel's fabs are at capacity producing chips with much higher margins than GPUs. Check the price per die area on a GeForce card vs. the CPU that goes on the same motherboard, and then remember that the video card is assembled and stuffed (with a ton of DRAM) and board-tested, whereas the CPU just has to be packaged and shipped. The economics simply wouldn't allow it even if Intel wanted to play.


> remember that the video card is assembled and stuffed (with a ton of DRAM) and board tested where the CPU just has to be packaged and shipped

I used to design CPU tests for a major x86 CPU manufacturer. 100% of manufactured chips are tested, sometimes at a length of many hours.


Maybe that was unclear. Obviously all ICs are tested. But the integrated board requires an extra step that the packaged CPU does not.


[deleted]


Erm, you're off by a process node. Haswell is produced at 22nm, not 28nm, and it's due for public release in 2013. However, Ivy Bridge is also produced on 22nm, and that's the one that will be coming out in a few months.

That's still a half node[1] ahead, but that's mostly a matter of Intel specifically, rather than CPU manufacturers in general, being fast to adopt more advanced process nodes. [1] http://en.wikipedia.org/wiki/Half-node#Half-shrink


"Those really are pretty stunning words for a business relationship as big as this. "

Just another chapter, like the breakup of Wintel. The 500-pound gorilla in the room is that the PC business is no longer king. A company like TSMC has limited resources, and as demand for them skyrockets from mobile companies, the price goes up.


> (I'm not sure how independent GF really is - is it conceivable for Nvidia to work with GF?)

AMD just gave up their stake in GlobalFoundries, so I'm assuming so.


I would only add that they gave up their stake after issues with yield for Bulldozer and their Llano platform, which is a generation behind at 32nm -- if NVIDIA wanted to play nice with GF at 22nm, I don't know how that would happen.

At the absolute cutting edge I think there are only 2 or 3 companies on the planet stamping hardware at that level: Intel is one of them, TSMC is another, and I am sure IBM is in there somewhere (does IBM have its own foundries? Given their PowerPC history and then the Cell processor I assumed so, but maybe all that production was external?)


Yes, IBM has fabs that are similar to GF.


Samsung is the right choice given Sammy's good track record as a foundry for Apple and for its own SoCs with high-end GPUs, and it uses the latest technologies like low-power high-k metal gate, etc.



I'm sure there's a lot of interesting dirt here. But some of the analysis is a little weird. The cost-per-transistor metric isn't the right one, for a start. Imagine a world where you could get all the 1975-era 5v 74HCxx chips you wanted for free, with zero manufacturing cost. Would you choose to build a smartphone out of them? Of course not. The newer transistors are better (faster, lower power) and the products built out of them are more valuable.

So yes, production costs have increased. And that may have (is having, I guess) effects on the speed at which new technologies are adopted (i.e. the crossover into "worth it" is delayed).

But isn't that just a way of saying that semiconductors are finally becoming a mature technology? That's not really such a shock, nor does it justify the poo-flinging at TSMC.

So the real question in my mind is whether TSMC is having problems that the other foundries (Samsung or GlobalFoundries, say) are not. Given that Intel has been sampling 22nm parts already, it seems like the real news here is that TSMC sucks and the bit about production costs is just ammunition.


Thanks to the end of Dennard scaling, newer transistors are only marginally better. If NVidia doesn't care about those marginal improvements in performance and power, it seems legitimate for them to complain about price.

If an entire industry's financial projections are based on exponential improvement then becoming a mature technology is apocalyptic, especially if it happens earlier than predicted.


Hmm, the other day I was talking about exactly this with a coworker: that transistor cost is not really getting cheaper once you factor in R&D and the increasingly complicated manufacturing process.

I think Intel will be able to bundle a "good enough for playing AAA games in 1080p with maxed out graphics" video processor into the main CPU in about one or two years, maybe less. Those CPUs will be more expensive than the tinier (in transistor count) ones. This will happen around when they hit 14nm. NVIDIA will go out of business or become a niche company dedicated to vector processing for scientific computation or heavy workstations, and maybe a small player in the ARM market with their Tegra architecture. Maybe AMD will go the same road, making something similar to Intel but using their ATI technology.

Bonus 1: too bad, no more Singularity mumbo-jumbo in less than two years.

Bonus 2: I am one of the few that believes ARM will never be relevant in the desktop/laptop/console/gaming segment, because of what I think Intel will come up with.


I don't buy it. The transistor counts of modern GPUs far exceed those of even multi-core CPUs. It's not rational to say "oh, well, that's all just trivial engineering, it'll go away". Especially as GPU performance continues to be taxed to a far higher degree than the CPU typically is. Modern popular AAA PC games like Battlefield 3 or Starcraft 2 will eat all of the GPU you throw at them and then some.

And then you see what game devs are working on for the future such as this: http://www.youtube.com/watch?v=Gf26ZhHz6uM or this: http://www.youtube.com/watch?v=Q4MCqM6Jq_0 or this: http://www.youtube.com/watch?v=EhwZ7Sb0PHA and the notion of "good enough" quickly flies out the window.


Intel controls the whole platform now that they have the PCH, which enables them to put the GPU in the CPU. With Ivy Bridge you even have OpenCL support. There is nothing stopping Intel from embedding a proper GPU now that they don't have to fit it in the northbridge. The only reason NVIDIA is still in business is because embedded graphics sucked too much.


Hmmm, you are right... maybe I shouldn't have said "maxed out" but "mostly acceptable". I think that "maxed out" will be a niche market, especially since the computers able to really max out games are already niche, and PCs in general are losing market share. I'm not implying it's a trivial engineering problem, just a problem Intel will be able to solve in one or two years, delivering processors for a small box you plug into your TV and happily game on without investing money and real estate in a PC.



Many people play games on computers, true. I think pvarangot's contention is that most of them don't need high-ends graphics cards to do so.


I should have mentioned, that chart is for sales in 2011. Some of that is low-graphics gaming, but people aren't spending $18.6 billion on Bejeweled (maybe on Farmville, though). If you take just the segment of the market that is "high-end graphics gaming", it's still a multi-billion-dollar-a-year industry. Why would a self-sustaining industry of that size just spontaneously go away?


No, I too believe ARM has a huge hill to climb to matter anywhere else. Radical energy prices could affect that, but I've had SPARCs, MIPS, Itaniums, PowerPCs, and Alphas, and Intel always wins on performance per dollar and ease of being on main street. (Trust me, there is geek appeal to some exotic RISC chip, but after supporting it for a while, there is something very cool about just apt or yum installing the stuff you want and having it work.) Intel is insanely good at what they do.

It is fascinating seeing the split between design and fabrication come out like it has. As Sun faded out, I thought they were at a huge disadvantage without fabrication capability; it really looked like the only way to compete and matter was to do it all. That could ultimately end up being Intel's disadvantage if some of the fab companies can outdo them at fab costs; but like I said, Intel is insanely good at what they do.

It's also remarkable that NVidia does look like the odd man out. They seemed unbeatable for so many years there.


Why do you think everything will stop with 1080p? Next stop will be 1920x1200, then 2560x1600.

And I don't know a lot about hardware, but my GTX460 can't max out games even at 1080p and it's pulling 150 watts. If you packed all of that hardware and 1GB of video memory onto a CPU it would melt.


I don't know why you think a non-16:9 size will be the next one, especially since there is already a widespread 4K standard (3996x2160, almost 16:9-ish), and it is almost irrelevant in adoption outside specific niches like digital cinema or some countries like Japan.

I think 1080p is here to stay, at least for three or four years. TV manufacturers seem to be focused more on things like 3D than on increasing resolution. I don't blame them: 1080p for movies had very slow acceptance, and 1080p is already pretty much indistinguishable from 4K when viewed from more than 3-4 meters away or when projected at less than 60 inches of diagonal size.


Why are televisions interesting? PS3 and Xbox graphics are so many years behind PC now...

Have you seen one of the new iPad screens? It's hard to believe that that kind of DPI on computer monitors isn't going to have huge adoption.


> Why are televisions interesting? PS3 and Xbox graphics are so many years behind PC now...

They can catch up as fab processes make their way into consoles, or into some sort of gaming set-top box that works more like a PC. I believe it's sort of a common assumption that PCs are going niche.

> Have you seen one of the new iPad screens? It's hard to believe that that kind of DPI on computer monitors isn't going to have huge adoption.

I haven't. I'm sort of into photography (see profile) and would like to, in order to see some of my photos displayed at that DPI on an iPad held at about 20-30cm from my face. That resolution should really help get more of a depth effect in full-screen photos, and maybe some day we'll no longer need to print them in order to fully appreciate them. I know I'll drool in awe when I see one; it did happen with the iPhone 4 :P

But... remember, resolution is all about distance, because as you put things further away pixels seem smaller. According to Apple, "retina display" is 57 arcseconds; that's (I believe) an iPad 3 at 30cm. WARNING: I didn't find an easy calculator for these numbers, please someone correct me if I'm wrong (see the rough sketch at the end of this comment): a 46 inch 1080p TV at 2.5m should present you with smaller pixels than that, around 45 arcseconds. A 27 inch computer monitor with 2560x1440, like the iMac display or the Dell U2711, at 60cm has a pixel size of around 75 arcseconds. So we are almost there for the best displays, and definitely already there for big TVs, taking into account average viewing distance. Since I imagine graphics-intensive gaming will take place mostly on TVs using something more like a console than a PC, I don't see a bright future for NVIDIA, but I do see most gaming taking place on Intel SoCs.

Also, to complete my numbers, a 23 inch 1080p monitor at 60cm should be more in the 85-arcsecond territory... so maybe there is still some leeway for graphics cards, but they'll definitely hit a roadblock soon.
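
For anyone who wants to check that arithmetic, here is a rough back-of-the-envelope sketch in Python (a small-angle estimate assuming flat panels viewed head-on; the viewing distances are the same assumptions as above). It lands in the same ballpark as the figures quoted, give or take ten or so arcseconds:

    import math

    def arcsec_per_pixel(diag_inches, res_x, res_y, distance_m):
        # Approximate angular size of one pixel in arcseconds, deriving the
        # pixel pitch from the panel diagonal, aspect ratio, and resolution.
        aspect = res_x / res_y
        width_m = diag_inches * 0.0254 * aspect / math.sqrt(1 + aspect ** 2)
        pitch_m = width_m / res_x
        angle_rad = 2 * math.atan(pitch_m / (2 * distance_m))
        return math.degrees(angle_rad) * 3600

    print(arcsec_per_pixel(9.7, 2048, 1536, 0.30))  # iPad 3 at 30cm       -> ~66
    print(arcsec_per_pixel(46, 1920, 1080, 2.50))   # 46" 1080p TV at 2.5m -> ~44
    print(arcsec_per_pixel(27, 2560, 1440, 0.60))   # 27" 1440p at 60cm    -> ~80
    print(arcsec_per_pixel(23, 1920, 1080, 0.60))   # 23" 1080p at 60cm    -> ~91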


I recently built a new low-end computer with one of the AMD APUs in it, because it seems 'good enough' for that type of machine. However, it still doesn't perform as well as discrete parts. For about $80 you can buy a low-end video card better than anything integrated at the moment.

AMD has a full line of high-performance video chipsets to pull tech from, but they still just barely get to 'adequate' for low-end usage with the CPU/graphics combo. I don't see where Intel is going to leapfrog everyone else in this area, unless they buy NVidia (which would probably be blocked).


I believe the exact opposite. [1] is a meticulous story about Intel's Larrabee screwup. Nvidia and ATI are already the de facto choice for high-end gaming and graphics - Intel is going to find that hard to beat, short of an acquisition.

Windows 8 will be ARM-optimized, which means that Intel is genuinely worried. Nvidia's Tegra is already being used for servers... and of course, people are buying many more tablets than laptops.

PS3-graphics-capable handsets will be out in 2013, so Intel is truly looking for ways to stay relevant in an iPad world.

[1] www.brightsideofnews.com/print/2009/10/12/an-inconvenient-truth-intel-larrabee-story-revealed.aspx


> so Intel is truly looking for ways to stay relevant in an iPad world.

They can stay relevant in gaming. When/if PCs fade out and we go back to a big-servers/workstation/thin-client world, Intel will be able to live in all three of them. They are already doing well in big servers and workstations, and I think they are the only ones currently in a position to be the dominant provider of technology for the "gaming thin client", whatever it will look like... I don't know if it will be more like a console, or more of a small PC you plug into your TV. NVIDIA only has something to offer for the workstation type of PC.


Why do you say that? Intel has nothing like Tegra (Clover Trail is not shipping yet), which is already a GPU + CPU hybrid SoC. You don't need a separate CPU. Windows 8 is already running on Tegra 3 - http://www.anandtech.com/show/5614/microsoft-shows-windows-8...

Plus, the big, big reason is that there is already a ton of content out there from big publishers that is Tegra-optimized.

Tegra has been very successful in the thin-client/iPad world and is looking to screw with Intel's server world with Project Denver.

I'm not even talking about Nvidia's next-gen Kepler graphics card, released this week with serious power optimizations - Intel is seriously far away from that world.

In short, Intel has 0 in graphics capabilities - while Nvidia has 0.75 in CPU capabilities.


We won't know whether Larrabee was a screw-up until we see how successful Knights Corner is. A lot of groups are currently running prototypes under NDA. It's a programming-model debate as much as anything; the hardware capability is available in both models.


It was an epic screw-up to position it as a head-to-head competitor with NVIDIA and ATI for Direct3D. They should have positioned it as an HPC chip from the beginning and then perhaps later said: "Oh, btw, it has these texture samplers, so let's make a video driver just to see how this CPU of ours compares to GPUs for gaming." That would have set expectations at a much more realistic level.


What about Intel's mobile presence? Because they don't have much of one, and small devices are only going to continue to proliferate in the near future. It's a bold claim that "NVIDIA will go out of business" because Intel can produce integrated CPU/graphics offerings that are good enough on consumer laptops and desktops today and that will continue to get better, but that doesn't at all imply that Intel can just throw money at improving power efficiency to beat ARM or Tegra at their own game. Unless you know something we don't :P


See the Medfield phone. It's happening now: http://www.tomshardware.com/news/Intel-Medfield-Phone-Santa-...


This is so short-sighted it's almost laughable. "640K ought to be enough for anybody!", right? You think graphics are somehow going to flatten off in the near future? Far from it; we are going to continue to see exponential growth in graphics horsepower and developers leveraging that power in games and rendering.


There is no good-enough with AAA games short of Hollywood photo-realism, and that's a lot of GPU transistors away.


I meant good enough as in "good enough compared to what a dedicated card can offer for the same price", not "absolutely unbeatable graphics".

If transistor price doesn't decline for discrete graphics card manufacturers, your GPU will never again gain transistors unless you pay a hefty premium for the difference in transistor count. This is what NVIDIA is complaining about in the article. They will have to compete with Intel's bundled graphics, and not only in the lower segments of the market but also the mid and high/mid segments.


I agree that NVidia are looking at a brick wall.

But if people are willing to pay $1-2K for a gaming rig (and given that games are $100-200 plus a monthly subscription, they probably will), then the proportion of that cost that is GPU will increase for a few years.

The question is whether, in the 'post-PC era', it would be worth it for games companies to put out games that will play on a $500 home PC with an embedded Intel GPU - even if that GPU is as good as today's GTX 295. Between high-end gamers, family console gamers, and casual mobile gamers, is there any profit in good-enough?


That's not true if AAA games are ported from six-year-old consoles.


This presentation is more evidence that Moore's Law is dead. Nvidia and AMD can blame their partners, but the reality is that they're hitting increasingly costly technological hurdles.

Of course Nvidia can't just throw its hands up and say "welp, we've hit the wall" when it has someone to blame.

It's fascinating to watch those predictions about the end of Moore's Law come true.


Intel seems pretty confident that they will be able to get to 11nm in the next 5 years or so. Past that, shrinking transistors via purely "conventional" methods will mostly hit a wall, but everyone saw that coming.

There's been a lot of work on nanoelectronics. For example, some scientists recently announced at a semiconductor conference that they had found a way to bypass the diffraction limit completely, making 2nm features with a 650nm laser. Similarly, I know an engineer who worked on a massively parallel electron-beam-based etching device, which could also be applied at very small node sizes. There are a billion other technical challenges, but there's been a lot of progress in making features at the molecular level.

If anything, this is better than expected: EUV has had some serious problems over the past few years, yet its delay hasn't stopped the next few nodes.

Moore's law doesn't require that shrinkage to continue, though, as there are other technologies available. Memristors in particular may allow scaling faster than Moore's law would otherwise predict.


And then there is the third dimension to consider:

http://spectrum.ieee.org/semiconductors/devices/3d-chips-gro...

That comes with a very large set of engineering challenges, but if those are overcome you can expect another big jump.


You still have the problem of dumping the additional heat. So AFAICT, 3D will only provide some incremental advancement.


There are many ways to dramatically lower power usage if you're willing to accept slower speeds. Intel has recently demoed some extremely low-voltage transistors that used power in the microwatts. You can make up for this dramatically lower speed by adding far more circuitry using 3D chip-stacking.

Another possible solution is to use 3D chip stacking to add dedicated hardware for a wide variety of tasks, most of which is turned off when not being used.


There's the intermediate step to 3D - 2.5D. It's when you lay all your dies side by side on another piece of silicon (an interposer) instead of a PCB. It makes heat relatively easy to manage, and you can get much more bandwidth at much lower power (much shorter wire distances). And it's already in use for high-end FPGAs.


You mean a multi-chip module (MCM)? https://en.wikipedia.org/wiki/Multi-Chip_Module Those have been around forever.

Or something different?



Cool! Thanks.

I wonder if the price will ever come down to the point where it's used in consumer devices.


Plus, even if cooling between layers were figured out, we'll probably just end up with lots of transistors that we can't all power on at once, since we're already at practical power draw limits for most devices.

We don't just need to cram more transistors on a chip, we need those transistors to use proportionally less power (preferably even less than that).


The last time I did some reading into the limits of Moore's Law, about four years ago, the industry consensus was that ~12nm would be the practical limit for traditional CMOS scaling. It's interesting to see evidence that that prediction does appear to be holding up.


12nm might be a technical limit - but if those 12nm transistors cost more than 28nm transistors, then nobody wants them.

The main reason for going small is cost - silicon costs you per square inch - so smaller transistors are supposed to buy you more hardware for the same money. If that stops, there is less reason to go small.


To clarify, there are still advantages to 12nm, in that the speed of light allows the 12nm chip to be faster (all other things being equal) than the 28nm chip. However, there are limited uses for that speed at this point, which brings cost/performance into focus.

Further, part of bringing costs down involves efficiencies of scale. If you don't have the scale, the price will limit the output. It might be back to the supercomputer world for those who need that particular extra bit of speed.


Well, we're staring at the end of shrinking silicon feature sizes all right. There might be more order of magnitude improvements in store with the use of new technologies, but we should certainly be prepared for a lengthy interregnum.


Moore's law will hold true another 20 years at least. New computing architectures will be the driving force, not silicon processes.


Moore's law only describes silicon processes:

http://en.wikipedia.org/wiki/Moores_law

It has nothing to do with computing architectures.


If you are going to be pedantic about it, you're wrong as well: Moore's law describes neither chipmaking processes nor silicon. It describes a quantity of transistors that are in an integrated circuit. In fact, the first integrated circuit was the Darlington transistor, which would have been germanium at the time, and also had nothing to even do with computing.


Somewhat off topic, but does anybody know how to disable ExtremeTech's craptastic mobile interface? I've got twice the resolution of my laptop on my iPad, and the mobile site looks like it's been optimized for my circa-2002 Palm Treo 650.


+1, can't stand how ExtremeTech's mobile site hijacks an otherwise fine rich web experience. On Opera for iPad (which is great for browsing HN stories, btw, because of its awesome tabs), ExtremeTech defaults to the mobile site and it's just a blank white page.


I feel like this is a tech-problem that will have a tech-solution. No idea how that fits into an "International Trade Partner Conference".

That said, you can't outsource your complete manufacturing process only to come back later and wish for "virtual IDM" partnerships. That's all of the benefits and none of the risk.


Rhetoric aside, Nvidia is the designer who keeps breaking transistor count records and is a dominant OEM supplier. That makes them both a technology and business leader among TSMC's customers, so it makes sense for TSMC to devote resources to them, since the experience TSMC gains working with them can't be gained elsewhere and is applicable to its other customers down the road.


Pretty sure most of the (logic) transistor count records are held by FPGAs, no?


Not at the clock speeds that Nvidia chips run at.


I guess you're right:

http://en.wikipedia.org/wiki/Transistor_count

But they are very different beasts with different technology and a different (much smaller) market, right?


You are right: 2.6B for CPU, 4.3B for GPU and 6.8B for FPGA.

http://en.wikipedia.org/wiki/Transistor_count


What do they mean by "rough justice" in this context?


Reading between the lines, it sounds like NVidia thinks that TSMC is taking all their profit and they'd like to get a fair share.


If cost/transistor continues to not improve, industry-wide, it spells the end of Moore's Law, at least for mainstream applications where cost is a primary factor.

However... where size itself (and power consumption) are the primary factors, there will be demand, which means that SoC GPUs will adopt new process nodes more vigorously than video card GPUs. http://www.embedded.com/electronics-news/4304118/Imagination... Sounds like disruption.


I'm a bit surprised by the wafer pricing chart. Sure, the individual wafer cost goes up; but if you have a design with the same transistor count, the size of your chip goes _down_, which means that you'll have _more_ chips per wafer (assuming comparable yields), which means that the individual chip cost will go down.

(40/28)^2 ~ 2, so 28nm is not that bad compared to 40nm, assuming, again, comparable yields.
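
A minimal sketch of that trade-off in Python, with invented wafer prices and yields purely for illustration (the real numbers are exactly what the slides are arguing about): shrink the same design from 40nm to 28nm, roughly halve the die area, and see whether the higher wafer price and worse early yield still leave a cheaper transistor.

    import math

    def cost_per_transistor(wafer_price, die_area_mm2, transistors,
                            yield_frac, wafer_diameter_mm=300):
        # Gross dies per wafer via the usual area ratio with an edge-loss
        # correction, then wafer price divided by good dies and transistors.
        radius = wafer_diameter_mm / 2
        gross_dies = (math.pi * radius ** 2 / die_area_mm2
                      - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))
        good_dies = gross_dies * yield_frac
        return wafer_price / (good_dies * transistors)

    # Hypothetical 3B-transistor GPU; every price and yield here is made up.
    old = cost_per_transistor(5000, 500, 3e9, 0.80)  # mature 40nm process
    new = cost_per_transistor(8000, 245, 3e9, 0.50)  # early 28nm, ~half the die area
    print(new / old)  # ~1.16: costlier per transistor until yield and price improve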


This is precisely the analysis given in the per-transistor pricing crossover chart. That shows that the old rate of wafer price increase still allowed for significant per-transistor cost advantage fairly quickly. The whole point is that the rate of wafer price increase has itself increased to the point where these crossovers are barely happening anymore.


Problem is, they aren't going to have comparable yields at the same time. The newer tech will have a lower yield.


Nvidia doesn't have a choice no matter how unhappy they are with TSMC. TSMC is the biggest fab that every fabless manufacturer uses; every single fabless semi has no choice but TSMC. If TSMC can't deliver on 22nm technology, no other fab will be able to. 22nm is extremely difficult - most SKUs fail. Hence Intel wins.


Why do companies want to be design-only?


These days, it's not really up to a company whether it wants to be fabless or an IDM. Apart from the several billion dollars needed to build a competitive fab, you'll also need to contend with the 20+-year head start existing IDMs and foundries have on you in terms of experience and expertise. As far as I can tell, there are too many moving parts in a fab for a new company to simply poach a few key people and be competitive quickly. I can't think of anyone apart from Apple or a nation-state that has enough money to wait around while their new division learns how to build chips.

That aside, assuming a fab better than TSMC's showed up on your doorstep today, you're not out of the woods in terms of being dependent on suppliers. On the contrary, designing fab tools is also a very capital-intensive business and tends towards an oligopoly, so the TSMCs and Intels of the world are just as dependent on tool vendors as NVIDIA is on TSMC. The same holds for the very specialized materials and consumables fabs use, like the glass to make mask sets.


You could probably buy your way into the Common Platform, but it would still cost billions. I agree that it's not realistic for NVidia to try that.


Who is the best company that makes fab tools?


Most companies only make a small fraction of the tools you need for a full fab; for examples of heavy hitters in key areas like lithography, chemical mechanical polishing, and ion implantation, see ASML, Nikon, Canon, and Applied Materials. For any given tool, there are usually 1 to 3 major vendors.


From AMD's perspective:

• The ex-CEO was criticized for selling their mobile business to Qualcomm just before the Apple+Android explosion

• AMD spun off GloFo due to immediate liquidity problems but maintained a stake. Recently they have sold their stake, so if AMD is successful at being design-only, their actions are justified. But it would seem they are dwindling - i.e. Bulldozer is hardly a home run, and lots of sources are saying the move to automated layout of their transistors is a significant part of the problem - so being design-only could be seen as a mistake.

Obligatory: Intel uses extensive auto-layout and auto-analysis of designs; they seem to have the advantage in this area by merging automatic synthesis with focused optimizations by engineers.



