> Nvidia needs 65% more 5nm wafers to produce the same number of GPUs. Basically all else being equal and scaled, AMD has 65% more capacity than Nvidia, when it comes to the most critical part of the production.
> 5nm dies are the most expensive part of the whole solution, meaning there is also a 65% pricing advantage (though some of this advantage is offset by more complex packaging and other cheaper dies that go into mi300x as well as more HBM chips).
That doesn't say anything about what fraction of the finished product's cost the die represents, though. Those numbers are equally consistent with the die being, say, 2% vs 6% of the cost of the card.
Pricing breakdowns are going to be hard to come by, as they're negotiated and confidential. But I think it would be safe to say that leading-edge core dies are going to be a sizable plurality (if not majority) of the cost. Only memory would come close (and some of those Reddit comments suggest more, due to it being HBM), as power delivery and other small components like resistors just aren't that expensive.
The trickier bit to measure is R&D, as it's both a fixed cost and one that can be spread across several products.
The 608.5 mm² 4090 with 76.3 B transistors isn't too far off from the 814 mm² H100 with 80 B transistors. Seems like the extra area is mostly memory interfacing?
Assuming they're not selling the 4090 at a loss, and being generous by treating the whole $1,600 price as chip cost, that caps the die at $1,600. Scaled by area to the H100, we get about $2,140. That's a pretty small fraction of a $35k card.
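The back-of-the-envelope above can be sketched out explicitly. These are the thread's assumptions (4090 MSRP as a die-cost upper bound, linear scaling by area), not actual BOM figures:

```python
# Upper-bound the 4090 die cost at its $1,600 card price, then scale
# by die area to estimate the H100 die cost. Thread assumptions only.
rtx4090_area_mm2 = 608.5
h100_area_mm2 = 814.0
rtx4090_price = 1600  # generous: treat the whole card price as die cost

h100_die_cost = rtx4090_price * (h100_area_mm2 / rtx4090_area_mm2)
print(f"Area-scaled H100 die cost estimate: ${h100_die_cost:,.0f}")
print(f"Fraction of a $35k card: {h100_die_cost / 35_000:.1%}")
```

This lands around $2,140, i.e. roughly 6% of a $35k card, which is the fraction the comment is gesturing at.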
Good point... at ~35 good chips per wafer (though I would -hope- Nvidia provisioned things sanely to help yields, or otherwise bins partial dies), an increase in fully good chips is still better than binning.