
What made Intel seem unbeatable was its process node advantage. Nvidia does not have fabrication plants, so it is able to get the best process node from whoever has it. Nvidia is therefore not vulnerable to what befell Intel.

What makes Nvidia seem unbeatable is that Nvidia does the best job on hardware design, does a good job on the software for that hardware, and gets its designs out quickly enough that it can charge a premium. By the time the competition makes a competitive design, Nvidia has the next generation ready to go. They seem to be trying to accelerate their pace to kill attempts to compete with them, and so far it is working.

Nvidia does not just do the same thing better in each new generation; it tries to fundamentally change the paradigm to obtain better-than-generational improvements. That is how they introduced SIMT, tensor cores, FP8 and more recently FP4, just to name a few. While their competitors are still implementing the last round of improvements Nvidia made to the state of the art, Nvidia launches yet another round of improvements.

For example, Nvidia has had GPUs on the market with FP8 for two years. Intel just launched their B580 discrete GPUs and Lunar Lake CPUs with Xe2 cores, and there is no FP8 support to be seen as far as I have been able to gather. Meanwhile, Nvidia will soon be launching its 50 series GPUs with FP4 support. AMD’s RDNA GPUs are not poised to gain FP8 until the yet-to-be-released RDNA 4, and I have no idea when Intel’s Arc graphics will gain FP8. Apple’s recent M4 series does have FP8, but no FP4 support.
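
To make the FP8 discussion concrete, here is a toy sketch (in Python; my own illustration, not any vendor's implementation) of what rounding to an E4M3-style FP8 format does to ordinary values. Real hardware also deals with NaNs, rounding modes and per-tensor scaling factors, which this ignores:

    import math

    def quantize_fp8_e4m3(x: float) -> float:
        # Nearest representable value in an E4M3-style FP8 format:
        # 1 sign bit, 4 exponent bits, 3 mantissa bits, bias 7,
        # largest normal value 448.
        if x == 0.0 or not math.isfinite(x):
            return x
        sign = -1.0 if x < 0 else 1.0
        mag = abs(x)
        if mag >= 448.0:                 # saturate at the top of the range
            return sign * 448.0
        e = max(math.floor(math.log2(mag)), -6)  # clamp into subnormal range
        step = 2.0 ** (e - 3)            # 3 mantissa bits: 8 steps per power of two
        return sign * round(mag / step) * step

    for v in [0.1, 1.0, 3.14159, 300.0, 1000.0]:
        print(v, "->", quantize_fp8_e4m3(v))
        # 0.1 -> 0.1015625, 3.14159 -> 3.25, 300.0 -> 288.0, 1000.0 -> 448.0

With only eight steps per power of two, the error gets large quickly, which is why FP8 (and FP4 even more so) is useful mainly for neural network workloads that tolerate coarse values.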

Things look less bad for Nvidia’s competitors in the enterprise market: CDNA 3 launched with FP8 support last year, Intel had Gaudi 2 with FP8 support around the same time as Nvidia and has since launched Gaudi 3, and Tenstorrent shipped FP8 on the Wormhole processors that they released 6 months ago. However, FP4 support is nowhere to be seen with any of them, and they will likely not release it until well after Nvidia, just like nearly all of them did with FP8. This is only naming a few companies, too. There are many others in this sector that have not even touched FP8 yet.

In any case, I am sure that in a generation or two after Blackwell, Nvidia will have some other bright idea for changing the paradigm and its competition will lag behind in adopting it.

So far, I have only discussed compute. I have not even touched on graphics, where Nvidia has had many more innovations, on top of some of the compute-oriented changes being beneficial to graphics too. Off the top of my head, Nvidia has had variable rate shading to improve rendering performance, ray tracing cores to reinvent rendering, tensor cores to enable upscaling (I did mention overlap between compute and graphics), optical flow accelerators to enable frame generation, and likely others that I do not recall offhand. These are some of the improvements of the past 10 years, and I am sure that the next 10 years will have more.

We do not see Nvidia’s competition put forward nearly as many paradigm-changing ideas. For example, AMD did “Smart Access Memory” more than a decade after it had been standardized as Resizable BAR, which was definitely a contribution, but not one they invented. For something that they actually did invent, we need to look at HBM. I am not sure if they or anyone else I mentioned has done much else. Beyond the companies I mentioned, there are Groq and Cerebras (maybe Google too, but I am not sure) with their SRAM architectures, but that is about it as far as I know of companies implementing paradigm-changing ideas in the same space.

I do not expect Nvidia to stop being a juggernaut until they run out of fresh ideas. They have produced so many ideas that I would not bet on them running out of new ones any time soon. Had I bet against them, I would have expected them to run out of ideas years ago, yet here we are.

Going back to the discussion of Intel seeming unbeatable in the past: they largely did the same thing better in each generation (with occasional ISA extensions), which was enough when they had a process advantage, but not when they lost it. The last time Intel tried to do something innovative in its core market, they gave us Itanium, and it was such a flop that they have done the same thing incrementally better ever since. Losing their process advantage took away what put them on top.




> In any case, I am sure that in a generation or two after Blackwell, Nvidia will have some other bright idea for changing the paradigm and its competition will lag behind in adopting it.

This is the most important point. Everyone seems to think that Nvidia just rests on its laurels while everyone and their dog tries to catch up with it. This is just not how (good) business works.


Nvidia once rested on their laurels, and that was the GeForce FX over 20 years ago. Jensen was so pissed that he literally screamed at every person in the company.

But he also made sure that resting would never happen again. Andy Grove from Intel once said that only the paranoid survive, and I bet Jensen is the most paranoid CEO alive. You won't see him that way in public, because in public Jensen is Nvidia's marketeer.

Nvidia also understood early on how important marketing and brand recognition are. They learned it the hard way with the utter failure of their first chip, the NV1, which was a technological masterpiece that no one wanted. With the bad GeForce FX, Nvidia even made marketing videos making fun of themselves, and that helped. ATI didn't crush Nvidia as much as expected because of such activities, despite having the far superior Radeon 9700/9800 series back then.


Nvidia has been very smart to be well prepared, but the emergence of Bitcoin and AI were two huge bits of good luck for them. It's very unlikely that there is another once-in-a-lifetime event that will also benefit Nvidia in that way. Nvidia will be successful in the future, but it will be through more normal, smart business means.


Those are both computational challenges. Nvidia is well positioned for those due to their push into HPC with GPGPU. If there is another “once-in-a-lifetime” computational challenge, it will likely benefit Nvidia too.


I would never bet against the market finding a use for higher performance products.

~"Good fortune favors those who are prepared to take advantage of it", etc.


I've been using Gaudi chips for a little bit and they are totally fine (and the software stack is even pretty good, or at least the happy path is mostly covered for me). For example, I set up training with autocasting, activation checkpointing, fused ops, profiling, etc., without too much trouble. I'll write a long blog post about it soon, but I think their issue with the Gaudi chips is simply making enough and convincing people to buy them before Falcon Shores (which will, I think, be Xe slice based, so more like a better PVC chip than a Gaudi).

In summary, the software story was surprisingly better than I expected (no JAX, though).
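
For anyone who has not used those features, this is roughly the pattern I mean, written as a generic PyTorch sketch rather than Gaudi-specific code (on actual Gaudi hardware the device type and imports differ; the toy model, shapes and hyperparameters here are all made up):

    import torch
    import torch.nn as nn
    from torch.utils.checkpoint import checkpoint

    # Toy residual block; checkpointing recomputes its activations
    # during the backward pass instead of keeping them in memory.
    class Block(nn.Module):
        def __init__(self, dim):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(dim, dim), nn.GELU(), nn.Linear(dim, dim))

        def forward(self, x):
            return x + self.net(x)

    model = nn.Sequential(Block(512), Block(512))
    opt = torch.optim.AdamW(model.parameters(), lr=1e-4)
    x, target = torch.randn(32, 512), torch.randn(32, 512)

    opt.zero_grad()
    # Autocast runs matmul-heavy ops in a lower precision automatically.
    with torch.autocast(device_type="cpu", dtype=torch.bfloat16):
        h = x
        for block in model:
            h = checkpoint(block, h, use_reentrant=False)
        loss = nn.functional.mse_loss(h.float(), target)
    loss.backward()
    opt.step()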


> What made Intel seem unbeatable was its process node advantage. Nvidia does not have fabrication plants, so it is able to get the best process node from whoever has it. Nvidia is therefore not vulnerable to what befell Intel.

It's able to get the best process node from /whoever is willing to sell it to Nvidia/: it's vulnerable (however unlikely) to something very similar -- a competitor with a process advantage.


Exactly this. It's hard to grok why people think the fab is somehow the boat anchor around Intel's neck. No, it was the golden goose that kept Intel ahead, until it didn't.

BK failed to understand that the moat Intel had was the fab. The moat is now gone, and so is the value.



