Not happening. Nvidia has a stranglehold on deep learning because of CUDA and cuDNN, and I don't see any AMD alternatives to either of them. So I wouldn't bet too much on AMD taking over the deep learning chip market.



Who’s writing bare CUDA though? For most tasks a framework like TensorFlow or PyTorch is good enough.

If AMD could provide a backend for the most popular frameworks then they could skip over the CUDA patent issue completely.

The real problem is that it seems like AMD’s not investing substantially in software teams to make it happen.
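
To illustrate, this is roughly all the GPU awareness a typical framework user ever writes. It's a minimal sketch, and nothing in it is vendor-specific; with a working AMD backend underneath, it would run unchanged:

    import torch

    # Pick an accelerator if one is available; the model code below
    # doesn't care which vendor actually backs the device.
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

    model = torch.nn.Linear(128, 10).to(device)
    x = torch.randn(32, 128, device=device)
    logits = model(x)  # dispatched to whatever backend the device maps to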


In the deep learning world, every major framework works on top of cuDNN, which works on top of CUDA: PyTorch, TensorFlow, you name it.
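
You can see that layering from inside PyTorch itself; these standard torch flags configure the cuDNN layer directly (a trivial sketch):

    import torch

    # cuDNN sits one layer below the framework: PyTorch exposes
    # knobs for it directly under torch.backends.cudnn.
    print(torch.backends.cudnn.is_available())  # True only on a CUDA/cuDNN build
    torch.backends.cudnn.benchmark = True       # let cuDNN autotune conv algorithms
    torch.backends.cudnn.deterministic = False  # allow faster, non-deterministic kernels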

https://github.com/pytorch/pytorch/issues/10657

That is the state of PyTorch support for AMD GPUs.


> Who’s writing bare CUDA though?

I do. Not everything you can do with a CUDA card is deep learning. In fact that's just one of many applications.


Lots of people do. We write CUDA all the time.
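
For the curious, "bare CUDA" means hand-writing kernels like the one below. This sketch uses CuPy's RawKernel purely to keep the host-side code short; the kernel itself (a saxpy, out = a*x + y) is ordinary CUDA C:

    import cupy as cp

    # A hand-written CUDA kernel, compiled and launched directly.
    saxpy = cp.RawKernel(r'''
    extern "C" __global__
    void saxpy(const float a, const float* x, const float* y,
               float* out, const int n) {
        int i = blockDim.x * blockIdx.x + threadIdx.x;
        if (i < n) {
            out[i] = a * x[i] + y[i];
        }
    }
    ''', 'saxpy')

    n = 1 << 20
    x = cp.random.rand(n, dtype=cp.float32)
    y = cp.random.rand(n, dtype=cp.float32)
    out = cp.empty_like(x)

    threads = 256
    blocks = (n + threads - 1) // threads
    # launch: grid dims, block dims, then kernel arguments
    saxpy((blocks,), (threads,), (cp.float32(2.0), x, y, out, cp.int32(n)))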


This. The software lead is just incredible; almost everything uses CUDA.

There has been some progress, but PyTorch still isn't fully functional with ROCm yet, and that feels like a good litmus test.

https://github.com/pytorch/pytorch/issues/10657
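
A quick way to check which backend a given PyTorch build targets, assuming a reasonably recent build (ROCm builds set torch.version.hip, and their devices still show up under the "cuda" name):

    import torch

    # On a CUDA build, torch.version.cuda is set and torch.version.hip is None;
    # on a ROCm build it's the other way around.
    print(torch.version.cuda, torch.version.hip)
    print(torch.cuda.is_available())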


The Apple ecosystem, with its AMD graphics cards and a future Apple GPU, seems like it could put up a fight, or at least keep some software from being CUDA all the way down. AMD also dominates gaming, supplying both major console platforms.

I really don't want just one player here, and I hope the big players get more competition.

I'm also still interested in the Taiwan angle, purely from an economic point of view: how secure are we on that front if all the eggs are in one basket? Hong Kong has fallen, and Taiwan and the South China Sea are in play. That will affect the supply chain.


I think the stranglehold is about to break.

Intel is launching a GPU/deep learning accelerator, and Huawei is considering launching a GPU of its own. PyTorch and TensorFlow work well enough on AMD GPUs, and Google has its custom deep learning ASICs. There is simply too much competition at this point for CUDA to remain the standard.


Is there any chance that upcoming open-source, cross-platform standards like WebGPU could have an effect on this, if tooling around them were built to support writing more GPGPU-focused code?


DLSS is black magic



