
It's very wordy and takes a lot of reading, but he does have a pretty solid point. First, he is talking about enterprise computing, and makes that clear with: "Up until now Intel has held a dominant monopoly over Enterprise computing for many years, successfully fending off all challengers to their supremacy in the Enterprise computing space. This dominance is ending this year and the market sees it coming." So the integrated graphics you mention are irrelevant.

Then at the end he lists, with links, the reasons he thinks that: SoftBank bought ARM and invested in NVIDIA, which announced an integrated ARM-and-NVIDIA enterprise computing product; IBM is supporting NVIDIA with an integrated POWER-and-NVIDIA enterprise computing product; and AMD is supporting NVIDIA in Ryzen by providing plenty of PCIe bandwidth to the graphics card for compute tasks.




Except AMD owns ATI, which is their own GPU brand. So no, they're doing that because they want to move GPUs, and they'll want those GPUs to be AMD ones.


Also, it is a bit of a niche, but AMD's single-precision compute is typically a better value than NVIDIA's (single-precision FLOPS per dollar, counting both capital and operating cost).
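
As a rough sketch of what that metric means in practice (every card figure below is an illustrative placeholder, not a real price or spec; plug in numbers from current listings):

    # Back-of-the-envelope single-precision FLOPS-per-dollar comparison.
    # Every figure below is an illustrative placeholder, not a real quote.

    def sp_flops_per_dollar(peak_tflops, purchase_price, watts, hours, usd_per_kwh=0.12):
        """Peak single-precision FLOPS per dollar, counting capital plus energy cost."""
        energy_cost = watts / 1000.0 * hours * usd_per_kwh  # electricity over the period
        total_cost = purchase_price + energy_cost
        return peak_tflops * 1e12 / total_cost

    # Two hypothetical cards run flat out for one year (8760 hours).
    card_a = sp_flops_per_dollar(peak_tflops=6.0, purchase_price=250, watts=185, hours=8760)
    card_b = sp_flops_per_dollar(peak_tflops=4.5, purchase_price=250, watts=120, hours=8760)
    print(f"card A: {card_a:.2e} FLOPS/$   card B: {card_b:.2e} FLOPS/$")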


AMD GPUs are currently nearly impossible to buy because of Ethereum mining. They are extremely scarce, which has been driving prices up.


By single precision, do you mean 32-bit floating-point computation?

Probably not, but if so, isn't that what both computer gaming and deep learning need most?


Yes, at least for gaming. (Don't know about DNN.) Single-precision is the only kind that GPUs supported until CUDA happened.

Around 2011, I got my feet wet in CUDA and tried to calculate quantum waveforms (using a method that is mostly matrix multiplications and FFTs). I eventually went back to doing stuff on the CPU because GPU memory was too small in the systems that I had access to (256 MB), which restricted me to one job at a time, whereas the CPU (a contemporary i7) had enough cores and memory to do 4 jobs in parallel. And I needed double precision, which the GPU could only execute at a tenth the speed of a single-precision job. Also, with the GPU, I was restricted to running jobs during the night, since those systems were desktops that were also used for classes. Whenever one of my calculations ran, it would occupy the GPU completely, thus rendering the graphical login unusable.

I reckon that the situation would look much more favorable for the GPU today, esp. because of the larger memory sizes and because double-precision speed has caught up. But yeah, the most common uses need only single precision.
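
For anyone curious, a minimal sketch of that kind of comparison with today's tooling would look something like the following (CuPy here, which is obviously not what was available back then; it assumes a CUDA-capable GPU with CuPy installed and is purely illustrative):

    # Rough single- vs double-precision throughput check on the GPU, using the
    # same kind of workload as above (matrix multiplications and FFTs).
    # Assumes a CUDA-capable GPU with CuPy installed.
    import time
    import cupy as cp

    def bench(dtype, n=2048, reps=10):
        a = cp.random.random((n, n)).astype(dtype)
        b = cp.random.random((n, n)).astype(dtype)
        cp.cuda.Device().synchronize()      # make sure setup has finished
        start = time.perf_counter()
        for _ in range(reps):
            c = a @ b                       # matrix multiplication
            c = cp.fft.fft2(c)              # 2D FFT of the result
        cp.cuda.Device().synchronize()      # wait for the GPU to drain
        return (time.perf_counter() - start) / reps

    for dt in (cp.float32, cp.float64):
        print(dt.__name__, f"{bench(dt) * 1e3:.1f} ms/iteration")

The measured float64/float32 gap mostly reflects the FP32:FP64 ratio of whatever card you run it on.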


The GeForce 500 series came out in 2010 (https://en.wikipedia.org/wiki/GeForce_500_series) and NONE of them had less than 1 GB of memory. I don't know what GPU you used, but it was old technology even at that point.


Probably. Whoever bought those machines likely didn't realize that GPU performance was quickly becoming a relevant metric for scientific computation.


Vega will even have half precision!


ATI hasn't existed for over 10 years.


Enterprise is just a euphemism for clunky corporate IT platforms like SAP/Oracle/SharePoint/Windows, quite distant from Nvidia's field.


Just a month ago: "NVIDIA and SAP Partner to Create a New Wave of AI Business Applications" https://blogs.nvidia.com/blog/2017/05/10/nvidia-sap-partner/


Actual translation: SAP and NVIDIA partner to milk the fad for all it's worth -- it won't amount to much in the grand scheme of things, as it's an inconsequential part of enterprise computing.



