It's very wordy and takes a lot of reading, but he does have a pretty solid point. 1) He is talking about enterprise computing, and makes that clear with:
"
Up until now Intel has held a dominant monopoly over Enterprise computing for many years, successfully fending off all challengers to their supremacy in the Enterprise computing space. This dominance is ending this year and the market sees it coming.
"
So the integrated graphics you mention are irrelevant.
2) Then at the end he lists why he thinks that, with links:
Softbank bought ARM and funded NVIDIA, who announced an ARM & NVIDIA integrated enterprise computing product. IBM is supporting NVIDIA with a POWER and NVIDIA integrated enterprise computing product, and AMD is supporting NVIDIA in Ryzen by providing lots of PCIe bandwidth to the graphics card to support compute tasks.
Yes, at least for gaming. (Don't know about DNNs.) Single precision was the only kind GPUs supported until CUDA happened.
Around 2011, I got my feet wet with CUDA and tried to calculate quantum waveforms (using a method that is mostly matrix multiplications and FFTs). I eventually went back to doing the work on the CPU, because GPU memory in the systems I had access to was too small (256 MB), which restricted me to one job at a time, whereas the CPU (a contemporary i7) had enough cores and memory to run 4 jobs in parallel. I also needed double precision, which the GPU could only execute at a tenth the speed of a single-precision job. On top of that, I was restricted to running GPU jobs during the night, since those systems were desktops that were also used for classes: whenever one of my calculations ran, it occupied the GPU completely, thus rendering the graphical login unusable.
I reckon that the situation would look much more favorable for the GPU today, especially because of the larger memory sizes and because double-precision speed has caught up. But yeah, the most common uses only need single precision.
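To make the FP32-vs-FP64 gap concrete: since my workload was dominated by matrix multiplications, something like the rough cuBLAS timing sketch below (not my original code, just an illustration; matrix size and iteration count are arbitrary) shows how you'd measure it. On consumer cards of that era the double-precision run came out roughly 10x slower; on current hardware the gap depends heavily on the chip. Build with "nvcc gemm_timing.cu -lcublas".

#include <cstdio>
#include <cublas_v2.h>
#include <cuda_runtime.h>

// Returns elapsed milliseconds between two recorded events.
static float elapsed_ms(cudaEvent_t start, cudaEvent_t stop) {
    float ms = 0.0f;
    cudaEventElapsedTime(&ms, start, stop);
    return ms;
}

int main() {
    const int n = 2048, iters = 10;
    cublasHandle_t h;
    cublasCreate(&h);

    // One set of buffers sized for double precision, reused for both runs
    // (zero-filled data is fine: GEMM does the same work regardless).
    void *A, *B, *C;
    size_t bytes = (size_t)n * n * sizeof(double);
    cudaMalloc(&A, bytes);
    cudaMalloc(&B, bytes);
    cudaMalloc(&C, bytes);
    cudaMemset(A, 0, bytes);
    cudaMemset(B, 0, bytes);

    cudaEvent_t t0, t1;
    cudaEventCreate(&t0);
    cudaEventCreate(&t1);

    // Single-precision GEMM loop.
    float fa = 1.0f, fb = 0.0f;
    cudaEventRecord(t0);
    for (int i = 0; i < iters; ++i)
        cublasSgemm(h, CUBLAS_OP_N, CUBLAS_OP_N, n, n, n, &fa,
                    (const float*)A, n, (const float*)B, n, &fb, (float*)C, n);
    cudaEventRecord(t1);
    cudaEventSynchronize(t1);
    printf("FP32 GEMM: %.1f ms\n", elapsed_ms(t0, t1));

    // Double-precision GEMM loop.
    double da = 1.0, db = 0.0;
    cudaEventRecord(t0);
    for (int i = 0; i < iters; ++i)
        cublasDgemm(h, CUBLAS_OP_N, CUBLAS_OP_N, n, n, n, &da,
                    (const double*)A, n, (const double*)B, n, &db, (double*)C, n);
    cudaEventRecord(t1);
    cudaEventSynchronize(t1);
    printf("FP64 GEMM: %.1f ms\n", elapsed_ms(t0, t1));

    cudaEventDestroy(t0);
    cudaEventDestroy(t1);
    cudaFree(A); cudaFree(B); cudaFree(C);
    cublasDestroy(h);
    return 0;
}

The ratio of the two printed times is the effective FP64 penalty for a compute-bound workload; a memory-bound kernel would mostly show the 2x difference in data size instead.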
The GeForce 500 series came out in 2010 (https://en.wikipedia.org/wiki/GeForce_500_series) and NONE of them had less than 1 GB of memory. I don't know what GPU you used, but it was old technology even at that point.
Probably. Whoever bought those machines probably didn't realize that GPU performance was quickly becoming a relevant metric for scientific computation.
Actual translation: SAP and NVIDIA partner to milk the fad for all it's worth -- it won't amount to much in the grand scheme of things, as it's an inconsequential part of enterprise computing.