
Looking 10 years into the future, do we really need x86 though? Is it not possible that Intel will lose and CISC will become basically obsolete? (Yeah, I'm ignoring AMD here for no good reason, but we were talking about nVidia vs Intel anyway.)



Looks somewhat unlikely. Other architectures may become mainstream, and they may be more energy efficient than x86 at the same performance for mainstream use, but pure SIMD computation is underrated IMHO.

Yes, AVX has a clock penalty, but if your code is math-heavy (scientific, simulation, etc.), it's still extremely convenient in some scenarios.
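
To make that concrete, here is a minimal sketch of the kind of hand-vectorized loop this is about (illustration only, not from any real codebase; assumes a CPU with AVX and a compiler invoked like gcc -O3 -mavx axpy.c):

    /* y[i] += a * x[i], eight floats per iteration using AVX intrinsics. */
    #include <immintrin.h>
    #include <stdio.h>
    #include <stddef.h>

    static void axpy_avx(float a, const float *x, float *y, size_t n)
    {
        __m256 va = _mm256_set1_ps(a);           /* broadcast a into all 8 lanes */
        size_t i = 0;
        for (; i + 8 <= n; i += 8) {
            __m256 vx = _mm256_loadu_ps(x + i);  /* unaligned load of 8 floats */
            __m256 vy = _mm256_loadu_ps(y + i);
            vy = _mm256_add_ps(vy, _mm256_mul_ps(va, vx));
            _mm256_storeu_ps(y + i, vy);
        }
        for (; i < n; i++)                       /* scalar tail for leftovers */
            y[i] += a * x[i];
    }

    int main(void)
    {
        float x[20], y[20];
        for (int i = 0; i < 20; i++) { x[i] = (float)i; y[i] = 1.0f; }
        axpy_avx(2.0f, x, y, 20);
        printf("y[19] = %.1f\n", y[19]);         /* expect 1 + 2*19 = 39.0 */
        return 0;
    }

One instruction does the work of eight scalar multiplies and adds, which is where the "extremely convenient for math-heavy code" part comes from.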

GPUs are not ideal for "streaming data processing" or intermittent processing because their setup and startup time is still measured in seconds. You also need to transfer the data to the GPU first if you want full speed. In CPU computing this overhead is nonexistent.
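
As a rough illustration of that transfer cost (a sketch only; assumes the CUDA toolkit is installed and the file is built with something like nvcc -O3 copy_cost.c -o copy_cost), this times just the host-to-device copy, before any kernel has even run:

    #include <cuda_runtime.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    int main(void)
    {
        size_t bytes = (size_t)256 << 20;        /* 256 MiB payload */
        char *host = (char *)malloc(bytes);
        memset(host, 1, bytes);

        char *dev = NULL;
        cudaMalloc((void **)&dev, bytes);        /* first CUDA call also pays the
                                                    one-time context startup cost */

        cudaEvent_t start, stop;
        cudaEventCreate(&start);
        cudaEventCreate(&stop);

        cudaEventRecord(start, 0);
        cudaMemcpy(dev, host, bytes, cudaMemcpyHostToDevice);  /* the transfer itself */
        cudaEventRecord(stop, 0);
        cudaEventSynchronize(stop);

        float ms = 0.0f;
        cudaEventElapsedTime(&ms, start, stop);
        printf("H2D copy of %zu MiB: %.2f ms\n", bytes >> 20, ms);

        cudaFree(dev);
        free(host);
        return 0;
    }

The point is only that the data has to get across the bus before the GPU can touch it; a CPU SIMD loop starts on the data where it already sits.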

I develop a scientific application, and we've seen that with the improvements in the FPU and SIMD pipelines across generations, a 2GHz core can match a 3.7GHz one in per-core performance in some cases. This is insane. This is with a simple compilation using -O3 only; -march and -mtune were left out intentionally.
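
For context, the kind of loop that benefits is nothing exotic. A toy example (not from the actual application) that gcc already auto-vectorizes at plain -O3, with no -march or -mtune, so it only uses the baseline ISA's SIMD width:

    /* Element-wise polynomial evaluation; no reduction, so -O3 alone is
     * enough for gcc to emit SIMD code (e.g. gcc -O3 kernel.c).
     * Adding -march=native would let it pick wider vectors, which is
     * exactly what was deliberately left out above. */
    #include <stdio.h>

    static void poly(const float *x, float *y, int n)
    {
        for (int i = 0; i < n; i++)
            y[i] = 2.0f * x[i] * x[i] + 3.0f * x[i] + 1.0f;
    }

    int main(void)
    {
        float x[1024], y[1024];
        for (int i = 0; i < 1024; i++) x[i] = 1.0f;
        poly(x, y, 1024);
        printf("%.1f\n", y[0]);                  /* expect 6.0 */
        return 0;
    }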

Unless GPUs become as transparent to use as CPUs, another architecture will need to catch up with or surpass x86 at the SIMD / pure math level to replace it completely.




