
It never occurred to me to consider the performance per watt of CUDA vs C/C++/Fortran on the CPU (instead of only raw performance). So one should consider CUDA only if they want to favor raw performance over cost? (And only if their algorithms yield a significant improvement when implemented to run on a GPU.)



CUDA on consumer GPUs, for workloads that execute efficiently on those architectures, can provide about a 2x benefit in energy efficiency and somewhat more in acquisition cost -- provided there is enough work at each stage of your algorithm to amortize the kernel launch latency and the PCIe transfer latency and bandwidth. If you buy server-grade GPUs, the cost proposition is similar to the power one: at best a ~2x benefit.
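A back-of-envelope way to see the amortization point: offload only pays off when the compute time saved exceeds the fixed launch and transfer overheads. This is a minimal sketch of that model; all the throughput and latency numbers are hypothetical placeholders, not measurements.

```python
def gpu_worthwhile(bytes_moved, flops,
                   cpu_gflops=100.0,      # assumed CPU throughput
                   gpu_gflops=2000.0,     # assumed GPU throughput
                   pcie_gbps=16.0,        # assumed PCIe bandwidth, GB/s
                   launch_latency_s=10e-6):  # assumed kernel launch overhead
    """Estimate whether offloading a kernel beats running it on the CPU.

    Compares CPU compute time against GPU compute time plus the fixed
    costs of kernel launch and PCIe data movement.
    """
    cpu_time = flops / (cpu_gflops * 1e9)
    transfer_time = bytes_moved / (pcie_gbps * 1e9)
    gpu_time = launch_latency_s + transfer_time + flops / (gpu_gflops * 1e9)
    return gpu_time < cpu_time

# Tiny kernel: fixed overheads dominate, so the CPU wins.
print(gpu_worthwhile(bytes_moved=1e4, flops=1e6))   # False
# Large, compute-dense kernel: overheads amortize, so the GPU wins.
print(gpu_worthwhile(bytes_moved=1e8, flops=1e12))  # True
```

The same structure explains the "enough work at each stage" caveat: the launch and transfer terms are roughly fixed per offload, so only kernels with a large compute term come out ahead.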



