
There are a lot of these apparently. I wonder why a CPU implementation would make sense today versus a GPU one?


Some things are not parallel enough. GPUs don't feature thousands of independent cores; instead, they have groups of cores executing the same instructions on different data, in lockstep. Some algorithms, e.g. shading triangles, raytracing, or training neural networks, work great on GPUs. Others are very tricky to port to that model; see the sketch below.
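
For instance, here's a toy C++ sketch (purely illustrative, names made up) of per-item work with data-dependent branches. Each CPU core just follows its own path; on a GPU, the lanes of a warp (32 on current NVIDIA hardware) run in lockstep, so all three paths get executed with most lanes masked off, wasting throughput:

    #include <cstdio>
    #include <vector>

    // Independent per-item work that branches on the data itself. On a CPU,
    // each core takes exactly one path per item. In a SIMT warp, divergent
    // branches are serialized: both sides run, with inactive lanes masked.
    int process(int x) {
        if (x % 3 == 0)      return x * 2;   // path A
        else if (x % 3 == 1) return x * x;   // path B
        else                 return x - 7;   // path C
    }

    int main() {
        std::vector<int> data{1, 2, 3, 4, 5, 6};
        long sum = 0;
        for (int x : data) sum += process(x);
        std::printf("%ld\n", sum);
    }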

Other things are not suitable for GPUs performance-wise. For example, if you have an octree data structure and you query it a lot, a CPU will perform better, because of the cache hierarchy (on a CPU, the top level[s] of the tree stay in L1, the levels below in L2, etc.; GPUs don't have an equivalent) and because of branch prediction.
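
To make that concrete, a minimal octree point query might look like this (hypothetical layout, just a sketch). The traversal is a chain of dependent loads plus data-dependent branches, which is exactly what CPU caches and branch predictors are built for:

    #include <array>
    #include <cstdio>
    #include <memory>

    // Hypothetical pointer-based octree node. The hot top levels fit in
    // L1/L2 on a CPU, and the predictor learns the common descent paths.
    struct Node {
        bool leaf = true;
        int  value = 0;
        std::array<std::unique_ptr<Node>, 8> child;  // one per octant
    };

    int query(const Node* n, float x, float y, float z,
              float cx, float cy, float cz, float half) {
        while (!n->leaf) {
            // Pick the octant containing (x, y, z); data-dependent branch.
            int octant = (x > cx) | ((y > cy) << 1) | ((z > cz) << 2);
            half *= 0.5f;
            cx += (x > cx ? half : -half);
            cy += (y > cy ? half : -half);
            cz += (z > cz ? half : -half);
            const Node* c = n->child[octant].get();  // dependent load
            if (!c) break;
            n = c;
        }
        return n->value;
    }

    int main() {
        Node root;  // trivial single-leaf tree, just to run the query
        root.value = 42;
        std::printf("%d\n", query(&root, 0.1f, 0.2f, 0.3f, 0, 0, 0, 1.0f));
    }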

Finally, not everything is compute bound. If your CPU code is fast enough to saturate IO bandwidth, be it HDD or network, you're fine unless you have a million servers. If you do have a million of them, GPUs might still be worth it, because they might be more power efficient, slightly decreasing your huge electricity bills.
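
Rough back-of-envelope, with assumed, illustrative numbers:

    #include <cstdio>

    int main() {
        // Assumed figures: a PCIe 4.0 NVMe drive reads ~7 GB/s sequentially,
        // and suppose one core scans/parses ~2 GB/s. Then ~4 cores already
        // saturate the drive, and faster compute can't finish the job sooner.
        const double drive_gbps = 7.0;  // assumed sequential read bandwidth
        const double core_gbps  = 2.0;  // assumed per-core processing rate
        std::printf("cores to saturate IO: %.1f\n", drive_gbps / core_gbps);
    }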



