
tl;dr: problems that are compute-bound but trivially parallelizable tend to be good candidates for GPU computation.

E.g., when you're running a series of small computations on a lot of data, or a lot of small computations on a moderate amount of data.
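
A minimal CUDA sketch of that first case (illustrative only, not from the parent comment): SAXPY, where each thread does one tiny computation on its own element and there are no dependencies between threads, so every shader unit stays busy.

    #include <cuda_runtime.h>
    #include <cstdio>

    // Each thread handles one element; no cross-thread dependencies.
    __global__ void saxpy(int n, float a, const float *x, float *y) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) y[i] = a * x[i] + y[i];
    }

    int main() {
        const int n = 1 << 20;                     // ~1M elements
        float *x, *y;
        cudaMallocManaged(&x, n * sizeof(float));  // unified memory, for brevity
        cudaMallocManaged(&y, n * sizeof(float));
        for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

        // Enough 256-thread blocks to cover all n elements.
        saxpy<<<(n + 255) / 256, 256>>>(n, 3.0f, x, y);
        cudaDeviceSynchronize();

        printf("y[0] = %f\n", y[0]);               // expect 5.0
        cudaFree(x); cudaFree(y);
        return 0;
    }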

GPUs tend to be pretty ineffective for computations with a lot of data dependencies: the individual shader units are slow compared to traditional CPU cores, and the dependencies force the work to be serialized, so most of the GPU sits idle.

Or for ultra-low-latency applications, since moving data to and from the GPU is costly even with direct memory access.
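
A rough sketch of that transfer overhead (again just illustrative; the kernel and sizes are made up): for a tiny workload, the two copies around the kernel typically dominate the end-to-end time, and a plain CPU loop would finish before the first copy returns.

    #include <cuda_runtime.h>
    #include <cstdio>

    __global__ void add_one(float *v, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) v[i] += 1.0f;
    }

    int main() {
        const int n = 1024;                        // tiny, latency-sensitive job
        const size_t bytes = n * sizeof(float);
        float h[1024] = {0};
        float *d;
        cudaMalloc(&d, bytes);

        // The round trip over PCIe is the expensive part here, not the math.
        cudaMemcpy(d, h, bytes, cudaMemcpyHostToDevice);   // host -> device
        add_one<<<(n + 255) / 256, 256>>>(d, n);           // actual work
        cudaMemcpy(h, d, bytes, cudaMemcpyDeviceToHost);   // device -> host

        printf("h[0] = %f\n", h[0]);
        cudaFree(d);
        return 0;
    }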
