
For scientific workloads, double precision is a must-have. The ~7 significant digits of FP32 are not enough. In my lab, we haven't upgraded our Kepler-based GPUs since 2013 for this reason.
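
For a concrete sense of that 7-digit limit, here's a minimal sketch (assuming NumPy; the values are illustrative, not from the thread):

    import numpy as np

    # FP32 has a 24-bit significand, roughly 7 decimal digits.
    # Once a value already uses all of them, small additions vanish:
    big = np.float32(1.0e8)               # ~8 digits on its own
    print(big + np.float32(1.0) == big)   # True: the 1.0 is absorbed

    # FP64's 53-bit significand (~16 digits) keeps the contribution:
    print(np.float64(1.0e8) + np.float64(1.0))  # 100000001.0

In a long simulation these absorbed contributions compound, which is why FP64 hardware still matters for this kind of work.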



I agree, but the article has an answer to this:

> Since I don’t plan to run any DNA analysis or fintech simulations on my smartphone anytime soon, I am very satisfied having FP32/FP16 precision in mobile right now. And so should you.


You are right; I thought vessenes meant "FP64 seems like a very small use case for most of the parallelized workflows I can imagine" applied to any platform.


That makes sense. I'm assuming the majority of GPUs are not running scientific workloads, but rather deep learning and matrix ops for finance.



