
Nope. It's just that GPUs carried the mantle. If you have a massively parallel number-crunching application, you should bite the bullet and port it to the GPU, and you'll see a truly massive FLOP/$ increase. And unlike CPUs, GPU performance is still rising 35-45% per generation, although that will probably slow if NVidia gets too far ahead of the competition.
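
To give an idea of what that kind of port looks like, here's a minimal CUDA sketch of a hypothetical element-wise kernel, one thread per element. The name scale_add and the sizes are just illustrative, not anything from a real codebase:

  #include <cstdio>
  #include <cuda_runtime.h>

  // Hypothetical number-crunching step: y[i] = a*x[i] + y[i], one thread per element.
  __global__ void scale_add(float a, const float *x, float *y, int n) {
      int i = blockIdx.x * blockDim.x + threadIdx.x;
      if (i < n) y[i] = a * x[i] + y[i];
  }

  int main() {
      const int n = 1 << 20;
      float *x, *y;
      cudaMallocManaged(&x, n * sizeof(float));   // unified memory keeps the sketch short
      cudaMallocManaged(&y, n * sizeof(float));
      for (int i = 0; i < n; i++) { x[i] = 1.0f; y[i] = 2.0f; }

      int threads = 256;
      int blocks = (n + threads - 1) / threads;   // enough blocks to cover all n elements
      scale_add<<<blocks, threads>>>(3.0f, x, y, n);
      cudaDeviceSynchronize();

      printf("y[0] = %f\n", y[0]);                // expect 5.0
      cudaFree(x);
      cudaFree(y);
      return 0;
  }

If your inner loop already looks like "do the same arithmetic independently on millions of elements", the port is mostly mechanical; the hard part is usually keeping data on the device between kernels.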

Also, if your goal is optimal cost/FLOP and you are computing pretty much constantly, then you shouldn't be using the cloud, to be honest. If you are I/O-limited or your usage comes in bursts, then maybe, but on cost/FLOP the kings are still consumer GPUs and Threadrippers, by a very wide margin.
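
Back-of-the-envelope, that comparison is just amortized cost per sustained FLOP. A tiny sketch of the arithmetic, where every number is a made-up placeholder (not real prices or specs, plug in your own):

  #include <cstdio>

  // Effective $/hour of a box you own: purchase price amortized over its useful
  // life, plus electricity. Compare that against a cloud instance's hourly rate.
  double owned_cost_per_hour(double purchase_usd, double lifetime_hours,
                             double watts, double usd_per_kwh) {
      return purchase_usd / lifetime_hours + (watts / 1000.0) * usd_per_kwh;
  }

  int main() {
      // Placeholder inputs only; substitute your own hardware and cloud numbers.
      double owned  = owned_cost_per_hour(2000.0, 3.0 * 365 * 24, 400.0, 0.15);
      double cloud  = 2.50;    // hypothetical on-demand $/hour for a comparable GPU
      double tflops = 50.0;    // hypothetical sustained throughput of either option

      printf("owned: $%.3f/hour  ($%.4f per TFLOP-hour)\n", owned, owned / tflops);
      printf("cloud: $%.3f/hour  ($%.4f per TFLOP-hour)\n", cloud, cloud / tflops);
      // The gap only matters if the machine is actually busy most of the time,
      // which is the "computing pretty much constantly" caveat above.
      return 0;
  }

The amortized number only wins if utilization is high; a box that sits idle most of the month loses the comparison, which is why bursty workloads can still make sense in the cloud.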



