Hacker News

Basically because it costs performance. You really don't want to serialize all your buffer writes just to pin down the ordering!

This is sort of a deep topic, so it's hard to give a concise answer, but as an example: cuBLAS guarantees determinism, but only for the same architecture and same library version (because the best-performing ordering of operations depends on the architecture and on implementation details). It also does not guarantee determinism when using multiple streams, because thread scheduling is non-deterministic and can change the ordering of operations.

Determinism is something you have to build in from the ground up if you want it. It can cost performance, it won't give you the same results between different architectures, and it's frequently tricky to maintain in the face of common parallel programming patterns.
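The root cause is worth spelling out: floating-point addition is not associative, so any parallel reduction whose accumulation order varies between runs (different thread scheduling, different stream interleaving) can produce bit-different results. A minimal plain-Python sketch of the effect, standing in for two different thread orderings:

```python
# Floating-point addition is not associative, so the order in which
# parallel partial sums are combined changes the final bits.
a, b, c = 0.1, 0.2, 0.3

left = (a + b) + c   # one possible scheduling of the reduction
right = a + (b + c)  # another possible scheduling

print(left)   # 0.6000000000000001
print(right)  # 0.6
print(left == right)  # False
```

On a GPU the same thing happens at scale: thousands of threads each hold a partial sum, and the order in which those partials are combined (e.g. via atomics) is decided by the hardware scheduler at run time.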

Consider this explanation from the PyTorch docs (particularly the bit on CUDA convolutions):

https://pytorch.org/docs/stable/notes/randomness.html
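For reference, the knobs that page documents boil down to a handful of flag settings. A hedged sketch (requires PyTorch installed; these are the documented APIs, but check the linked page for the version you're on, since coverage of ops has changed over releases):

```python
import torch

# Seed the RNGs so random initialization/sampling is repeatable.
torch.manual_seed(0)

# Ask PyTorch to use deterministic algorithms where available,
# and raise an error where only non-deterministic ones exist.
torch.use_deterministic_algorithms(True)

# For CUDA convolutions specifically: disable the benchmark
# autotuner (which can pick different algorithms run-to-run)
# and force the deterministic cuDNN algorithms.
torch.backends.cudnn.benchmark = False
torch.backends.cudnn.deterministic = True
```

Note this buys run-to-run reproducibility on one machine and software stack; per the discussion above, it still won't make results match across different GPU architectures or library versions, and the deterministic algorithm choices can be slower.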



