
So this basically boils down to keeping your training data in memory? Is there something else I missed?


It looks obvious when you write it like that, but I think many people are surprised by just how much slower distributed computation can be compared to a non-distributed system. E.g. the COST paper [1].

[1] https://www.usenix.org/system/files/conference/hotos15/hotos...
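A toy sketch of the COST paper's point: once the data fits in memory, a plain single-threaded pass is often faster than a "distributed" version that pays for partitioning and data movement. This is my own illustration, not from the paper; it uses `pickle` round-trips as a stand-in for serialization/network overhead.

```python
import pickle
import time

data = list(range(1_000_000))

# Single machine, data already in memory: just iterate.
t0 = time.perf_counter()
total_local = sum(data)
local_s = time.perf_counter() - t0

# "Distributed" version: partition into shards, serialize each shard
# (standing in for shipping it over the network), deserialize, reduce.
t0 = time.perf_counter()
shards = [data[i::8] for i in range(8)]
wire = [pickle.dumps(s) for s in shards]          # shuffle out
partials = [sum(pickle.loads(b)) for b in wire]   # shuffle in + compute
total_dist = sum(partials)
dist_s = time.perf_counter() - t0

assert total_local == total_dist
print(f"in-memory: {local_s:.3f}s  simulated distributed: {dist_s:.3f}s")
```

Both versions compute the same answer; the difference is purely the coordination and serialization tax, which is exactly the overhead the COST paper measures.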



