Hacker News
ladberg on July 17, 2020 | on: Powerful AI Can Now Be Trained on a Single Compute...
So this basically boils down to keeping your training data in memory? Is there something else I'm missing?
dan-robertson on July 17, 2020
It looks obvious when you write it like that, but I think many people are surprised by just how much slower distributed computations can be compared to non-distributed ones; see, e.g., the COST paper [1].

[1] https://www.usenix.org/system/files/conference/hotos15/hotos...
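The COST paper's single-threaded baselines are roughly in this spirit: a tight loop over an in-memory edge list, with no framework or coordination overhead for a cluster to amortize. Here is a minimal sketch (my own illustration, not code from the paper) of single-threaded PageRank, assuming every node has at least one outgoing edge:

```python
def pagerank(edges, n, iters=20, d=0.85):
    """Plain single-threaded PageRank over an in-memory edge list.

    edges: list of (src, dst) pairs; n: node count.
    Assumes every node has at least one outgoing edge.
    """
    out_degree = [0] * n
    for src, _ in edges:
        out_degree[src] += 1

    ranks = [1.0 / n] * n
    for _ in range(iters):
        contrib = [0.0] * n
        for src, dst in edges:
            # Each node splits its rank evenly among its out-edges.
            contrib[dst] += ranks[src] / out_degree[src]
        ranks = [(1 - d) / n + d * c for c in contrib]
    return ranks

# Tiny 3-node cycle: by symmetry every node converges to rank 1/3.
ranks = pagerank([(0, 1), (1, 2), (2, 0)], n=3)
```

A distributed system running the same computation pays per-iteration synchronization and network cost, which is the overhead the COST metric asks it to justify.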