
If you're paying for CPU/GPU hours, the more you parallelize, the faster you get your result for the same money (at least until you hit the parallelization limit of your network, but NNs are generally very parallelizable). And the larger your training dataset, the better your results.
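To make that concrete, here's a toy cost calculation in Java (not DL4J-specific). The hourly rate, single-GPU training time, and 90% scaling efficiency are all hypothetical assumptions, just to show that under near-linear scaling the wall-clock time drops with more GPUs while the total bill stays roughly flat:

    public class GpuCostExample {
        public static void main(String[] args) {
            double hourlyRatePerGpu = 3.0;   // assumed cloud price, USD per GPU-hour (hypothetical)
            double singleGpuHours = 64.0;    // assumed wall-clock time on one GPU (hypothetical)

            for (int gpus : new int[] {1, 4, 8, 16}) {
                // assume 90% parallel efficiency beyond one GPU (hypothetical)
                double efficiency = (gpus == 1) ? 1.0 : 0.9;
                double wallClock = singleGpuHours / (gpus * efficiency);
                double cost = wallClock * gpus * hourlyRatePerGpu;
                System.out.printf("%2d GPUs: %6.1f h wall-clock, $%7.2f total%n",
                        gpus, wallClock, cost);
            }
        }
    }

With those numbers, 8 GPUs finish in about 8.9 hours instead of 64, for roughly an 11% cost premium over the single-GPU run.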



I am aware of that. The question was not whether more than 8 GPUs would be useful in ideal circumstances; it was how many people actually use that functionality in frameworks other than DL4J.


It'd be a nice statistic to know. Could be dangerous too, as in the infamous "640 kilobytes should be enough for everyone".




