Hacker News

Considering the GPU implementation is TensorFlow, I think it's very safe to assume it's the far more optimized of the two.



Is there anything preventing the same (or a similar) algorithmic optimization from being implemented on the GPU, though? IIUC, the new algorithm (running on the CPU) was compared to an existing algorithm (running on both the CPU and GPU) — the new algorithm was never benchmarked on the GPU.
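To make the comparison-matrix point concrete, here is a minimal sketch with stand-in functions (`old_algorithm`, `new_algorithm`, and the problem they solve are illustrative, not from the paper being discussed): three of the four (algorithm, hardware) cells get measured, and the open question is the missing fourth cell, the new algorithm on the GPU.

```python
import time

def old_algorithm(n):
    # Stand-in for the existing baseline: brute-force double loop.
    return sum(i * j for i in range(n) for j in range(n))

def new_algorithm(n):
    # Stand-in for the algorithmically improved version:
    # sum over i, j of i*j equals (sum over i of i) squared.
    s = n * (n - 1) // 2
    return s * s

def bench(fn, n, repeats=3):
    # Report the best of a few timed runs.
    best = float("inf")
    for _ in range(repeats):
        t0 = time.perf_counter()
        fn(n)
        best = min(best, time.perf_counter() - t0)
    return best

n = 300
assert old_algorithm(n) == new_algorithm(n)  # same result, different cost
print(f"old algorithm (CPU): {bench(old_algorithm, n):.6f}s")
print(f"new algorithm (CPU): {bench(new_algorithm, n):.6f}s")
# A GPU baseline of old_algorithm would fill the third cell; the thread's
# question is what the fourth cell -- new_algorithm on the GPU -- would show.
```

The sketch only demonstrates the shape of the benchmark, not the actual workload; the real comparison would swap in the paper's algorithms and a GPU backend such as TensorFlow for the GPU cells.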



