
As long as there is enough consistency between approximating the multiply X*W and approximating Z*Wt (where Wt is the transpose of W), it is possible this could be used in NN training.

Y = X*W is the forward propagation. If Z is the error (or the derivative of the error), Z*Wt is the back propagation.
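
A minimal sketch of the idea (approx_matmul is a hypothetical stand-in for whatever approximation scheme is under discussion; the exact NumPy product is used here only so the sketch runs):

  import numpy as np

  def approx_matmul(A, B):
      # Placeholder for the approximate multiply; a real scheme would
      # replace this exact product with its approximation.
      return A @ B

  X = np.random.randn(32, 64)    # batch of inputs
  W = np.random.randn(64, 10)    # weight matrix

  # Forward propagation: Y = X*W
  Y = approx_matmul(X, W)

  # Back propagation: Z is the error signal (dL/dY)
  Z = np.random.randn(*Y.shape)
  dX = approx_matmul(Z, W.T)     # gradient w.r.t. inputs: Z*Wt
  dW = approx_matmul(X.T, Z)     # gradient w.r.t. weights

The question above amounts to asking whether the same approximation stays consistent enough across the forward call and the two transposed backward calls for gradient descent to still converge.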

It's an interesting question how well that would work. Anything that speeds up matrix multiplication in NNs, and in deep learning generally, would be a big deal.
