As long as there is enough consistency between approximating a multiply X*W and Z*Wt (Wt = W transpose), it could plausibly be used in NN training.
Y = X*W is the forward propagation. If Z is an error or derivative of error, Z*Wt is the back propagation.
It's an interesting question how well that would work. Anything that speeds up matrix multiplication in NNs, and in deep learning generally, would be a big deal.
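To make the consistency requirement concrete, here is a minimal sketch (NumPy) of a linear layer that routes both the forward pass X*W and the backward pass Z*Wt through the same multiply routine. approx_matmul is a hypothetical stand-in for whatever fast approximate multiply is being proposed; here it just delegates to an exact product.

import numpy as np

def approx_matmul(a, b):
    # Hypothetical placeholder for a fast approximate multiply.
    # Shown as an exact product; swap in the approximation to test it.
    return a @ b

class ApproxLinear:
    """Linear layer using the same (possibly approximate) multiply
    in both the forward and backward passes."""
    def __init__(self, n_in, n_out, lr=0.01):
        self.W = np.random.randn(n_in, n_out) * 0.01
        self.lr = lr

    def forward(self, X):
        self.X = X
        return approx_matmul(X, self.W)        # Y = X*W (forward propagation)

    def backward(self, dY):
        dX = approx_matmul(dY, self.W.T)       # Z*Wt (back propagation of error)
        dW = approx_matmul(self.X.T, dY)       # gradient w.r.t. the weights
        self.W -= self.lr * dW                 # plain SGD step
        return dX

The point of the sketch: if the approximation errors of X*W and Z*Wt are consistent with each other, the gradient signal stays roughly aligned with the true gradient and training could still converge; if they drift independently, the backward pass no longer matches the forward computation.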