Are you referring to http://arxiv.org/pdf/1411.0247v1.pdf ? That doesn't mention sparse backward connections, but it does show that feedback weights that don't compute the actual derivative dloss/din can still support learning. The network 'learns to learn', so to speak.
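To make the idea concrete, here is a minimal sketch (mine, not the paper's code) of feedback alignment on a toy two-layer tanh regression network; the sizes, learning rate, and target map are arbitrary assumptions. The only change from ordinary backprop is that the hidden-layer error is carried by a fixed random matrix B instead of the transpose of the forward weights W2:

    # Feedback alignment sketch (cf. arXiv:1411.0247), not the authors' code.
    import numpy as np

    rng = np.random.default_rng(0)
    n_in, n_hid, n_out = 30, 20, 10

    W1 = rng.normal(0, 0.1, (n_hid, n_in))   # forward weights, layer 1 (learned)
    W2 = rng.normal(0, 0.1, (n_out, n_hid))  # forward weights, layer 2 (learned)
    B  = rng.normal(0, 0.1, (n_hid, n_out))  # fixed random feedback weights (never updated)

    def tanh_deriv(a):
        return 1.0 - np.tanh(a) ** 2

    # Toy regression target: a fixed random linear map of the input.
    T = rng.normal(0, 0.1, (n_out, n_in))
    lr = 0.05

    for step in range(2000):
        x = rng.normal(0, 1, (n_in, 1))
        y_target = T @ x

        # Forward pass
        a1 = W1 @ x
        h = np.tanh(a1)
        y = W2 @ h

        # Output error under a squared-error loss
        e = y - y_target

        # Backprop would use W2.T @ e here; feedback alignment uses B @ e instead.
        delta_hidden = (B @ e) * tanh_deriv(a1)

        W2 -= lr * e @ h.T
        W1 -= lr * delta_hidden @ x.T

The forward weights gradually align with the fixed feedback weights, which is why the random B ends up delivering useful error signals even though it never equals W2 transposed.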



I also dimly remember that sparsity in the random fixed backward connections still works. There is actually a figure about that in the paper you've linked to.
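For what it's worth, making that feedback matrix sparse only takes a one-time mask at initialization; a hypothetical continuation of the sketch above, keeping roughly 10% of the connections:

    # Hypothetical: zero out ~90% of the random feedback connections once,
    # then train exactly as before, using B_sparse in place of B.
    sparsity = 0.9
    mask = (rng.random(B.shape) > sparsity).astype(float)
    B_sparse = B * mask  # fixed sparse random feedback matrix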

Interestingly, feedback alignment is also patented, but it is unclear whether it is actually useful for anything beyond explaining neuroscience. To my knowledge there has been no practical application of it in two years.



