
A biological neural network is certainly not differentiable. If the thing we want to build is not realizable with this technique, why can't we move on from it?

Gradient descent isn't the only way to do this. Evolutionary techniques can explore impossibly large, non-linear problem spaces.

Being able to define any kind of fitness function you want is something of a superpower. You don't have to think in such constrained ways down this path.
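
For illustration, here's a minimal sketch of that idea in Python. Everything in it is an assumption invented for the example (the bit-string encoding, the toy "alternating bits" objective, truncation selection); the point is only that fitness can be any black-box score you can compute, with no gradients anywhere:

    import random

    # Toy, non-differentiable fitness: reward alternating bits.
    # Any scoring rule works here, even discontinuous ones.
    def fitness(bits):
        return sum(1 for a, b in zip(bits, bits[1:]) if a != b)

    # Mutation: flip each bit independently with a small probability.
    def mutate(bits, rate=0.05):
        return [b ^ (random.random() < rate) for b in bits]

    def evolve(length=64, pop_size=50, generations=200):
        pop = [[random.randint(0, 1) for _ in range(length)]
               for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=fitness, reverse=True)
            parents = pop[:pop_size // 5]  # truncation selection: keep top 20%
            pop = parents + [mutate(random.choice(parents))
                             for _ in range(pop_size - len(parents))]
        return max(pop, key=fitness)

    best = evolve()
    print(fitness(best), "out of a possible", 63)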



The issue is that it's still a massive search space.

You can try this yourself: go play Nandgame and beat it, at which point you should be able to build a CPU out of NAND gates. Then set up an RNN with as many layers as the total depth of your NAND-gate circuit, as wide as all of its inputs, with every output fed back into the first input. Then run PSO or a GA on all the weights and see how long it takes to get a fully functioning CPU.
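
To make the scale problem concrete, here's a hedged sketch of plain PSO over the flat weight vector of a tiny network. The 2-4-1 shape and the XOR task are assumptions picked so it runs in seconds; the CPU-sized circuit described above would mean a weight vector with millions of dimensions, where this same loop gets nowhere:

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy task: learn XOR (itself just a few NAND gates).
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([0, 1, 1, 0], dtype=float)

    def forward(w, x):
        # 2-4-1 network; weights unpacked from one flat 17-dim vector.
        W1, b1 = w[:8].reshape(2, 4), w[8:12]
        W2, b2 = w[12:16], w[16]
        h = np.tanh(x @ W1 + b1)
        return 1 / (1 + np.exp(-(h @ W2 + b2)))

    def loss(w):
        return np.mean((forward(w, X) - y) ** 2)

    # Plain PSO: each particle is a candidate weight vector.
    n_particles, dim, iters = 30, 17, 500
    pos = rng.normal(0.0, 1.0, (n_particles, dim))
    vel = np.zeros_like(pos)
    pbest, pbest_loss = pos.copy(), np.array([loss(p) for p in pos])
    gbest = pbest[pbest_loss.argmin()].copy()

    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, 1))
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = pos + vel
        cur = np.array([loss(p) for p in pos])
        improved = cur < pbest_loss
        pbest[improved], pbest_loss[improved] = pos[improved], cur[improved]
        gbest = pbest[pbest_loss.argmin()].copy()

    print("best loss:", pbest_loss.min())  # 17 dims is tractable; a CPU is not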


>A biological neural network is certainly not differentiable.

Biology is biology and has its own constraints. That doesn't necessarily mean a biologically plausible optimizer would be the most efficient or correct approach in silicon.

>If the thing we want to build is not realizable with this technique, why can't we move on from it?

All the biologically plausible optimizers we've fiddled with (and we've fiddled with quite a lot) just perform, results-wise, like gradient descent but worse. We haven't "moved on" because gradient descent is, and continues to be, better.

>Evolutionary techniques can explore impossibly large, non-linear problem spaces.

Sure, with billions of years (and millions of concurrent experiments) on the table.



