We always knew that neurons did way more than just integrate voltage and produce spikes. The problem with testing what those things are has largely been technological. Only quite recently have we been able to electrically isolate dendrites and axons for recording, and it's slowly becoming tractable to image entire neurons at high temporal resolution using voltage-sensitive fluorescent indicators. So the field of dendritic and other sub-neuronal computation is arguably in its infancy.

The other things we figured out in the meantime relate to different types of neurotransmission (e.g. volume neurotransmission), active processes in axons and dendrites like vesicular trafficking, synaptic tagging and capture (if it turns out to be real), and all kinds of weird types of plasticity. Basically, a neuron's function is a lot more nuanced than the simplistic "integrate and fire" idea.
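For reference, here's a minimal Python sketch of the textbook leaky integrate-and-fire model, i.e. the simplistic picture being contrasted here. The parameter values are illustrative defaults, not tied to any real neuron type:

    import numpy as np

    def lif(input_current, dt=0.1, tau=10.0, v_rest=-70.0,
            v_thresh=-55.0, v_reset=-75.0, r_m=10.0):
        """Leaky integrate-and-fire: integrate voltage, spike past threshold."""
        v = v_rest
        spike_times = []
        for step, i_in in enumerate(input_current):
            # Voltage leaks toward rest while integrating the input current.
            v += (dt / tau) * (v_rest - v + r_m * i_in)
            if v >= v_thresh:              # threshold crossed: emit a spike
                spike_times.append(step * dt)
                v = v_reset                # hard reset discards sub-threshold detail
        return spike_times

    # A constant supra-threshold current yields regular, stereotyped spiking.
    print(lif(np.full(1000, 2.0)))

Everything listed above (dendritic nonlinearities, trafficking, tagging, plasticity) lives in the detail this model throws away at the reset line.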

Artificial neural networks (in the deep learning sense) therefore don't have much to do with biological ones in functional terms. It really is just regression with a lot of nodes and layers, plus some bells and whistles. That doesn't mean they're not powerful in their own right; they're just not comparable to brain circuits: they don't do the same kind of thing or solve the same kinds of tasks.
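To make the "regression with layers" point concrete, here's a toy sketch (the shapes, the ReLU, and the sin() target are all arbitrary assumptions): freeze a random first layer as a feature map, then fit the output layer by ordinary least squares. It's literally linear regression on nonlinear features:

    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.uniform(-1.0, 1.0, size=(200, 1))
    y = np.sin(3.0 * x)                        # arbitrary target function

    # "Hidden layer": a random affine map plus a ReLU nonlinearity.
    W1 = rng.normal(size=(1, 32))
    b1 = rng.normal(size=32)
    h = np.maximum(0.0, x @ W1 + b1)           # (200, 32) feature matrix

    # "Output layer": ordinary least squares from features to target.
    w2, *_ = np.linalg.lstsq(h, y, rcond=None)
    print("train MSE:", np.mean((h @ w2 - y) ** 2))

Training a full network by gradient descent just fits all the layers jointly instead of only the last one, against the same kind of squared-error loss.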



