Computational Power Found in the Arms of Neurons (quantamagazine.org)
153 points by reubenswartz on Jan 16, 2020 | 75 comments



Knowing little about neuroscience outside of biology class, I've always wondered why neurons would be so simple as to just compute weighted sums and thresholds.

I mean, there are unicellular creatures that can do incredibly complex things, and neurons, like all cells, are ultimately the descendants of such unicellular organisms, "teamed up" into a multi-cellular creature. They have a lot of internal structure, organelles, whatnot. Modeling them as simple switches seems a stretch.

My point is, overall this doesn't seem so surprising to me as a layman. (Which doesn't mean more than just literally that. I'm not claiming laypeople should decide scientific questions from their armchairs.)


We always knew that neurons did way more than just integrate voltage and produce spikes. The problem with testing what those things are has largely been technological. Only quite recently have we been able to electrically isolate dendrites and axons for recording, and it's slowly becoming tractable to image entire neurons with high temporal resolution using voltage-sensitive fluorescent indicators. So the field of dendritic and other sub-neuronal computation is arguably in its infancy.

The other things we figured out in the meantime relate to different types of neurotransmission (e.g. volume neurotransmission), active processes in axons and dendrites like vesicular trafficking, synaptic tag-and-capture (if it turns out to be true) and all kinds of weird types of plasticity. Basically a neuron's function is a lot more nuanced than the simplistic "integrate and fire" idea.

Artificial neural networks (in the deep learning sense) therefore don't really have much of anything to do with biological ones function-wise. It really is just regression with a lot of nodes and layers, and maybe some bells and whistles. That doesn't mean they're not powerful in their own right, just that they're not comparable to brain circuits: they don't do the same kind of thing, nor solve the same kinds of tasks.


This is an intuition that many neuroscientists have shared; it's just that direct experimental evidence of 1) it happening at all in a non-trivial way and 2) it being relevant to the behavior of an animal is somewhat scarce.

Plus, the sum and threshold model actually gets you pretty far, as evidenced by artificial neural networks, so there might be some resistance to adding complexity that might not be necessary.

Edit: Google "dendritic computation" if you want to dig deeper.


clarifying that dendritic computation has been studied for decades, this one is just a new type of dendritic spike


Also: spiking neural networks. Doing useful stuff with them has been a work-in-progress so far though.


Yeah, spiking neural networks are a tough nut to crack. Check out Nengo if you're interested in learning more
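
For anyone curious what the basic unit of an SNN looks like, here's a minimal leaky integrate-and-fire sketch (the parameters are illustrative, and something like Nengo wraps this in far more machinery):

```
import numpy as np

# Leaky integrate-and-fire: the workhorse unit of spiking neural networks.
dt, tau = 1.0, 20.0                    # time step and membrane constant, ms
v_thresh, v_reset = 1.0, 0.0           # arbitrary voltage units
v, spikes = 0.0, []

rng = np.random.default_rng(0)
for t in range(200):
    I = 0.06 + 0.02 * rng.normal()     # noisy constant input current
    v += dt * (-v + I * tau) / tau     # leak toward rest, driven by input
    if v >= v_thresh:                  # crossing threshold emits a spike...
        spikes.append(t)
        v = v_reset                    # ...and resets the membrane
print(f"{len(spikes)} spikes at t = {spikes}")
```

The information lives in spike times rather than continuous activations, which is a big part of why training these nets is a tough nut to crack.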


they are not. the current consensus in computational neuroscience is that a neuron can be equivalent to a 2-layer or 3-layer ANN with point nonlinearities.

Edit: I am blocked from posting, so here are the refs:

https://www.ncbi.nlm.nih.gov/pubmed/20800473

https://www.ncbi.nlm.nih.gov/pubmed/25554708

https://sci-hub.tw/https://www.sciencedirect.com/science/art...


Do you have any references you could share on this? I'd like to read more, thanks!


Minsky and Papert showed that computing XOR requires a 3-layer artificial neural net: input, hidden layer, output - which has 2 layers of weighted connections.

This[0] research shows a single biological neuron can compute XOR, thus a single artificial neuron has less computational power than a real one.

[0] - https://www.reddit.com/r/MachineLearning/comments/ejbwvb/r_s...
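
For the record, here's a minimal sketch of that two-weight-layer XOR net with hand-picked threshold units (the weights below are just one choice that works, not unique):

```
import numpy as np

def step(x):
    # Heaviside threshold: the classic McCulloch-Pitts activation
    return (x > 0).astype(int)

# Hidden layer: two threshold units over the same weighted sum of inputs
W1 = np.array([[1.0, 1.0],
               [1.0, 1.0]])       # both hidden units sum the two inputs
b1 = np.array([-0.5, -1.5])       # thresholds: "at least one" and "both"

# Output layer: fires for "at least one input, but not both"
W2 = np.array([1.0, -1.0])
b2 = -0.5

for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    h = step(np.array(x) @ W1 + b1)
    y = step(h @ W2 + b2)
    print(x, "->", y)             # prints 0, 1, 1, 0
```

Minsky and Papert's point is that no single threshold unit can do this; the hidden layer is what buys you XOR.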


That's exactly what TFA is about ;)


Ah interesting thank you


Around the time everyone was talking about tensegrity, there was a researcher who had figured out a mechanism by which cells can sense pressure, due, I think, to mechanical stress on molecules in the membrane being detectable inside the cell. Proteins and lipids as proverbial nerve endings. They speculated from this that neurons rely on quantum effects, possibly using similar mechanisms, and that the surface area is involved.

The arms of neurons have a lot of the surface area, don’t they?


Surface area of the neuron depends on the type of neuron. It can be a very important factor in summation of inputs and propagation of potentials, along with shape, diameter, myelination, etc.


The idea is that a certain simplification can work well enough to be valuable. It is similar with gravity, isn't it? We don't have a theory unifying gravity with quantum mechanics, but even Newton's abstractions serve us well in certain situations. We don't know how it behaves at the micro level, but we can still observe the macro effects and create an abstraction out of said behavior.

And to be honest, it took us somewhere. Yes, we don't have AGI or anything close to it, but the products of machine learning are something we depend on every single day now.


The risk is that we've developed a formalism and ecosystem that works on entirely different principles than the brain, even if it looks similar.

It still works, but it might not be a useful model to anyone studying the brain. I've yet to meet the neuroscientist who assumes it is, so perhaps that's not a problem.


> The risk is that we've developed a formalism and ecosystem that works on entirely different principles [...]

It's a possibility, but I sometimes have the feeling that people dismiss the idea that such a simple model of the brain could be enough to explain complex behaviors because they want to believe there is more to the brain.

I'm sure real neurons are very complex and difficult to model, but I also believe that the real challenge is to explain how neurons interact with each other, not how they behave individually.


And why do you believe the current non-intelligent ML is sufficient?

As it is, ML is running away rather fast from the integrator model by introducing explicit gating and nonlinearities in the neurons.

Yet it is still a toy compared to a nematode.
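
To make "explicit gating" concrete, here's a GRU-style multiplicative gate (shapes and weights are placeholders, not any particular published cell):

```
import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

rng = np.random.default_rng(0)
x, h = rng.normal(size=8), rng.normal(size=8)   # input and current state
W_g, W_c = rng.normal(size=(16, 8)), rng.normal(size=(16, 8))

z = np.concatenate([x, h])
g = sigmoid(z @ W_g)                # gate in (0, 1), learned in practice
c = np.tanh(z @ W_c)                # candidate new state
h_new = g * c + (1 - g) * h         # the gate decides what gets written
print(h_new.round(2))
```

Nothing like a plain integrator: one learned signal multiplicatively controls what another writes into the state.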


You're right, I doubt that the current models are enough to capture the complexity of the brain.

But models based on incredibly simple neurons can already produce quite complex behaviors. They show how many simple computing units interacting with each other can lead to things like vision. And I do believe that this is a fundamental principle.

Maybe we should explore that idea and scale this model up, instead of rejecting it as "too simple" and hoping that the complexity of the brain will be fully explained by the discovery of some quantum effect in neurons.

> As it is, ML is running away rather fast from the integrator model by introducing explicit gating and nonlinearities in the neurons.

I think the idea of a non-linear activation function has always been around. But for the rest I agree.


Inside each cell is a gene regulatory network (GRN). Active genes produce proteins. Some of those proteins increase or decrease the activity of other genes. The GRN as a whole can be modeled as a stochastic recurrent neural network.

The potential for computation inside each neuron is therefore similar to that of a recurrent neural network.
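
As a sketch of that analogy, expression levels can be written as a noisy recurrent update (the interaction weights here are made up, purely illustrative):

```
import numpy as np

rng = np.random.default_rng(0)
n_genes = 5

# Hypothetical interaction matrix: W[i, j] > 0 means gene j activates
# gene i, W[i, j] < 0 means it represses it.
W = rng.normal(0, 1, size=(n_genes, n_genes))
x = rng.uniform(0, 1, size=n_genes)    # current expression levels in [0, 1]

def grn_step(x, noise=0.05):
    # Each gene integrates its regulators through a saturating (sigmoid)
    # response, plus intrinsic molecular noise: a stochastic RNN update.
    drive = W @ x
    return 1 / (1 + np.exp(-drive)) + noise * rng.normal(size=x.shape)

for _ in range(10):
    x = np.clip(grn_step(x), 0, 1)
print(x.round(2))
```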


Except the clock in that network ticks roughly every 30 minutes; it's too slow for behavior. GRNs are relevant to long-term plasticity, which is protein-dependent and has prominent modulators (e.g. CREB).


Understanding of neurons goes way beyond that today; see https://en.wikipedia.org/wiki/Hodgkin%E2%80%93Huxley_model

ANNs are abstractions that seem to work. We still don't have a full understanding of how two neurons communicate.
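
For concreteness, here's a bare-bones forward-Euler sketch of the Hodgkin-Huxley equations with the standard squid-axon parameters (real simulators use sturdier integrators):

```
import numpy as np

C = 1.0                                # membrane capacitance, uF/cm^2
g_Na, g_K, g_L = 120.0, 36.0, 0.3      # max conductances, mS/cm^2
E_Na, E_K, E_L = 50.0, -77.0, -54.4    # reversal potentials, mV

def a_m(V): return 0.1 * (V + 40) / (1 - np.exp(-(V + 40) / 10))
def b_m(V): return 4.0 * np.exp(-(V + 65) / 18)
def a_h(V): return 0.07 * np.exp(-(V + 65) / 20)
def b_h(V): return 1 / (1 + np.exp(-(V + 35) / 10))
def a_n(V): return 0.01 * (V + 55) / (1 - np.exp(-(V + 55) / 10))
def b_n(V): return 0.125 * np.exp(-(V + 65) / 80)

dt, T = 0.01, 50.0                     # ms
V, m, h, n = -65.0, 0.05, 0.6, 0.32    # resting state
for i in range(int(T / dt)):
    I_ext = 10.0 if i * dt > 5 else 0.0            # step current, uA/cm^2
    I_ion = (g_Na * m**3 * h * (V - E_Na)
             + g_K * n**4 * (V - E_K) + g_L * (V - E_L))
    V += dt * (I_ext - I_ion) / C
    m += dt * (a_m(V) * (1 - m) - b_m(V) * m)      # Na activation
    h += dt * (a_h(V) * (1 - h) - b_h(V) * h)      # Na inactivation
    n += dt * (a_n(V) * (1 - n) - b_n(V) * n)      # K activation
print(f"final V = {V:.1f} mV")
```

And that's still just a point model of the soma; it says nothing about what the dendrites are doing.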


Using some back-of-the-napkin math, there are approximately a hundred trillion atoms in a typical neuron.

https://www.quora.com/If-a-neuron-were-a-galaxy-how-would-th...


Computing weighted sums and thresholds gives you a universal function approximator, so that's nice. I'm more interested in how the buggers set those weights and thresholds. It is hard to imagine them having access to any gradient information, so how do they work?


They do have chemical gradients, but the mechanism is not even comparable to simple backpropagation...
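
One classic answer is local, gradient-free plasticity rules. A sketch of Oja's rule, where each synapse updates only from quantities it can "see" locally (pre- and post-synaptic activity), yet the weight vector converges to the input's first principal component:

```
import numpy as np

rng = np.random.default_rng(0)
X = rng.multivariate_normal([0, 0], [[3, 1], [1, 1]], size=5000)

w = rng.normal(size=2)
lr = 0.01
for x in X:
    y = w @ x                       # post-synaptic activity
    w += lr * y * (x - y * w)       # Hebb term + local normalization

print(w / np.linalg.norm(w))        # ~ leading eigenvector of cov(X)
```

No global error signal, no gradient of a loss; just local correlations.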


For those of you who want to be horrified / amused by code, check out the materials and methods of the article itself. It contains a link to the full source code used to generate the modeling figures:

https://senselab.med.yale.edu/ModelDB/showmodel.cshtml?model...

The simulator used is called NEURON. Code for it is written in a custom language called Hoc, and models are implemented in yet another domain-specific language. Hardcoded parameters galore, including whole sections of code prefaced with: this has been copied from this other publication.

https://senselab.med.yale.edu/ModelDB/showmodel.cshtml?model...

The dendritic tree geometry is specified as a hardcoded set of point and branch declarations:

https://senselab.med.yale.edu/ModelDB/showmodel.cshtml?model...

(apparently that morphology was reconstructed from imaging data?)


Why horrified? A quick scan and it's actually nicely laid out and indented, split into logical sections and commented.

Without wanting to spend a lot of time on it, why is it horrific?

There's nothing intrinsically wrong with hardcoded values as they're biological constants, and nicely identified by the standard convention of using ALL_CAPS_VARIABLES.

Edit: Scanning a bit more, he's probably referring to this file, which is pretty horrific:

https://github.com/ModelDBRepository/254217/blob/master/_mor...


The morphology files are automatically generated by a 3D reconstruction program called Neurolucida. NEURON is an ancient program, written in 1983, using a custom parser for the ad-hoc language literally called hoc. It has a hardcoded limit on the number of lines a function can have (hence you have to split func1(), func2(), etc.) and a gazillion bugs. It uses a modified kinetic equation solver which compiles those .mod files to a DLL that NEURON then uses to run. It uses some GUI library called InterViews from god knows where. It's generally a rather bad program nowadays, but it was written by the guy who invented a faster Crank-Nicolson method to integrate the cable equations (Hines). Despite a few alternatives existing, NEURON is still the most popular because there are a lot of ion mechanism kinetic modules (the .mod files) that you can download and reuse from previously published models, which have been hand-tuned to be quasi-realistic. NEURON is meant to be used by neurophysiologists, not by programmers.

I think a mass open source campaign to replace NEURON and ModelDB with something more modern, and clean up and convert every published model would be worth the effort. There are quite a few alternative programs (GENESIS, Neuroconstruct etc) that are either incomplete or abandoned because of lack of funding/interest.
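
For the curious, the numerical heart of such simulators is the cable equation; here's a toy Crank-Nicolson version in Python (normalized units, sealed ends; a sketch of the scheme, not what NEURON actually does):

```
import numpy as np

# Passive cable: tau dV/dt = lam^2 d2V/dx2 - V + I(x), Crank-Nicolson in time.
n, dx, dt = 100, 0.05, 0.025
tau, lam = 1.0, 1.0

# Second-difference operator with reflecting (sealed-end) boundaries
D2 = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
      + np.diag(np.ones(n - 1), -1)) / dx**2
D2[0, 1] = D2[-1, -2] = 2 / dx**2

A = (lam**2 * D2 - np.eye(n)) / tau
M_lhs = np.eye(n) - 0.5 * dt * A          # implicit half-step
M_rhs = np.eye(n) + 0.5 * dt * A          # explicit half-step

I_inj = np.zeros(n); I_inj[0] = 5.0       # current injected at one end
step_mat = np.linalg.solve(M_lhs, M_rhs)  # precompute the CN update
inj_vec = np.linalg.solve(M_lhs, dt * I_inj / tau)

V = np.zeros(n)
for _ in range(400):
    V = step_mat @ V + inj_vec
print(V[::20].round(3))                   # voltage decays along the cable
```

NEURON's real job is doing this on branched trees with active (HH-style) conductances in every compartment, which is where those .mod files come in.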


Alas, the possibility of comments like this is one reason that some researchers hesitate to release their code.

Even if there are issues, it's tremendously valuable that they shared this, so others can reproduce and build on their work. That said, most scientists would like to have better quality code in science as well; I think it's the incentive structures and other demands that make it challenging...


As the saying goes, “if a brain were easy enough to understand, we’d be too stupid to understand it.”


As someone who studied neurobiology more deeply than most CS people, I've been saying for a long time, mostly into the wind, that neurons are not point objects that can be simply modeled by equations. They are organisms with incredibly complex behaviors, internal gene regulatory structures, and other information processing capabilities.

Neural networks are coarse grained models of large scale brain structure. The fact that these models can be taught to do very interesting brain-like things (especially pattern recognition) demonstrates that this structure is important and fundamental, but that doesn't mean it's the whole picture of what's going on in the brain.

I say talking into the wind because in my experience most CS people tend to hand-wave away biology at the cellular level and below. There's this zealotry around us being "almost there" with AGI and the rest that blinds people to the real magnitude of the problem and how much is really happening in the brain. This likely includes a whole lot that we haven't even started to really understand.


Amen.


> It may also prompt some computer scientists to reappraise strategies for artificial neural networks, which have traditionally been built based on a view of neurons as simple, unintelligent switches.

It has been suggested that neural networks have diverged from biological principles to the point where biological research does not provide any useful improvement to current machine learning techniques. Back propagation of errors was a major advancement in the 1970s, and it was developed purely based on mathematical principles like statistics, differential equations, and calculus. As far as I have read, error back propagation is not how biological NNs learn, and it seems to be a much more efficient strategy. Biology seems much more brute force in comparison.


> It may also prompt some computer scientists to reappraise strategies for artificial neural networks, which have traditionally been built based on a view of neurons as simple, unintelligent switches.

I find this is a conveniently simplistic view of the current state of machine learning, which they wiggle out of by saying "traditionally built". Anyone working on ML and neural networks these days knows that modern networks are built out of building blocks -- certain standard architecture archetypes (MLP vs CNN, autoencoders, GANs, etc), resnet blocks, training regimes (layer pre-training, progressive training, etc), normalization methods, and so on. Sure, the term "neural networks" originally implied that each linear-non-linear operation pair represented a single neuron, but it's not a stretch to say that if neurons are more powerful than we thought, then each neuron represents, say, a small MLP block, or a resnet block. The biological analogy still stands. Citing ideas of ANNs from two decades ago to promise how new results in biology may change "current" thinking in ML (by citing decades-old ideas while calling them "traditional" no less!) is disingenuous.


> Biology seems much more brute force in comparison.

Biology is more decentralized for sure, but I wouldn't be so fast with calling it "brute force". One interesting feature of biological neural networks is the variety of neurotransmitters. In very simple terms, they are not only capable of exciting or inhibiting neurons from firing, but they are also capable of regulating how neurons adapt to the stimuli they have been exposed to. In other words, the biological network not only learns, but it runs the learning algorithm itself. It appears to do this by creating various layers of communication with different effects on the computing nodes.

In my view, we have not been smart enough yet to figure out nature's algorithm.
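
One way to picture "the network running its own learning algorithm" is a three-factor plasticity rule, where a global modulator gates Hebbian updates. The rule and constants below are assumptions for illustration, not a specific published model:

```
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(0, 0.1, size=(8, 8))

def plasticity(pre, m, lr=0.01, decay=0.001):
    post = np.tanh(w @ pre)
    # Hebbian outer product gated by the modulator m (a stand-in for e.g.
    # dopamine): m > 0 reinforces the currently active synapses, m < 0
    # weakens them, m = 0 leaves the weights alone.
    dw = lr * m * np.outer(post, pre) - decay * w
    return post, dw

pre = rng.normal(size=8)
post, dw = plasticity(pre, m=+1.0)   # a "rewarded" episode
w += dw
```

The learning rule itself becomes a function of network-wide signaling, which is the point above.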


>Biology seems much more brute force in comparison.

A single honeybee, with its billion synapses, can adapt to a variety of situations and environments, learn on the fly, perform complex sequences of tasks, and cooperate and communicate with other bees. All of these capabilities are emergent and packed into its tiny head.

State-of-the-art artificial neural networks (ranging beyond billions of parameters now) only do the thing they're specifically built for, only after training on a bazillion specific examples, and they consume tons of energy while doing so.

Which one of these sounds like the brute force approach?


The majority of behavior in organisms like bees is instinctual, not learned in their lifetime. That training required millions of billions of trials over the course of hundreds of millions of years.


>The majority of behavior in organisms like bees is instinctual, not learned in their lifetime.

It doesn't matter whether their behavior is considered "instinctual". What matters is that they can quickly adapt their behavior to entirely novel scenarios:

https://science.sciencemag.org/content/355/6327/833


“Brains may be far more complicated than we think,” as said by anyone who has worked with brains! Good stuff though, very interesting :)


As said by a brain studying brains


Don't get me started. If a complex brain can finally understand the brain, does that not mean we will still be missing the special sauce that was the cause of the complex comprehension? Er, I'm out!


This is why in some philosophies, there is a big difference between experiential knowledge and awareness and empirical ‘measurement’ knowledge.


Ah, it's spiking edges as opposed to spiking node activations. I like this because it acts on each of many connections individually, rather than carving up the entire graph by spiking neurons. There are quadratically more edges than nodes in a dense net. Connections are always more important than entities.

> The dendrites generated local spikes, had their own nonlinear input-output curves and had their own activation thresholds, distinct from those of the neuron as a whole

How would this work in practice? Apply activations to multiplication values of the weight, or just don't perform the multiplication if the activation of the node is low?
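
One naive way to sketch it: give every edge its own threshold nonlinearity before the somatic sum, instead of summing first (names and parameter choices here are hypothetical):

```
import numpy as np

def relu(x):
    return np.maximum(0, x)

def dendritic_layer(x, W, b, theta):
    # Per-edge ("dendritic") nonlinearity: each weighted input is
    # thresholded on its own before the soma sums the survivors.
    contrib = relu(W * x[None, :] - theta)   # shape (n_out, n_in)
    return relu(contrib.sum(axis=1) + b)     # somatic sum + threshold

rng = np.random.default_rng(0)
x = rng.normal(size=4)
W = rng.normal(size=(3, 4))
theta = np.full((3, 4), 0.1)                 # one threshold per connection
b = np.zeros(3)
print(dendritic_layer(x, W, b, theta))
```

Note this adds a parameter and a nonlinearity per connection, which hints at how much extra capacity dendrites could contribute.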


Change the simulation unit from neurons to dendrites, and mix in different kinds of gates and nonlinear activations.

Which is what's being done lately, but then we know next to nothing about the topology of real neural nets, or about electrochemical communication inside a neuron.


I wonder if this is also the case for neurons in other animals. Probably it is, but it would be a stunning revelation if it is not the case.


From the top reddit comment:

> This paper is amazing. What is missing from the description above is that this is the first example of how human neurons are qualitatively different than rodent neurons (not only more computation power, but categorically different computation).

> ELI5: the way the biological human neuron implements XOR is by a formerly unknown type of local response to inputs, which is low below the threshold, maximal at the threshold and decreases as the input intensifies above the threshold. We never saw anything like that in any other animal. (link to the relevant figure from the paper)

Rodents being presumably the go-to non-human object of study.


There are other differences in human neurons. I saw a presentation recently that said populations of human neurons can synchronize at far higher frequencies than mouse neurons, into the thousands of hertz.

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5100995/


Could you link the Reddit comment please?


Oh. The link was changed on HN.

Here is the reddit comment: https://www.reddit.com/r/MachineLearning/comments/ejbwvb/r_s...


Notably, this kind of dendritic spike has not been found in other animals that have been investigated extensively for dendritic spikes (mice, rats, monkeys).


OT, but "They obtained slices of brain tissue from layers 2 and 3 of the human cortex" has a slightly uncomfortable air to it.



The neurons in the Creatures video games come to mind.

  In Creatures Evolution Engine games such as Creatures 3, each neuron and dendrite is a fully functional register machine.
https://creatures.wiki/Brain#SVRules


I have written a short, clear, and dense book about this exact subject. It's free! Read it here:

http://www.corticalcircuitry.com/


A preprint that came out after I released the book, which I would otherwise have mentioned in it:

https://www.biorxiv.org/content/10.1101/613141v1.full

They model a single cortical neuron as a deep neural network with 7 hidden layers consisting of 128 hidden units each.

Biological neurons are A LOT more powerful than "neurons" in neural networks. If you hear a claim about computer-brain parity being close - the people making it almost certainly don't understand the power of cortical neurons.
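
For scale, here's what that stand-in network looks like structurally. Weights are random placeholders, and the input size is a made-up stand-in for the number of synaptic input channels, not the preprint's actual setup:

```
import numpy as np

rng = np.random.default_rng(0)
n_inputs = 1000                             # hypothetical synaptic channels

sizes = [n_inputs] + [128] * 7 + [1]        # 7 hidden layers x 128 units
layers = [(rng.normal(0, 1 / np.sqrt(n_in), (n_in, n_out)), np.zeros(n_out))
          for n_in, n_out in zip(sizes[:-1], sizes[1:])]

def forward(x):
    for i, (W, b) in enumerate(layers):
        x = x @ W + b
        if i < len(layers) - 1:
            x = np.maximum(0, x)            # ReLU between hidden layers
    return x                                # e.g. predicted somatic output

n_params = sum(W.size + b.size for W, b in layers)
print(f"{n_params} parameters to mimic ONE neuron")
```

That's a couple hundred thousand parameters just to imitate the input-output behavior of a single cell.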


Question for ML people: would backpropagation work if some of the neurons had nonmonotonic activation functions (for example exp(-x²)), or would the gradient descent get stuck on one side or the other?


Gradient descent should have no issues with a function like exp(-x^2). Actually, softmax (softmax(x_i) = exp(x_i)/sum_j(exp(x_j))) is sometimes used as an activation function. It could make sense to modify the softmax function to use -x^2 in place of x for some use case. However, it doesn't always make sense as a drop-in replacement for other activation functions like ReLU or sigmoid. It really depends on your use case.


It would probably work with careful choice of learning rate, initialization, and weight decay to keep signals small. Batch norm would play a larger role (probably want to use it after the activation fn). I don't see why it would get stuck on either side, but it could obviously get stuck if enough signals grow too large on both sides.
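
A quick toy test of the original question: one exp(-z^2) unit trained on XOR by plain gradient descent. With some seeds it converges nicely; with others the near-zero gradient around z = 0 stalls it, which is exactly the getting-stuck worry:

```
import numpy as np

rng = np.random.default_rng(1)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
t = np.array([0.0, 1.0, 1.0, 0.0])

w = rng.normal(0, 0.5, size=2)
b = rng.normal(0, 0.5)
lr = 0.5

for epoch in range(5000):
    z = X @ w + b
    y = np.exp(-z**2)                  # nonmonotonic "bump" activation
    err = y - t
    # chain rule: dL/dz = 2*err * dy/dz, with dy/dz = -2*z*exp(-z^2)
    dz = 2 * err * (-2 * z * y)
    w -= lr * (X.T @ dz) / len(X)
    b -= lr * dz.mean()

print(np.round(np.exp(-(X @ w + b)**2), 2))   # ideally [0, 1, 1, 0]
```

Amusingly, a single bump-shaped unit can solve XOR here, echoing the dendritic finding in TFA.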


Anybody know where these super-neurons are supposed to be located in the brain? Are they exclusively located in the neocortex?


Neocortex, from the anterior temporal lobes of epileptic patients. This kind of dendritic response has not been found in the dendrites of other mammals so far. It's not a new type of neuron.


So does this point to us being biological machines? Is it already accepted that we are biological machines?


I would guess that this is near-universally accepted among neuroscientists, and people in the exact sciences in general.

Although I don't see how it would make a difference to this question where in the brain the computational power arises.


You'll have to be more specific with what you mean by "machines". In what ways would machines be different from non-machines?


That's a good question.

I guess I'd say a Turing machine is my definition of a machine.

And I suppose the implication is that if we are machines, operating on a stream of input, does that make us deterministic?

EDIT: What I'm ultimately interested in is if we, as humans, are operating in a realm that logic cannot operate in. If that makes any sense? I'll probably have to define realm!


A Turing machine is a mathematical model of computation. A deterministic Turing machine (DTM) is computationally equivalent to a non-deterministic Turing machine (NTM), but the time complexity may not be the same.

>What I'm ultimately interested in is if we, as humans, are operating in a realm that logic cannot operate in. If that makes any sense?

Any mix of logic and randomness is still a machine. You must assume something spiritual if you think there exists rational action without computation behind it. Randomness does not help there.


> You must assume something spiritual if you think there exist rational action without computation behind it.

I'm still uncertain i.e. spiritual. I'm finding, as I grow older, logic and reason fail to answer for my experiences. Whether that's a shortcoming in my own ability to comprehend logic and reason, or because logic and reason simply do not have all the answers, I do not know.

> Randomness does not help there.

The human condition, one might argue, emerged as a way to handle randomness. I suppose, at some level, if we were to remove all randomness from the universe, life would be pretty pointless (insomuch as life is not already pointless.)


What is the purpose of life, which you say depends on randomness?


Well, the future as a mystery, something to be discovered, not known in advance, depends on randomness. I would argue if life weren't random and we could see past, present, and future with perfect clarity, it would be a little pointless. Every action you took, you'd know the outcome before taking the action. It's boring.

There is no purpose of life, in my opinion. But with randomness it's a lot more interesting.


As always, being deterministic may make a system predictable in theory, though in practice there are too many variables for it to matter. To soften the existential dread.


Existential dread is most definitely softened! Thank you.


You could argue that "man-made machines", "biological machines" (all of life, animal, vegetal and fungus) and "simple matter machines" (like stars are engines or telluric planets are combustion heaters), even the "surface natural ecosystem of Earth", are all different material implementations of machines: organized systems. Carbon-based, copper-based, hydrogen-based... you might map the whole periodic table of elements minus the rare column perhaps.

Ultimately you'd find a common unicity — like electrical charges, strong/weak force, arrangements of "gates" and "structure" etc. — but used in different ways (for instance iirc charged clouds of gas in space don't form new stars because gravity at close range is weaker than the charge that repels them, only strong enough at larger distances to hold them together; that's dramatically different from the use of charges in biological cells).

The cosmos itself is but a big machine. Who's to say we (I mean the whole planet, maybe star system, maybe galaxy itself) aren't actually just a single "cell" of the cosmos? That we are part of a much bigger machine, that we are like those processing units on dendrites in the article, if the universe is a big brain of sorts?[1]

These are all types of machines (X. process of information; Y. engines to convert energy; Z. structures that "restrict" "flow" like gates, transistors, cell membranes, dendrites; N...), which apparently may be expressed, materially implemented in different ways.

To take your question about logic.

If you mean the universal logic exposed above, I don't think so personally — merely because there's no evidence for it whatsoever, no observation; and there's also no need if we accept that from such "machinery" complexity may emerge complex systems (e.g. humans).

If you mean logic from within the human mind — and this begs the question of whether maths and physics are "invented" or "discovered" in the background — then we must assume a subjective answer, at best an aggregate of "non-disproved facts" that we can all agree on within normalcy (edge cases helping us understand said average norm).

"Logic" is but one of several operating modes. Are we more than that? Most certainly yes. That's the experience of all of us. Because perception is so centered on thought, we tend to overappreciate the importance of thinking in our lives, in our behaviors and even values; but the relative picture gradually revealed by biology and psychology and sociology and economics etc. is that we are mostly irrational, mostly automatic (trained habits, patterns recognition, educated intuition, etc), and actually very little in the way of "logical behaviors units".

I'll let you ponder the discrepancy from an absolutely (perhaps) low-level logical machine to the possibility of a chaotic non-logical emergent high-level behavior. What it means for life, for AI, and possibly much bigger or smaller things we consider "inert", for now.

[1]: I mean, look at the larger structures of the universe, tell me it doesn't look like a sample from a biological tissue... http://cosmology.com/GalacticWalls.html (figure 6: http://cosmology.com/images/darkmatterdistribution.jpg)


> Because perception is so centered on thought, we tend to overappreciate the importance of thinking in our lives, in our behaviors and even values

Very true. Thought can become our master easily. As a programmer, I often find myself struggling to get out of 'logic' mode or 'thinking' mode.

> I'll let you ponder the discrepancy from an absolutely (perhaps) low-level logical machine to the possibility of a chaotic non-logical emergent high-level behavior. What it means for life, for AI, and possibly much bigger or smaller things we consider "inert", for now.

Wonderfully put! I think the ideas of emergent behaviours are a saving grace for the incessant desire of ours to reduce everything to its atomic (in the sense of an atomic operation, not an atom) components. The emergent behaviours can't be predicted. They can't be algorithmized, because to build algorithms one must go down to those atomic operations.

My wider concern is that we are currently obsessed with analytical thought and philosophy, and that continental philosophy has become a second-class citizen. I think this has happened because analytical thought lends itself to being algorithmized, whereas continental philosophy does not.

Sorry, that was a bit of a tangent!

Yes! Love the pictures! I have had the same thought/feeling when I first saw them too!


> I think the ideas of emergent behaviours are a saving grace for the incessant desire of ours to reduce everything to its atomic (in the sense of an atomic operation, not an atom) components.

Very well said! Humans think a lot in terms of dichotomies (here the micro/macro, or whichever scale you want to consider), but I've heard biologists and physicists explain that the closest thing to fundamental functions is e (natural log), sine (and hyperbolic stuff); the notion of "binary polarity" is very specific and quite limited. Look at the Standard Model, it's clearly more complicated.

I personally think that's how the dimension of time emerges in our perception (any perfect observer): periodicity in all phenomena (but with e.g. Fourier it gets extremely complex at emergent thresholds), the arrow of time (non-periodicity is basically high entropy, 'heat death'/homogeneity).

Cue any such human-driven dichotomy. I think this one is relatively easy: the deepest processing system is probably close to "emotions", and these at the lowest level work in terms of "rather good" and "rather bad" (different sub-regions of the brain) plus some "general integration" (a third region). And what do you know, we tend to be heavily polarized in general; it's like most people feel reassured (familiarity) when things are explained in terms of black and white, left and right, ones and zeros.

(Just my thoughts on it.)

> My wider concern is that we are currently obsessed with analytical thought and philosophy and that the continental philosophy has become a 2nd class citizen. I think this has happened becuase analytical thought lends itself to being algorithmized, whereas continental philosophy does not.

Strongly agreed. I think we're witnessing a kind of "revival" through many fields (some spiritual, some historical). Things are moving. I think dumb things like numbers also influence people: it's been ~20 years since the turn of the millennium, and that's enough for one generation to weigh in and the others to accept change because "oh, new era, obviously, new number!" This plays at the subconscious level of very crude processing, methinks.

Nice tangent, hopefully not too hyperbolic. ;-)


Edit: not sure what that website is, it was a Google search. Here's another source: https://phys.org/news/2014-11-filamentary-galaxies-evolve-co...


it would be news if we weren't


I remember reading this before, in a comment somewhere here on HN




