Neurons unexpectedly encode information in the timing of their firing (quantamagazine.org)
276 points by theafh on July 7, 2021 | 108 comments



> For decades, neuroscientists have treated the brain somewhat like a Geiger counter: The rate at which neurons fire is taken as a measure of activity, just as a Geiger counter’s click rate indicates the strength of radiation.

This hasn't been true for at least 20 years. There's a classic paper showing evidence of this in rats from 1993 (O'Keefe and Recce), and it has been an active area of discussion both before and since. (Note that this isn't to say that rate isn't important, but as a community no one has believed that all the information is encoded in rate for many years.)

There are lots of good explainers here that link to relevant research: http://www.scholarpedia.org/article/Encyclopedia:Computation...


They mentioned this in the 3rd paragraph: "This temporal firing phenomenon is well documented in certain brain areas of rats, but the new study and others suggest it might be far more widespread in mammalian brains."


Hilarious. We're told we shouldn't trust rat/mouse studies, but that's typically because mice are way LESS complicated than humans. I would expect complicated behavior in mice to be treated as a lower bound for the complexity in humans. (But what do I know, I'm just a monkey.)


There are lots of animals that can beat us at specific neural tasks. So I don't think it's helpful to think in those terms. For example, our visual short term memory is bested by chimpanzees, at least on certain tasks used to measure it across both chimps and humans.

Lowly mice likely have better olfactory capabilities than us. It wouldn't surprise me if their brains can handle some very specific things better than we do.


>There are lots of animals that can beat us at specific neural tasks.

Anecdotally: in front of us, a woman with a dog was leaving the dog park. The dog pulled in the opposite direction from the one the woman was trying to go, and this went on for a few seconds until the woman said "Oh! You're right, today we parked there" and followed the dog.

And on two occasions, separated in time and space, I saw a raccoon confidently crossing a busy boulevard (once it was Page Mill in Palo Alto and the other was Geary in SF; the Geary crossing was around 6-7pm, getting dark, high traffic - such a crossing presents some cognitive challenge even for humans), following the basic procedure we are taught in childhood: check for traffic (and the raccoons were checking in the correct direction), wait for a sufficient opening, cross to the middle, check and wait again, cross.


As a kid I had a cat that would look both ways before crossing the street.


I've seen stray dogs many times waiting for the traffic light to turn green and then walking across the pedestrian crossing when most humans weren't.


Maybe. Humans have great olfactory sensitivity and can rival dogs with some smells, but our nose is too far from the ground to be useful for many tasks, which in turn leads to a lack of use - we kind of ignore it and generally don't get how to use it.

https://www.livescience.com/59070-human-sense-of-smell-sensi...


"Scholarpedia [.org] : the peer-reviewed open-access encyclopedia (where knowledge is curated by communities of experts)."

Thank you, thank you, thank you.

This fills an important gap.


Agreed, that opening line suits neuroscience poorly. "For decades, meteorologists have treated the environment somewhat like a Geiger counter: the temperature is taken as a measure of energy, just as a...", it's ludicrous!



Rate coding vs temporal coding is literally a meme in neuroscience because the two camps seem to refuse to compromise.

Everyone else has realized that both happen depending on how that particular part of the nervous system works, or even what particular kind of information is flowing through it.

This title reads like it was written by a rate coder who woke up one day and was like "Woah, you mean ... there might be more to it than just averaging spike counts per unit time???"

edit: Hah, they are literally the first two headings in the wikipedia article on neural coding [0].

0. https://en.wikipedia.org/wiki/Neural_coding
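(A minimal illustration of the distinction, with all numbers made up: the same spike train read as a rate code, which only counts spikes, versus a temporal code, which also looks at when each spike lands relative to an ongoing theta rhythm.)

    # Toy example only: one spike train, two readouts.
    import numpy as np

    rng = np.random.default_rng(0)
    window = 1.0                                              # seconds
    spike_times = np.sort(rng.uniform(0.0, window, size=12))  # hypothetical spikes

    # Rate code: all the information is the count per window.
    print("rate code reads:", len(spike_times) / window, "Hz")

    # Temporal code: the phase of each spike relative to an 8 Hz theta wave
    # carries extra information that the count throws away.
    theta_freq = 8.0
    phases = (2 * np.pi * theta_freq * spike_times) % (2 * np.pi)
    print("temporal code also reads phases (rad):", np.round(phases, 2))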


I feel like the more you learn about neuroscience, the more you learn how precise and complex neural information processing is. For instance, there's evidence that at least some neurons "record" information about their activity in the form of RNA. Also the placement of synapses in the dendritic arbour matters, and some synapses can act like logic gates with respect to the synapses farther down the same branches.

I think there almost certainly must be neural behavior which codes fairly simply based on rate, but it's very difficult to believe that there isn't neural computation based on precise timing relationships.


> there's evidence that at least some neurons "record" information about their activity in the form of RNA

Source? I'm not a neuroscientist, but I'd have thought that I'd have heard of this if there were legit evidence. Are you saying that neurons might use RNA as a sort of "long-term" memory of activation patterns? This seems really unlikely! But again, I'm not a neuroscientist.



> I'd have thought that I'd have heard of this if there were legit evidence

Changes to DNA would be the long term changes, not RNA. But yes, look up epigenetics, and especially DNA methylation.


This might be a reason why the immune system is kept out of the brain.


I think it's also fascinating to learn about computation, sensing, reaction etc. in other cells, both within multicell organisms and when looking at single cell organisms.

There was an article on this that I don't know how to find again - the point being that neurons are not unique in their capacity for computation, they are only the most evolved form of it.


Yeah, I mean if you look at the structure of neurons it's easy to understand them as an extremely optimized version of "normal" intercellular communication. All a synapse really is, is chemical signaling between two adjacent cells.

The structure of neurons allows them to achieve "adjacency" to other cells which would be far from one another in most tissues, and electrical transmission of the action potential just allows neurons to control that chemical signaling in a very precise and directed way.


First rule about sufficiently evolved systems: They're more interconnected and involved than you likely first expect.


Neurophysiology and phase precession have a storied history, from the 1990s up to the Nobel Prize in 2014.

Wikipedia: https://en.wikipedia.org/wiki/Phase_precession

O'Keefe and Recce in 1993: https://onlinelibrary.wiley.com/doi/abs/10.1002/hipo.4500303...

... [F]iring consistently began at a particular phase as the rat entered the field but then shifted in a systematic way during traversal of the field, moving progressively forward on each theta cycle... The phase was highly correlated with spatial location... [B]y using the phase relationship as well as the firing rate, place cells can improve the accuracy of place coding.


In my very different field (robotics and industrial automation), temporal coding is one of the most powerful ways to expand your IO. Nearly all PLCs, sensors, and robots make heavy use of digital IO. But this parallel interface is limited, especially with hard-wired signals, and even if you're using serial network protocols the typical fieldbus abstraction represents the network as fixed-size buffers of digital IO that update every few milliseconds. Analog signals are usually an expensive optional extra!

However, the controllers all include high-resolution timers. If you need to transmit an analog value, rather than bit-coding it over 12 discrete digital IO, a clever programmer might turn on a digital output for the desired number of milliseconds, or select between 10 or 16 or however many states you want to represent with your one wire using a predefined list of durations. You can convey far richer information this way!

Always interesting to see what researchers are discovering in the automated control system that is biology... sometimes we can discover techniques for use in industry with biomimicry, sometimes biology we didn't know about seems to imitate technology we developed independently.


Not sure I understood you.

Did you describe PWM?


No, typical PWM is a repetitive hardware function of an embedded system, and operates at kHz or MHz. Industrial controls usually have a scan time or requested packet interval on the order of 10 ms, much too slow for PWM without dedicated peripherals. And even if you can transmit it, you still need to read it on the other side; again, high-speed IO is a dedicated thing, and you won't buy step-and-direction outputs or encoder inputs for unused spares.

A PLC and a robot might interchange digital signals such as "In cycle", "Part present", "Faulted", "Clear of fixture", "Screw present", and "Cycle start". Hopefully the original designers also pulled a couple spare wires in case you need to add another sensor. But the old equipment is being asked to do something new, requirements now say you need to transmit, perhaps, which of dozens of part numbers to select from, or the touch point at which a sensor fired between 10 and 100mm. That's a binary signal with 6 or 7 bits, which means a $200 8-channel output card, multiconductor cable, and a $200 input card. When possible, you'd buy the fieldbus option cards from the factory and pull network cables. But in a pinch, with the more typical two spare inputs and two spare outputs, you can move some data to the time domain and work out "If aux1 is pulsed high for 680ms, the new sensor tripped at 68mm". One wire, one bit of state (0V or 24V), many values transmitted.

You could extend this concept with a clock signal, data signal, and transmit arbitrary serial data, but that's going a bit too far for most maintenance techs. A pair of timers is comprehensible.
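(A rough sketch of the pulse-duration trick described above, with invented scaling; a real implementation would live in ladder logic or structured text on the PLC, not Python.)

    # Pulse-duration signaling on one spare digital output (numbers invented).
    SCALE_MS_PER_MM = 10      # 68 mm reading -> hold the output high for 680 ms
    RESOLUTION_MM = 1         # round to whole millimeters to absorb scan-time jitter

    def encode(distance_mm):
        """Sender side: how long (ms) to pulse the spare output."""
        return distance_mm * SCALE_MS_PER_MM

    def decode(pulse_ms):
        """Receiver side: recover the reading from the measured pulse width."""
        return round(pulse_ms / SCALE_MS_PER_MM / RESOLUTION_MM) * RESOLUTION_MM

    assert decode(encode(68)) == 68   # "pulsed high for 680 ms" -> tripped at 68 mm
    assert decode(683) == 68          # a few ms of scan-time jitter still decodes cleanly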


This is a cool hack! But boy does it terrify me!

I've spent so much time debugging weirdness getting into signals from things like lightning strikes, AM radio stations, and nearby switching power supplies....


This has been somewhere between highly suspected and well established for a few decades now, certainly not unexpected. These findings simply back up the present consensus.


The headline seems to be somewhat at odds with the explanation of this "phase precession" in the body of the article:

"The phenomenon is called phase precession. It’s a relationship between the continuous rhythm of a brain wave — the overall ebb and flow of electrical signaling in an area of the brain — and the specific moments that neurons in that brain area activate. A theta brain wave, for instance, rises and falls in a consistent pattern over time, but neurons fire inconsistently, at different points on the wave’s trajectory. In this way, brain waves act like a clock, said one of the study’s coauthors, Salman Qasim, also of Columbia. They let neurons time their firings precisely so that they’ll land in range of other neurons’ firing — thereby forging connections between neurons."

My understanding of what they're saying is that it's related to "neurons that fire together wire together". Given different signal travel distances, it's necessary for neurons to fire at different times if they're to arrive at a given destination at the same time (in order to "wire together"). They achieve this timing by firing in synchrony with the theta brain waves that travel across regions.

So, with this understanding, I guess you could say the timing is encoding information, but really in essence it's only the relative spatial position - within the cortex - of the firing neuron that's being "encoded". A more useful way to view it is just that firing times are synchronized to theta waves in order to achieve larger scale coordination that compensates for signal travel distances.
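(A toy model of the phase precession being described, with invented numbers: one cell fires once per theta cycle, and its firing phase advances steadily as the animal crosses the place field, so phase tracks position even if the firing rate stays flat.)

    # Toy phase-precession model; every constant here is made up for illustration.
    import numpy as np

    n_cycles = 8                                   # theta cycles spent crossing the field
    position = np.linspace(0.0, 1.0, n_cycles)     # normalized position in the place field
    phase = np.pi * (1.0 - position)               # firing phase precesses ~180 deg -> ~0 deg

    for pos, ph in zip(position, phase):
        print(f"position {pos:.2f} of field -> fires at theta phase {np.degrees(ph):6.1f} deg")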


Fire together, wire together refers to the connection between two neurons strengthening.

Firing just before the recipient neuron fires strengthens the bond, and firing afterwards/off tempo weakens the bond.

It's an elegant concept, because it handles neural weights, a non-linear activation function, and clock speed with a simple, distributed rule.
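(A minimal sketch of that timing rule, i.e. spike-timing-dependent plasticity, with made-up constants.)

    # Spike-timing-dependent plasticity (STDP) in one function; constants invented.
    import math

    A_PLUS, A_MINUS = 0.01, 0.012   # potentiation / depression learning rates
    TAU_MS = 20.0                   # how fast the effect decays with the timing gap

    def weight_change(t_pre_ms, t_post_ms):
        dt = t_post_ms - t_pre_ms
        if dt > 0:   # presynaptic spike arrived just BEFORE the postsynaptic one: strengthen
            return A_PLUS * math.exp(-dt / TAU_MS)
        else:        # presynaptic spike arrived after (off tempo): weaken
            return -A_MINUS * math.exp(dt / TAU_MS)

    print(weight_change(10.0, 15.0))   # pre led post by 5 ms -> small positive change
    print(weight_change(15.0, 10.0))   # pre lagged post by 5 ms -> small negative change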


Right, but first the "recipient" neuron needs to fire, which requires integrated synaptic inputs to cross some threshold, which requires input spikes to arrive close to the same time.

This phase precession mechanism being discussed is what allows inputs arriving from different distances (i.e. with different signal travel times) to arrive close to the same time such that the recipient fires. Once it fires, then "fire together, wire together" can strengthen/weaken the synapses as appropriate.


After reading the article, I don't think that's what they're saying. My understanding is that place cells stand in for a literal location in space, and fire when they detect that location.

The phase precession is how place cell A overlaps firing with place cell B when the individual is moving from location A to B, strengthening the connection (to create a mental link/memory between the two?)


The article does discuss the place cell example (observed in rats), but basically as old news. What seems to be new (at least going by the Quanta article - I don't have access to Cell) is seeing phase precession in humans, and hints of it being a more universal mechanism also used for sequence learning, etc.

In the place cell instance, it seems one effect of this mechanism might be for place cell firing to act as a predictive input for adjacent place cells (a pretty solid real-world prior - you can't be "here" unless you just came from somewhere adjacent!), and another might be to make prior-and-current place simultaneously available, which could be used to learn direction of travel.

If this is a general (or at least widespread) mechanism, not limited to place cell firing, then it opens up all sort of learning possibilities by bringing together (in time) inputs that would otherwise be asynchronous.


I find a lot of the common explanations of how the brain works in neuroscience to be unsatisfying… compared to molecular biology, it just feels like often we don’t have a solid (falsifiable, etc) grasp of what’s actually happening, yet… too much handwaving (for instance, the lack of any specifics on where precisely some data is located… even “it’s stored in the connections” seems not quite true or falsifiable… although perhaps not exactly false, either). So I find discoveries like this exciting as it’s starting to peel back the curtain on a true understanding of the brain and neurons.


> it just feels like often we don't have a solid (falsifiable, etc) grasp of what's actually happening, yet…

It feels like that, because it is like that: there's probably way more which isn't known about the brain yet than things known with a decent amount of certainty. Just skimming over recent papers in the fundamental area, a lot of it summarizes as 'So here we found that area A is connected to area B and modulated when things happen in C, so we attribute function D to it. But way more research is needed, and we're unsure what this means in relation to E and F'.


My benchmark of "this thing is well understood" is that it's possible to build that thing, or a replacement for it, again.

Kidneys probably meet that criterion - we have dialysis machines which allow someone to survive without kidneys almost perfectly.

Yet the idea of replacing someone's brain with something artificial and having them function as normal is still a LOOOOOOONG way off.


> My benchmark of "this thing is well understood" is that it's possible to build that thing, or a replacement for it, again.

Something I've been thinking a lot about lately: Implicit in statements like this is the idea of a system. That some complex-seeming artifact is actually composed of a relatively smaller number of essential things and all of the observed complexity is just emergent properties of the simpler underlying system. Find the handful of hidden rules and you can build back up to the whole thing from first principles.

For example, if you were to learn chess purely by watching people play, it would be a huge struggle at first. Does how they hold the pieces matter? What role does timing play? Why does one player rest their head on their hand while staring at the board? Eventually you start to figure out which actions are essential and which aren't. It doesn't matter where inside a square a piece is placed. All pawns are behaviorally equivalent, etc.

We really really like systems. So much so that we tend to assume everything is one. But I see no evidence to assume that biology and evolution work that way. Evolution is a semi-random walk over the phenotype space, and fitter organisms are discovered (mutated) entirely randomly. It may be that a kidney mostly filters blood, but also does a little of this other thing, and the fact that it pushes your small intestine out of the way is important, and also and also and also...

We can increase our understanding by learning more, but there may simply be no "first principles" for what makes an organism tick and almost all of its complexity may be irreducible. There may be absolutely no separation between "fundamental property" and "implementation detail". It may be that no terms in the grand equation of life cancel out.


Evolution is an incremental mechanism, which is to say that it preserves most of what it has previously done (DNA encoded) and just makes small changes on top of that. IOW it is essentially structure preserving, and any change that undoes anything that still has value is likely to be maladaptive and not preserved (as opposed to repurposing of gills into ears, etc, where the original function is not being used).

Of course evolution is also messy and isn't operating out of a playbook of decomposable single-function parts. Experiments with evolving electronic hardware have resulted in circuits taking advantage of all sorts of nasty non-linear analog effects, as you might expect.

Still, given the inherently incremental nature of evolution, it is highly likely to result in a system of parts operating with some degree of independence to each other. While there are still many aspects of the brain's functioning we don't understand, it's pretty apparent that it is composed of functional parts like this - cortex, hippocampus, cerebellum, basal ganglia, etc.



> but there may simply be no "first principles" for what makes an organism tick and almost all of its complexity may be irreducible. There may be absolutely no separation between "fundamental property" and "implementation detail". It may be that no terms in the grand equation of life cancel out.

But this is NOT true of all biology. I picked molecular biology as an example for exactly this reason. It's driven by evolution with all its messiness, but it DOES have some reducible complexity. There really is DNA that is transcribed via certain molecules to RNA, which is translated by other molecules into protein via a sequence of amino acids coded by the DNA base pairs. There is reducible complexity in spite of the crazy messiness of evolution, and it ends up looking a lot like some of our engineered systems in some instances (i.e. we can use the language of information theory and bits to describe the encoding of genomes). We are able to use this to actually develop vaccines specifically using mRNA as a delivery mechanism, with specific, engineered changes to the encoded viral spike protein to improve the vaccine's effectiveness.

What I see in neuroscience looks a lot like genomics and inheritance before the discovery of DNA. And the insistence that “biological systems are entirely non reducible complexity” feels just a bit too much like a cop-out. This is not magic. If you are a neuroscientist optimistic about the field, then you must also believe there is some reducible complexity in there that will be discovered. I do get the feeling, based on research progress like in this article, that there really are some breakthroughs coming in really understanding what’s going on.


> And the insistence that “biological systems are entirely non reducible complexity” feels just a bit too much like a cop-out.

What insistence do you refer to?


> Kidneys probably meet that criteria - we have dialysis machines which allow someone to survive without kidneys almost perfectly.

This only means we know what kidneys do, not how they do it.

(But probably we do... I am not a biologist.)


There’s a strong bias toward things that are model-able in neuroscience.

The role of microtubules, for example, is mostly ignored even though it may explain the complexity of cognition displayed by relatively "simple" brains.


Fringe Tangent:

It's possible those microtubules in our brains are one-dimensional superconductors, and thus might be capable of holding qubits. We might have quantum memory.

https://arxiv.org/ftp/arxiv/papers/1812/1812.05602.pdf


That's still extremely unlikely given all we know about quantum computers so far, and about biology.

It's also important to note that quantum computers are just faster classical computers, they are still Turing machines. Many people who are looking for some non-computable element of consciousness in quantum effects in microtubules seem to forget that.


Quantum computers are not Turing machines as far a I know.

They can encode exponentially many states simultaneously, and are non-deterministic.


You can simulate any quantum computer on a Turing machine, though it generally takes exponentially more time than with a real QC. Whether this means they are the same thing or not is a matter of semantics.

But what is clear is that there is no computation that can be done on a QC that can't be done on a TM.

The role of non-determinism in computation is more debatable. There is even a notable MIT professor (who coined the term Actor model) who claims that real computers, like the Android phone I'm writing this on, are in fact not Turing machines, because of non-determinism / parallelism. He also claims that Gödel's incompleteness theorem doesn't apply to actual mathematics, so I would take his words with a huge spoonful of salt.


They are; they just don't work on binary bits, they work with qubits, which simply have behavior rules different from digital ones but can be simulated by a digital computer, just inefficiently. Much like a binary computer can be approximated by a balanced ternary computer, or an analog computer, or by pumps, valves and flowing water like in MONIAC. They are all Turing complete and can simulate each other, just not efficiently. Quantum computers don't have any magical logic gates; they are just able to do a task that would otherwise require a lot of time, or many parallel computations, or both, in less time, because a single qubit is in a superposition of 1 and 0 simultaneously and we can do mathematical operations on it without collapsing the superposition until we have observed it. To simulate this on a traditional computer you would, in the worst case, need to run the operation once for every combination of bit values: 2 per bit.

So for an 8-bit key, for example, you would need to run the same operation 256 times, 65,536 times for a 16-bit key, and 4,294,967,296 times for a 32-bit key. So in essence quantum computation just lets you cheat on massively parallel processes, but it is not a hyper-Turing machine or a Turing oracle. In fact there are many algorithms where you get no gain from using qubits other than a higher energy bill.
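(The bookkeeping behind that "2 per bit" can be made concrete: a brute-force statevector simulation of n qubits has to track 2^n amplitudes. A quick back-of-the-envelope sketch, assuming 16 bytes per complex amplitude.)

    # Memory needed just to store the state of an n-qubit statevector simulation.
    BYTES_PER_AMPLITUDE = 16    # one double-precision complex number

    for n in (8, 16, 32):
        amplitudes = 2 ** n
        print(f"{n:2d} qubits -> {amplitudes:,} amplitudes, "
              f"{amplitudes * BYTES_PER_AMPLITUDE:,} bytes")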


Not sure why I am being downvoted here; perhaps someone needs to do some reading.

https://en.wikipedia.org/wiki/Quantum_computing#Computabilit...

https://www.smbc-comics.com/comic/the-talk-3


They are not computably non-deterministic, so they are still Turing machines with a random number generator attached.

Qubits aren't magic either and the amount of data they can store is well known, it's just a complex bit instead of a real one.


> they are still Turing machines with a random number generator attached

It's important to note that probabilistic Turing machines are different from quantum Turing machines in terms of efficiency, as far as it is known today. But still, that doesn't change the fact that they can (inefficiently) simulate each other.


Quantum computers are not just faster classical computers. Quantum algorithms have no classical counterpart. In fact the literal clock speed per quantum computation may be slower, but the computation will still get done faster overall.


But there’s no computation a quantum computer can do that a classical computer can’t. In that sense, it’s still a Turing machine. It can do some class of computation faster (in principle).


Richard Feynman describes such a computation, one that produces incorrect results on a classical computer and the correct one on a quantum computer, in "Simulating Physics with Computers" [0]. He follows one physical experiment, detailing every step from start to finish and how it converges to an incorrect result on a classical computer.

[0] - https://doi.org/10.1007/BF02650179


Thank you for the link! I have also found a free version for anyone else interested: http://physics.whu.edu.cn/dfiles/wenjian/1_00_QIC_Feynman.pd...

However, Feynman does not claim what you say he claims. His whole article is about efficiently simulating QM on a classical computer, and he shows that is not possible given what we understand of quantum probability today, at least (he does this by explicitly asking for a computer which does not grow exponentially in size with the size of the physical system it wants to simulate).

In modern CS terms, what he is discussing is whether the complexity classes BPP and BQP are equal or not (as far as we know today, just as he claims, BPP seems to be a smaller subset of BQP, but this is not proven).

This is all perfectly in line with my claim, and is in fact explicitly there in Feynman's paper: a classical computer can perfectly simulate a quantum system/computer, but requires exponential time/space to do so (as far as we know).


Perfectly except for random number generation right? The randomness of QM can never be decoded or attacked. If we are talking about fudging/"simulating" randomness with classical pseudorandom number generators and relying on computational complexity prohibiting decoding the patterns, and quantum computers are theoretically able to speed up factorization of large numbers, don't we have a problem?

Isn't there a point where we could say we can't really simulate powerful enough quantum computers because they can "decode" the patterns behind any psuedorandom computation so quickly?

Like no one in their right mind would be using classical computers at that point for anything except basic computation and word processing. We might as well say humans with pen and paper can simulate quantum computers. Quantum computer capabilities may outstrip classical computers nearly as much as they do humans with pen and paper.

"Simulation" kind of loses it's meaning when taken far enough.


Random numbers are random numbers. You can calculate the probabilities (the wave function amplitudes, which are deterministic); or you can use any source of random noise to reproduce the data, once you adjust for the difference between quantum and classical probability.


For an extreme example, there are finite patterns behind the lava lamps at Cloudflare or the chaos of Jupiter's storms. These are sufficiently random for any need I can currently imagine, though.

But then I also see that "classical computers can never be built big enough to explore more than 400, actually more than, probably 100 qubits, 100 qubits doesn’t seem like very much. No classical computer can do the calculation of following what 100 qubits do" https://blog.ycombinator.com/leonard-susskind-on-richard-fey...

Are we really in no danger of quantum computers being able to decipher patterns behind these traditional sources of noise? There is no pattern to quantum randomness. Aren't we going to have to switch to truly random sources of noise eventually, instead of pseudorandom ones (anything non-quantum)?


A lava lamp is a chaotic system. The same initial conditions, no matter how precisely measured, will not result in the same outcome, it will diverge. It is non-deterministic in the real world.


Wait, let's be clear. In the real world a lava lamp is basically impossible predict as it evolves. And that is due to chaos. So far so good.

But this chaos is deterministic. Chaos means highly sensitive to initial conditions and involves nonlinearity, but it is still entirely classical and deterministic. In the real world we do not measure things precise enough to keep track of a lava lamp's deterministic evolution, but it is there within the chaos.

So I am wondering if a quantum computer, which Susskind says takes only 100 qubits to outperform any Turing machine ever constructable, may one day do better at picking out the deterministic patterns behind the chaos of things like lava lamps. And if that happens, we may need more extreme versions of chaotic systems to keep secrets. And since quantum randomness is the only true randomness in the universe, forever indiscernible in principle, will one day all deterministic, chaotic means of adding "randomness" be replaced with quantum sources of randomness, due to how powerful quantum computers are?

*Now you could say even the lava lamp involves quantum randomness because everything is ultimately quantum. But because it is so macroscopic, it behaves more classically than a smaller quantum system.


> a quantum computer, which Susskind says takes only 100 qubits to outperform any Turing Machine constructable ever

It's very important to understand that this only applies for a limited set of algorithms. QCs are not universal accelerators. In particular, if picking out these deterministic patterns from the apparent chaos were an NP problem, the QC would be just as slow as any other computing machine that we know so far.

You're also misunderstanding how chaotic systems work. With a chaotic system, even if you know the precise time evolution rules, you're not going to be able to predict the outcome at time T, because a tiny difference in the initial conditions, or a tiny interference from the outside world, will mean vastly different outcomes.

In fact, QCs would be particularly BAD at predicting the outcome of a chaotic system, because QCs can only give answers up to some error bound, unlike classical computers which can perform exact calculations. But the error introduced by the QC itself is probably going to compound the imprecision in the initial measurements of your chaotic system.

One final note that is important to state: the problem with predicting chaotic systems is not physical or computational, it is mathematical. You can have even simple systems whose solution can vary by orders of magnitude more than the variance in the parameters. Solving such a system is easy and fast, but the solution is physically meaningless: a 0.01% error in the measurements can mean that your solution is off by a factor of 100.


If chaos is not a problem for computation, why do we always hear about weather simulations needing better and better supercomputers? If simulating/computing chaos is just about getting precise enough initial measurements, then that wouldn't seem to be needed for weather simulation, right?

I was thinking along the lines of running currently observed data backwards to fine-tuned initial conditions. That must require lots of computational power. Are we sure quantum computers and quantum algorithms won't speed this part up? That has to be somewhat isomorphic to factoring large numbers, where I thought QC does have a potential advantage. But maybe chaos is distinct from factoring; I don't know much about how it would be modeled computationally. I realize most of the battle is getting proper initial conditions. But quantum computing is also coming at a time when quantum sensing is growing. To me, with quantum computing speed-ups and quantum sensors, we have the two ingredients necessary to make progress on chaotic systems: better initial conditions and better computational methods. That was my thought. Sorry for the ramble.


It might be possible to predict the pattern in the lava lamp, but to do so would require cutting it off from the rest of the universe. Light, heat, convection with room air, and variations in local gravity are all going to affect the flow.


>"That's all. That's the difficulty. That's why quantum mechanics can't seem to be imitable by a local classical computer."

I don't think the argument is about efficiency. "A classical computer can perfectly simulate a quantum system/computer" is not explicitly there; it's an argument against that. It seems to me you're saying anything that's not strictly proving BQP > BPP supports something else.


At the very beginning he says:

> The rule of simulation that I would like to have is that the number of computer elements required to simulate a large physical system is only to be proportional to the space-time volume of the physical system. I don't want to have an explosion. That is, if you say I want to explain this much physics, I can do it exactly and I need a certain-sized computer. If doubling the volume of space and time means I'll need an exponentially larger computer, I consider that against the rules (I make up the rules, I'm allowed to do that).

He emphasizes this again in the section about computing the probabilities:

> We emphasize, if a description of an isolated part of nature with N variables requires a general function of N variables and if a computer stimulates this by actually computing or storing this function then doubling the size of nature (N->2N) would require an exponentially explosive growth in the size of the simulating computer. It is therefore impossible, according to the rules stated, to simulate by calculating the probability. [emphasis mine]

So when he uses the term 'computer' he doesn't mean 'abstract Turing machine', he explicitly means 'realizable/efficient Turing machine'.


If we want to reach AGI within the lifetime of the universe, and quantum effects are required for consciousness [within the lifetime etc...], it stands to reason we'd need quantum computers.


Yes, that is true, at least as far as we know today (it's not yet mathematically proven that quantum computers can't be efficiently simulated by probabilistic classical computers, even though we are almost certain of this).


If (a huge if) our memories are stored in qubits, a sufficiently strong magnetic pulse (one much larger than found in an MRI) might be able to erase our brains.

This also has interesting (morbid?) implications for how long our memories last after death.


This is why I come to HN daily. Thanks for the very interesting read.


Microtubules are found in all eukaryotic cells. Even single celled organisms. If microtubules are responsible for cognition, what are brains for, and why aren’t amoeba intelligent?


Have you ever watched an amoeba do its thing? I wouldn't call it cognition, but its behavior is remarkable for something with no neurons. In fact microtubules are the exact structure posited to be responsible for where and how amoebas move their pseudopods.

https://pubmed.ncbi.nlm.nih.gov/7983169/


Amoebas are intelligent. All living things are intelligent.


Reverse lamp post? We don't routinely take MRIs because of false positives, meaning we might have more questions than answers so we won't ask.

Seems like neuroscience is still in the phase of reducing misunderstanding.


I had a crazy idea about neurons that I'll share here, because why not?

What if each time a neuron sends a signal to another neuron the potential required for that connection decreases slightly? So the next time there's an action potential between neurons it's more likely to go where it has already gone. In other words, frequent connections make that same pathway more readily traversed. Could it be that a memory is simply a new signal traveling down a 'worn path'?

I realize that there needs to be some way for this resistance change to be reset over time, so is it possible that dreams serve the purpose of writing somewhat random data, like wear leveling or trimming on an SSD?

This is just pure speculation and I have "Hello World" levels of knowledge about neuroscience. But hey, it's fun to speculate, right?


It sounds like you are essentially describing long-term potentiation (LTP).

See here: https://en.wikipedia.org/wiki/Long-term_potentiation


The catchphrase is "neurons that fire together wire together".


This is a good but old article+paper that has some info on how neural networks are organized and how neurons work in the context of the physiology of the brain (information flow); the columns and layers play important but not well-known roles in this: https://www.frontiersin.org/articles/10.3389/fncir.2017.0008...

Funnily, I think AI is a potentially really good tool for understanding neurology further. There is so much disparate research out there, from the neuron, to the network, to the brain itself; it would be interesting to feed it all into GPT-3, both the research papers and a large compendium of imaging and firing maps, and then ask it to look for connections. I'd be ridiculously interested in working on that (time to get a PhD?).


Definitely check out the work of Donald Hebb https://en.wikipedia.org/wiki/Hebbian_theory


I have a completely layman theory that that's what dreams are: worn-out pathways that our ego or consciousness is not letting through to us, but that are being used by our subconscious, finally being able to manifest themselves. That's why we can see things that are bothering us or are on our mind in the background there.


https://www.biorxiv.org/content/10.1101/2020.07.24.219089v1

It was recently suggested that there is nothing special about dreaming - it's just a confluence of human interpretation with a mechanism to stop you from going blind.


When I went to a therapist and we analyzed my dreams, it got weird how many details and meanings one could find there.


Is this not the idea of a synapse? A pathway that is easier to take because it has been taken before?

I only have an AI understanding here, so this could be too simple.


I wish we had a large research effort on just trying to understand a cpu running windows 98.

E.g. put all of our best analysis tools to the task against an operating Pentium chip and see if we can determine from first principles that it is running a W98 screen saver.

That would give us a small sense of the challenge we are facing.



From my understanding, the current approach is basically the idea that if you put a bunch of neurons in a room, it eventually starts questioning the universe

It really lacks nuance


What if the universe and the room is neurons all the way down?


I mean, we can't really completely inspect a working brain to see what's happening. To get that level of inspection, we'd need to dig into the brain while it is functioning. Unfortunately, this kills the patient. And then the brain stops working. It's a black box essentially.

We have tools that allow us some degree of insight, but honestly, it is incredibly difficult and progress is slow and staggered.


You might want to ask Google about two-photon microscopy links.

Example papers:

https://pubmed.ncbi.nlm.nih.gov/25391792/ -- "Two-photon excitation microscopy and its applications in neuroscience"

https://www.nature.com/articles/s41598-018-26326-3 -- "Three dimensional two-photon brain imaging in freely moving mice using a miniature fiber coupled microscope with active axial-scanning"

Sure, it's localized and you cannot go deep, but there is so much to learn that that is plenty at this point.


So my point remains. I'm not saying there's nothing to learn. I'm saying that we cannot go deep. That we currently cannot understand the brain to the degree we understand other systems. And that there are fundamental difficulties because we're dealing with living beings.


> I mean, we can't really completely inspect a working brain to see what's happening. To get that level of inspection, we'd need to dig into the brain while it is functioning. Unfortunately, this kills the patient

Leave out the "completely" and it's a different story though: it's perfectly possible to 'dig in while functioning', i.e. inspect small parts using electrode arrays, and that will not kill the patient and does only minimal damage (the couple of cells killed by that are nothing in comparison with the entirety). Non-invasive fMRI techniques have also come a long way, but temporal resolution is low. But as you say: difficult, slow, and by no means a 'complete' view. On the other hand, no idea how one would even begin to handle the insane amount of data which would come from inspecting a complete brain. So what goes on now, tackling smaller areas/connections thereof one by one, is not even that bad of an approach.


That's kind of why I said completely.

Our ability to know how a brain works on the level of how well we know, say, the combustion engine works is severely limited by the fact that we're dealing with living beings and that the state of consciousness of the subject matters.


> That's kind of why I said completely.

Sort of, but to people not knowing anything about it it might sound as if it's impossible to do anything at all in vivo so I added some information about what is possible if you do not want a 'complete' recording.


Yup, to understand it we would have to take apart and experiment destructively on a working one, which would be unethical.

Reminds me of an argument I had with my sister, a psych major, about whether psychology is actually a science. Her answer was that it could be, but it would break too many laws and violate human rights, and ethics boards would frown at you for cloning hundreds of humans to raise in identical situations to do A/B testing on.


That's what rats are for. For some experiments the research animal will be immediately "sacrificed", then have its brain sliced and diced for inspection. Brings a whole new meaning to "thank you for your service".


Preprint of the paper "Phase precession in the human hippocampus and entorhinal cortex": https://www.biorxiv.org/content/10.1101/2020.09.06.285320v1....


Motion-detecting neurons in the visual cortex need to use timing; it would be a little weird for evolution to just use that mechanism once and not try it again.


Isn't that the whole idea of Spiking Neural Networks?


This is what my (mediocre) PhD thesis was about.


Reminds me a lot of stochastic computing: https://en.m.wikipedia.org/wiki/Stochastic_computing
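(For anyone unfamiliar: in stochastic computing a value in [0, 1] is represented by the density of 1s in a random bit stream, and multiplication then reduces to a bitwise AND of two independent streams. A toy sketch, with stream length chosen arbitrarily.)

    # Toy stochastic computing: encode values as random bit streams, multiply with AND.
    import random

    random.seed(1)
    N = 100_000   # stream length; longer streams give more precise answers

    def to_stream(p):
        return [1 if random.random() < p else 0 for _ in range(N)]

    def from_stream(bits):
        return sum(bits) / len(bits)

    a, b = to_stream(0.6), to_stream(0.25)
    product = [x & y for x, y in zip(a, b)]
    print(from_stream(product))   # ~0.15, i.e. roughly 0.6 * 0.25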


Neurons might contain something within them - https://news.ycombinator.com/item?id=26838016


I'm merely a casual observer of neuroscience, but I feel like I already knew this. This isn't a huge leap if you accept that Spike-timing-dependent plasticity is happening.


I totally called this!

Watch for firing patterns encoded on mRNA for pattern matching, and to help predict what might come next in a sequence. That was the other part of my theory.


Anybody look at the original paper? I'm not convinced at all by their Fig 2 - that's more a blob than a precession.


Soo... this starts to sound like serial communication. When can we start reverse-engineering communication protocols? :D


Freeman was right after all.


how is this new? spiking thresholds serve a purpose


why is this unexpected?


The usual story is that neurons were initially characterized experimentally using current injections, to which their firing times are (in a certain sense) maximally disordered, and thus the response is characterized only by firing rate.

This idea is also borne out in most real neural systems, where firing rate is well correlated with various sorts of feature presentation.

But at faster timescales other things seem to be going on.


Interestingly enough, a sequence of maximally disordered bits (Poisson-distributed) is exactly what you would expect from a highly efficient code.

So either the brain is entirely random, or so exquisitely determined that we can’t possibly figure out its code from statistics on the bits alone. I put my bets on the latter.
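(One way to see the point about efficient codes looking disordered: an efficient code squeezes out redundancy, so its output is close to incompressible and statistically resembles noise. A quick sketch using zlib as a stand-in redundancy detector.)

    # Structured bits compress a lot; maximally disordered bits barely compress at all.
    import os
    import zlib

    structured = b"spike " * 10_000            # very redundant "signal"
    disordered = os.urandom(len(structured))   # stand-in for an efficient code's output

    print(len(zlib.compress(structured)) / len(structured))   # << 1: redundancy left over
    print(len(zlib.compress(disordered)) / len(disordered))   # ~1: nothing left to squeeze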


Don't get distracted by the light while trying to find out how a neuron learns. Once you get distracted, you stay distracted and fall into a rabbit hole with a plethora of information, but you lose the initial motive.

The shape of the neurons is the memory. A fetal brain doesn't have ups and downs; it is fluidic. As we learn, we get ridges. This is just a fact to show that neurons - the neural fluid (neurons together) - take shape as they learn. Once we establish a simple yet truthy foundation, we can pile things up on it for the missing pieces.



