Is it simulated to the level where you could give it some visual stimulus and observe the actions it is trying to take? Could it be wired to a remote robotic insect and control it in real time?
I've no idea how detailed these simulation projects are, or whether we are months or decades away from doing what I mentioned
>Is it simulated to the level where you could give it some visual stimulus and observe the actions it is trying to take?
No way.
First, this mapping doesn't tell us how the synapses are regulated - if we could 'run' this, the weights would stay fixed forever, and that's not how a brain works.
Second, there must be some neurons dedicated to chemical management, and they'd go haywire unless you found a way to deal with them. It's possible the hardware/software is so intertwined it can't be separated, or maybe it's just complex; regardless, the mapping is of limited use here.
Third, you're assuming that the synapses/axons are the only thing that matters. It may well be this is true, but having other processes being involved has not yet been entirely ruled out. If they are, the mapping is incomplete.
Lastly, we don't have the computational ability just yet to simulate even the mapping itself.
For some more context / concrete links: there are ongoing efforts to simulate the C. elegans worm, e.g. https://openworm.org/ , which has ~300 neurons.
The actual precision of this model is: nobody knows, because nobody knows precisely what neurons do / what they react to. We know some of it but definitely not all. But, simulating what we do know, you get quite worm-like behavior, despite whatever flaws exist.
To get a more perfect simulation, we'd need more perfect knowledge of the chemistry and physics, and lots and lots more computing power. It's something that's continually improving, but a lot of shortcuts have to be made to make it even remotely calculable. That'll always be the case, physics is simply too complicated to both efficiently and accurately simulate.
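To get a concrete sense of what "simulating what we do know" with heavy shortcuts looks like, projects in this space lean on drastically simplified point-neuron models. A minimal leaky integrate-and-fire sketch (all constants illustrative, not fit to C. elegans data):

```python
# Minimal leaky integrate-and-fire (LIF) neuron: the kind of drastic
# simplification used when "running" a connectome. All constants are
# illustrative, not biological measurements.

def simulate_lif(input_current, dt=0.1, tau=10.0, v_rest=-65.0,
                 v_threshold=-50.0, v_reset=-70.0):
    """Return spike times (ms) for a given input-current trace."""
    v = v_rest
    spikes = []
    for step, i_in in enumerate(input_current):
        # Membrane potential leaks toward rest and integrates input.
        dv = (-(v - v_rest) + i_in) / tau
        v += dv * dt
        if v >= v_threshold:        # threshold crossing -> spike
            spikes.append(step * dt)
            v = v_reset             # hard reset after the spike
    return spikes

# 500 ms of constant drive produces a regular spike train.
spikes = simulate_lif([20.0] * 5000)
```

Everything below the spike - channel kinetics, neuromodulation, gene expression - is exactly the kind of shortcut being discussed.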
An average cell contains an estimated 100 trillion atoms. I’m guessing that “just” simulating elegans’ 302 neurons at atomic, not to speak of quantum, precision is not within our compute capabilities yet. Until then, we’ll have to content ourselves with rather crude approximations of simulations. I’m not entirely convinced, on a gut-feel level, that the current GPT (multimodal or not) AI will necessarily lead to AGI when we don’t even know how the biological equivalent works down to the molecular chemistry level. Though I really hope I’m blind to a possibility and we hit the friendly Culture Minds-like AGI jackpot without having to first understand the biophysical mechanics of our own biological sapience.
I would guess that, if we ever manage to make intelligence in silicon, we will know how every detail works, but not why the entire thing is intelligent.
OpenWorm is a bad model. To give a computer analogy, you're simulating highly optimized low-level code for the Apple ][ that uses all the little hardware-specific tricks and timings to work.
A fruit fly brain is a better model to simulate in this regard. It's much more "generic", so there's a hope that we can recapture high-level behavior from it more easily.
> A fruit fly brain is a better model to simulate in this regard. It's much more "generic", so there's a hope that we can recapture high-level behavior from it more easily.
This is a completely baseless assumption. It is also moot, since simulating even 300 neurons is beyond our current computational power; simulating 3000 is not going to be possible in the foreseeable future.
What makes simulating 300 that computationally heavy?
(Doesn't sound like many? I've read that one single neuron needed 100 or so nodes in a deep neural net to simulate, but 300*100 also doesn't sound like many nodes.)
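For what it's worth, the arithmetic in that parenthetical checks out, taking the ~100-units-per-neuron figure at face value (it's a rough number from one study, not a settled constant):

```python
# Back-of-envelope: if one biological neuron needs ~100 artificial units
# to approximate (a rough figure, not a settled constant), a 302-neuron
# connectome is still a tiny artificial network by deep-learning standards.
neurons = 302                 # C. elegans neuron count
units_per_neuron = 100        # the cited approximation
total_units = neurons * units_per_neuron
print(total_units)            # 30200 units
```

So the unit count itself is not the bottleneck; the replies point to everything around the neurons instead.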
The problem is not the neural simulation, but that we need to simulate pretty much all of the worm. And we currently can't do that.
The neurons in the worm directly interact with muscle cells, for example via peptide/monoamine signalling, or with "remote" neurons.
This is all possible because the worm's neural network is directly integrated with everything, and the connectivity is hard-coded in its genome.
It's not true for more complex organisms, simply because you can't encode that kind of complexity in DNA. Instead it encodes the overall connectivity structure, so that emergent behavior provides necessary instincts.
Massively simplified models exist, and yeah - we have artificial neural networks FAR larger than that and they run just fine. But they're so over-simplified that it's fair to call them something else entirely, not a simulation. You can use them to create similar behavior, but they're just following the basic concept of a brain, not how brains actually work. Inspired by a real thing, not actually mimicking a real thing.
Kinda like how a door hinge is not your knee, even though they both bend. It's not a knee simulation, it's just something that bends. For some things (doors) that's perfectly fine, and there are billions of them in use. You can't transparently replace your knee with one though, it has biology-juice all over it and needs to handle very different behaviors at times.
In most cases the vague answer is "how closely can it recreate the end result". Because that's a useful measure.
For brains: we have no idea. Every brain is somewhat different, but we only have a couple complete physical layout maps so far. No "weights", no specific chemistry per cell, no knowledge of what may have changed before it could be frozen for imaging.
With a lot of work and a lot of extra physical and chemical simulation (a full body in quite a lot of detail), based on that map and what we know of the rest of its body, openworm achieves wormy wriggling.
That's honestly pretty good! And it implies that the connection structure is at least coarsely meaningful in an extremely simple mind, which is what many have suspected but have had no way to verify before. But we know next to nothing about how that individual worm's brain behaved compared to the simulation. The fly brain will be similar - we'll have a bigger and more complex brain to run, more complex behaviors to compare against other flies, etc. It'll provide more evidence in favor of or opposed to the importance of that structure, compared to other things that we don't have enough information on yet.
Basically it's still extremely early days, so it's sorta like asking Stonehenge astronomers whether or not our supernova-brightness-based measurements of the size of the universe are accurate. They can see supernovas too, but their answer has to be "uh. maybe? give us a few thousand years to research it". We need more data and better tools than currently exist.
Neurons and synapses are incredibly complex. Think of it as emulating over 3,000 heterogeneous cores with over half a million links, where communication must be low-latency-ish. A third of the links seem to join other links and we don't even know what that does. If there's computation there, we'll need even more cores.
I think you may be overestimating the complexity. The better idea is to set up and experiment with different simulation parameters, and see how far they diverge from actual observed behavior.
Seems two different things are being discussed, sort of top-down, vs bottom-up modelling.
I'm sure we're not far from making a high-level LLM-ish model of behavior based on those extensive studies.
But the topic of discussion is not that, but making a model sufficiently accurate that you "turn the crank" and it yields similar behavior without any priors of what that behavior should be.
To do that, at a minimum, we'd need, for each neuron: the profile of responses to each neurotransmitter at each synapse; the excitatory/inhibitory effects of each signal; the patterns of how each neuron reacts to those inputs (i.e., receiving a signal from upstream neuron 489327 does not mean that it'll just pass it downstream - it'll decide, depending on rate of firing, other current excitatory/inhibitory inputs, etc., if and at what rate it'll send the signal downstream); the rate of learning in each of those neurons... and a bunch of other variables, fully modeled.
Then, compute all of those running through the system, and have it take an input like a photo and output the same behavior, from the bottom up, without hints from the behavioral studies.
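As a sketch of the parameter burden that list implies, even a toy per-synapse model carries several unknowns per connection. Every field and number here is hypothetical, standing in for quantities that would have to be measured:

```python
from dataclasses import dataclass, field

# Toy illustration of the parameter burden described above. Every field
# is a quantity that would have to be measured per synapse or per neuron;
# none of these numbers come from real fly data.

@dataclass
class Synapse:
    weight: float            # excitatory (+) or inhibitory (-) strength
    transmitter: str         # which neurotransmitter this synapse uses
    learning_rate: float     # how fast the weight adapts

@dataclass
class Neuron:
    threshold: float
    inputs: list = field(default_factory=list)  # list of (Synapse, input rate)

    def output_rate(self):
        # Firing rate depends on the weighted sum of input rates, not on
        # simply relaying any single upstream spike downstream.
        drive = sum(s.weight * r for s, r in self.inputs)
        return max(0.0, drive - self.threshold)

n = Neuron(threshold=1.0)
n.inputs.append((Synapse(0.8, "acetylcholine", 0.01), 5.0))   # excitatory
n.inputs.append((Synapse(-0.5, "GABA", 0.01), 2.0))           # inhibitory
rate = n.output_rate()   # 0.8*5 - 0.5*2 - 1.0 = 2.0
```

Multiply those per-connection unknowns by 548,000 synapses and the knowledge gap, rather than the compute, dominates.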
> But the topic of discussion is not that, but making a model sufficiently accurate that you "turn the crank" and it yields similar behavior without any priors of what that behavior should be.
Change the expression levels of some genes, alter an amino acid here or there, change the input parameters (e.g. make a phenotype that lacks the ability to feel pain), and you can end up with a neuron network that responds subtly, or grossly, differently than another. Put memory into the picture and responses can be learned. Fruit fly behavior permanently changes in response to serious injury, similar to the manner in which humans experience chronic pain. Sure, now this is something that can be modeled, but before the experiment it wouldn't have been modeled.
> After the injury healed, they found the fly's other legs had become hypersensitive. "After the animal is hurt once badly, they are hypersensitive and try to protect themselves for the rest of their lives," said Associate Professor Neely. "That's kind of cool and intuitive."
> "The fly is receiving 'pain' messages from its body that then go through sensory neurons to the ventral nerve cord, the fly's version of our spinal cord. In this nerve cord are inhibitory neurons that act like a 'gate' to allow or block pain perception based on the context," Associate Professor Neely said. "After the injury, the injured nerve dumps all its cargo in the nerve cord and kills all the brakes, forever. Then the rest of the animal doesn't have brakes on its 'pain'. The 'pain' threshold changes and now they are hypervigilant."
Edit: I got confused and didn’t write this comment in the right place.
—-
I somewhat agree with how you described it as top-down vs bottom-up. I think it’s not exactly how I was framing it, but it’s close enough, and it’s a useful way to think of it.
Even in the rest of your comment you’re taking a bit more of a bottom-up approach relative to what I’m saying: you’d be surprised how much we know about how the brain’s gross organization leads to complex phenomena (pick up the latest edition of Blumenfeld’s clinical neuroanatomy if you want the very-high-level summary).
You can, in principle, achieve a “broadly correct” outcome by doing tissue-level modelling of NNs. It’s surprising how much of the brain is macro components, as opposed to micro, cell-level processing. (Of course I’m handwaving a lot here. I’m afraid anything short of a concrete demonstration is bound to be unsatisfying.)
And yes, modeling at the higher functional level can be very useful: knock out Wernicke's area and speech goes; knock out the visual cortex and vision goes; etc. So, with a more detailed functional description of each level, we may wind up with a model with useful predictive value.
Though, that said, how does this approach create a truly robust abstraction over the lower-level wetware? Would it provide the ability to fully reproduce computations? Would it account for lower-level changes in health, hormones, electrolyte levels, neurotransmitter-active drugs...?
This is a little handwavy, but consider that, in the context of modelling a human brain:
- If you model top-down (at the tissue/functional level, like you suggest), you'll be recreating the "broadly correct" kind of computation - something that looks and feels like a really, really bad, quirky, and dumb human brain, but with certain human-like qualities that are maybe even difficult to pin down. It would possibly make a ton of mistakes, but they'd be human-like biases and mistakes.
- If you throw billions of neurons into a bag, you may be able to train them to perform calculations with a high degree of correctness (e.g. ChatGPT, generative art, modern ADAS systems), but when these make mistakes, they will look extraordinarily stupid to a human (e.g. "a human would never have suddenly steered his car into a brick wall like that").
Both approaches can produce extremely stupid results, but you need the top-down architecture if you want to preserve what makes "the human flavour of intelligence" what it is. (I suppose you could emulate the same result with a big enough bag of neurons, but that sounds very inefficient to me, intuitively.)
---
Depending on what you're interested in modelling, you may need to combine multiple approaches, as the brain has multiple layers of emergent properties. I don't think that for most purposes you'd need to go as far as modelling blood contents, but something like it might be required if "embodiment" was an important part of what you'd like to model. There are certain types of things that biological organisms learn particularly fast because they have a physical body that interacts with the real world.
I don't personally believe embodiment is fundamentally required in the model (ie.: I think it's probably possible to emulate the same result if you use a sufficiently large number of neurons), but I think realistically it will be a practical necessity for keeping models and computations as efficient as possible.
What do you actually work on? I'm a synthetic biologist and know a hell of a lot about manipulating pieces of DNA. I know a bit about other branches of lab and academic biology. And I can remember some of the chemistry classes I've taken, usually a bit after the fact.
Are you saying this because you've spent time reading and/or researching fruit fly behavior studies? Or for some other reason?
I currently work as a software/data engineer, but spent 12 years doing fundamental biochem & life science. My MSc was on the characterization of extracellular vesicles in the context of horizontal transfer of information in ccRCC (kidney cancer) cell lines.
I worked closely with one prof (not my PI) who specializes in the study of mitochondrial metabolism in fruit flies, but fruit flies are not my area of expertise. I know just enough to know that there’s an enormous amount of literature on the subject.
To simulate something you first need to know how it works and all the relevant interactions, and that is something we currently do not understand. Neurons are not equivalents of the neural networks we use in computers; they are A LOT more complex, with whole groups firing at the same time, chemicals regulating neurons all the time, and a topology that is plastic. It works in a way we can't model or simulate currently. People hugely overestimate our knowledge about brains.
I'm a biochemist, and I somewhat disagree with this take.
Yes, it's true that modelling just a single cell's interactions with its environments is beyond our capacity.
But here we're talking about simulating how a brain reacts to signals at a higher level of abstraction: we're studying an "emergent" phenomenon. We don't need to model molecular interactions, and we absolutely can model this using artificial neural networks.
This doesn't mean we'll be able to completely accurately model its behaviour, but we should get a lot closer than many HN commenters seem to believe. Biological neurons don't have magical properties, they just have more side-effects.
I specifically used "relevant interactions" to rule out "modelling just a single cell's interactions with its environment", but who knows how deep you need to go to have an accurate model.
> we're studying an "emergent" phenomenon
It's like saying that mapping all the cells in a human organism to addresses in memory will give us an emergent human inside the computer.
But even at the abstract layer we just do not have a model of a working brain; we only have maps of neurons, not an actual "algorithm", a model of all the relevant environment interactions needed for a working brain, or a set of "abstract instructions" needed for it to work.
I think you are over-optimistic about the current state of our knowledge if we can't even model an organism with a few thousand neurons accurately. Imagine trying to do that for tens of billions of neurons and trillions of synapses, especially since we know the brain doesn't work like an ANN at all[1].
> It's like saying that mapping all cells in human organism to addresses in memory will give us emergent human inside computer.
Sounds more like they're saying: a knee is a knee, it doesn't have any magical properties. Build something that bends, and it will behave like a knee. I don't know how true that is, obviously.
I guess the point is that we might be able to simulate an artificial knee equivalent for neurons. It won’t be as good as a real knee, but it might still be useful.
I’d argue they’re better (in some ways), given that they’re “dead” material as opposed to functional tissue. Human knees don’t last nearly as long when the tissue is dead.
Lol. That's an interesting definition of "better".
If you're active, knee implants have to be replaced after 15-20 years, because they detach from the bone. Knee implants can harbor pockets of infections because they don't have an immune system. They are also more prone to dislocations.
They are objectively inferior to a healthy organic knee.
Yes, my comment was made a bit in jest. I think everyone will agree that after an arthroplasty, your quality of life is not as good as it was before.
But most of the reasons why artificial knees are not as good as the real deal have to do with the fact that they're made of inert (albeit fancy) materials. They don't have the ability to continually heal and do tissue remodelling, which is what real tissue does.
I feel very optimistic when I think about this: we're limited, but I think it's absolutely wonderful what we're able to do.
Wasn't there success in replicating how a neuron modulates a signal with electronics? If we can reproduce the output given any input, then isn't that interchangeable from a systems perspective?
Synapses of neurons change over time due to synaptic plasticity. This process is responsible for allowing learning and memory. Synaptic plasticity occurs through a combination of changes in the number of synapses and the strength of those synapses. So these brain maps are like a single frame still image frozen in time taken from a video of the changing connectome.
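To make the "still frame" point concrete: a connectome snapshot gives you fixed weights, while even the simplest Hebbian-style rule (a toy sketch, not a biological model) changes them on every co-activation:

```python
# Toy Hebbian update: a weight strengthens when pre- and post-synaptic
# activity coincide. A static connectome snapshot freezes `w` forever;
# a living brain runs updates like this continuously. Numbers illustrative.

def hebbian_step(w, pre, post, lr=0.1, w_max=1.0):
    """One plasticity step: dw is proportional to pre*post activity."""
    w += lr * pre * post
    return min(w, w_max)   # crude saturation so the weight stays bounded

w = 0.2
for _ in range(10):                 # repeated co-activation...
    w = hebbian_step(w, pre=1.0, post=1.0)
# ...potentiates the synapse toward its ceiling.
```

Real synaptic plasticity also adds and removes synapses outright, which no fixed weight matrix captures at all.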
The first thing you said is correct. To do a proper simulation you would need to gather functional properties of the various cell classes and their synaptic connections, which this study didn't do. (Maybe you can find that information from other labs; I'm not familiar with fly models.)
However, we definitely have the computational ability to do simulations of a fly network. Look at some of the modeling done by the Blue Brain Project or the Allen Institute for Brain Science - they do simulations of rat and mouse models with hundreds of thousands to millions of neurons and orders of magnitude more synapses. 3,000 neurons is not that many. If you stuck to non-compartmental point models, a 3,000-neuron simulation could probably be run on a moderately high-end laptop.
But as said before, the physical connectome is only part of the information you'd need to do any worthwhile simulations.
If you're lucky, the system you're trying to simulate has good in-vivo recordings. That way you can compare the in-silico models directly to the real ones using firing rates, LFPs, or other neuronal dynamics. Unfortunately, most of the time that isn't the case.
Trying to simulate the 3,000 cells and 500K connections of the fly brain is not a computational problem, it's a knowledge one. If you can find functional properties to build a spiking/rates model, and data to compare it to, then it would be feasible (although a lot of work) to build and run simulations on the model. But without that extra info, using only the physical connectome, there would be very little reason to try.
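To back up the "moderately high-end laptop" claim: one timestep of a 3,000-neuron point-model network is essentially a matrix-vector product. In this sketch the weights are random (with sparsity chosen to land near the fly's ~548K connections), precisely because the real functional parameters are the missing knowledge:

```python
import numpy as np

# One timestep of a 3,000-neuron leaky integrate-and-fire network is just
# a matrix-vector product plus elementwise updates. Weights are random:
# the real functional parameters are exactly what the connectome lacks.
rng = np.random.default_rng(0)
n = 3000
# ~6% connectivity gives on the order of the fly's ~548K synapses.
w = rng.normal(0.0, 0.5, size=(n, n)) * (rng.random((n, n)) < 0.06)
v = np.full(n, -65.0)  # membrane potentials (mV); constants illustrative

def step(v, spiked, dt=0.1, tau=10.0, v_rest=-65.0, v_th=-50.0, v_reset=-70.0):
    i_syn = w @ spiked                                  # input from last spikes
    v = v + dt * (-(v - v_rest) + i_syn + 17.0) / tau   # leak + constant drive
    spiked = (v >= v_th).astype(float)                  # threshold crossings
    v = np.where(spiked > 0, v_reset, v)                # reset spiking cells
    return v, spiked

spiked = np.zeros(n)
for _ in range(100):   # 10 ms of simulated time; runs in well under a second
    v, spiked = step(v, spiked)
```

The compute is trivial; turning random weights into the fly's actual functional properties is the part nobody can do yet.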
We can model some aspects essentially completely - that's basically what this map covers, the "obvious" physical connections. Simplified forms of this can probably be simulated very very quickly. Sometimes that's sufficient.
It's not the complete picture though, normally that brain would be in an ever-changing soup of chemicals, which definitely impact behavior... somehow. Simulating that, and even knowing what might be relevant to simulate, will never be complete. Only incrementally better than previous attempts.
The level of detail in insect brain simulations varies, and it may be challenging to simulate it to the level of reacting to visual stimuli. Neural interfaces have been successful in controlling robots, but real-time processing and precision remain a challenge.
Man... I just find neural connectomes so depressing.
It's like looking at the copper wiring on the motherboard, or the pins of the CPU, when what you really want is the logic from the networked gates (transistors).
Yet it seems we are many, many decades away from being able to extract that in any comprehensible or definitive way.
I need to stop reading neuroscience articles. There are always big proclamations, like "the neural circuitry behind arithmetic has been discovered!", then you dig into the meat and it's mostly guesswork and hypotheses based on correlated activity and connectivity, with no logic to be seen.
This paper did blow my mind though, I hope to see more creative stuff like it:
> It's like looking at the copper wiring on the motherboard, or the pins of the CPU, when what you really want is the logic from the networked gates (transistors).
and what you kind of really want is a debugging guide to an OS.
Yeah all that criticism doesn't go into any detail about actual methodological flaws or issues with the results... It just complains about language and is pretty sanctimonious for such weak and generic citations. Like, those are the sort of citations I'd give as an undergrad and trying to pad a paper to make it seem more authoritative and well established than it is lol.
Were any of the criticisms NOT centered on their irresponsible use of language, and instead about the actual methodology and results? How they cultured different neurons to play Pong is pretty amazing by itself to me.
Science and Nature are the canonical exaggerators when it comes to headlines. Whole genome, whole mitochondria, complete, total, you name it, there is (nearly?) always an invisible asterisk at the end. When these words are used, the Materials and Methods section tells you what actually happened.
What's completely crazy in this research is the ability to thin-section fly brains. Thousands and thousands of slices _of a fly brain_, good old physical science at the heart, crowd-sourced to connect the dots (though I'm not positive that's the case in this paper). The open-hardware imaging tools used in some studies like this are also super cool- https://openspim.org/.
Well, granted, they're not solving quadratic equations, but neither are the adults.
For instance, if you expose larvae to a rewarding stimulus like sugar along with an odour, they will later be attracted to that odour. That is by definition learning - simple learning, but we have to start somewhere, I guess.
Interestingly there is some evidence that the memory lasts through to the adult stage, despite the fact that a lot of the brain is actually rewired during pupation.
I intuitively think that there is something similar in kind between a fruit fly brain and a human brain regardless of the scale, and that cracking the smaller case will somehow lead to cracking the larger one. However, rather than hope, it leads me to a negative conclusion. That is, even at orders of magnitude smaller scales, and with full knowledge of how all the gross bits are connected, we still won't be able to understand what actually happens and why it works.
I suspect this will be the case because the old microscopic will become the new macroscopic and we will realise that there is yet orders of magnitude more details to make sense of.
"the old microscopic will become the new macroscopic and we will realise that there is yet orders of magnitude more details to make sense of."
That's already the case.
Below the neuron level is the molecular level.
Below the molecular level is the atomic level.
Below the atomic level is the subatomic level.
The last of these is still in the process of being explored and understood by physics, and there it might not even be possible to measure all there is at that level, much less make sense of it or adequately model it.
As Roger Penrose famously pointed out, events at the subatomic level might be critical for consciousness, and we're very far from modeling even all the molecular interaction that happen in a human brain, nevermind the atomic or subatomic interactions.
"With 3,016 neurons and 548,000 connections, called synapses, the result is by far the most complex map of a whole brain ever made."
"Researchers have also mapped 25,000 neurons and 20 million synapses in the brain of an adult fruit fly, but this is still just a partial [map]"
"Human brains have an estimated 86 billion neurons and hundreds of trillions of synapses"
The scales at play here are hard to imagine. This is very interesting, but the most striking facet seems to be the completeness, not just the absolute scale.
If you map every single neuron, at what point is it no longer just a 'model' of reality, and is reality? How do we know that the computer model of neurons does not have the same internal awareness as the real neurons?
This article overstates it. This mapping is not as detailed as what was done with the nematode worm Caenorhabditis elegans, where they mapped the entire nervous system.
Guess I need to find the source study. This high-level article makes it sound like an actual map, not an image, based on the same type of map that has been made of the nematode.
From the article:
> Now, researchers have constructed a detailed map of the neurons and the connections between them in the brain of a larval fruit fly. With 3,016 neurons and 548,000 connections, called synapses...
It's just a graph of connections with neurons as nodes and synapses as the connections. Poorly translated as a "map" in reference to the historical studies that have tried to "map out" the brain wiring.
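In other words, the published "map" reduces to a weighted directed graph, something like this (identifiers made up; the real dataset has 3,016 nodes and ~548,000 edges):

```python
# The connectome as a plain directed graph: neurons are nodes, synapses
# are weighted directed edges (weight = synapse count between the pair).
# Identifiers here are invented for illustration.
connectome = {
    "n001": {"n002": 12, "n003": 3},   # n001 synapses onto n002 (12x), n003 (3x)
    "n002": {"n003": 7},
    "n003": {"n001": 1},
}

def out_degree(graph, node):
    """Total outgoing synapse count for one neuron."""
    return sum(graph.get(node, {}).values())

d = out_degree(connectome, "n001")   # 15
```

A structure like this tells you who talks to whom, but nothing about what any of those conversations mean.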
> if you map every single neuron, at what point is it no longer just a 'model' of reality, and is reality
If you map the position of every fish in the sea, will you have a model of a living sea with living fish and all their interactions (electrical/physical/chemical etc.)? Or just the positions of fish in the sea at a certain point in time?
I think with all of these examples the difference is whether it executes. A computer model can be encoded, put in a loop, and executed. So what is the difference between biology, where neurons fire based on the electrical potential of calcium ions, and a computer loop processing math? Both fire. Does the computer model, while executing, have an internal subjective experience, the same as a human with their electrical signals firing? Where in the brain is our own awareness? If it's just the 'firing' of neurons, then a computer model of that firing can have the same awareness. You can't come up with any argument that computers don't have awareness that doesn't also exclude humans. It is arguable that human awareness is just a byproduct of our processing.
That would require knowledge of how detailed you have to get before going deeper stops altering the performance. I suspect there is no magic line that becomes sapience once you cross it; there's just a smooth slope upwards. Someday we'll have to decide standards of measurement that separate the living from the dead.
Subjective experience of the world around us. The one thing I'm 100% sure I do have, and it would require very special pleading (or solipsism) to insist other humans are anything other than almost certain to share that trait.
My suspicion is that there is some threshold of cerebral complexity required for it to occur, and is pretty unlikely to apply to all animals with central nervous systems. Whether insect brains are sufficiently complex I really couldn't say, nor am I sure what test could be done to determine it (though one day we may have enough understanding to do such a test).
Is it reasonable to expect that, as this science progresses, it will map out progressively more complex connectomes? The connectomes of nematodes and sea squirts have only a few hundred neurons each; a fruit fly larva has 3,016 neurons with 548,000 synapses; humans have 86 billion neurons with hundreds of trillions of synapses. Is the expectation that brain maps will progressively be made of animals that have:
Machine learning has much more humble goals: fitting a bunch of data to a curve.
Despite all the hype around chatGPT, I have yet to see any model that asks me a question (without being programmed to do so). Today, my son asked me out of the blue: "Why do people write on paper?" and "What are our walls made of?" and "Why don't we paint our house yellow?". I don't care to live extraordinarily long, but I'd give my right arm to have a quick peek into the future just to see how much of the brain's underlying mysteries will be decoded in, say, 100 or 1,000 years.
>Engelbart once told me a story that illustrates the conflict succinctly. He met Marvin Minsky — one of the founders of the field of AI — and Minsky told him how the AI lab would create intelligent machines. Engelbart replied, "You're going to do all that for the machines? What are you going to do for the people?" This conflict between machine- and human-centered design continues to this day.
Numenta tries to do something like that: reverse engineer the neocortex (so not a fly's brain). It does have some insights but I think it's still a long way.
Machine learning? We'd be better off with way less technology in general... if we manage to survive tech induced climate change and/or potential nuclear annihilation without billions dying that is...
There are billions of people alive just now because of technology. Poverty is rapidly being eradicated with the help of technology. Child mortality in the third world is rapidly going down due to the spread of technology and the knowledge how to use it. Food production is at an all-time high with the help of technology. The planet is far greener with a far larger forest cover than it was a century ago both due to technological developments which made agriculture far more productive and due to the increased CO₂ concentration in the atmosphere. The potential for nuclear annihilation has kept the (first?) cold war from going hot [1], providing a rare period of detente/relative peace in Europe. This now seems to have come to an end so it is to be hoped that rational minds will prevail where it comes to the use of nuclear weapons as they did for the last ~80 years.
Technology has its downsides as well but often it is possible to use more technology to solve problems caused by the use of technology, e.g. filters on smokestacks, fast breeder reactors to solve the problems with nuclear waste, etc. In other words there is a need for more technology, not less. It is also necessary to make technology accessible to more people than it currently is, not by handing out widgets to "the poor" but by enabling those with the will and the capacity to build up their own capacity to produce and use it. It is technology which will solve any problems caused by a changing climate, whether that be a rise or drop in temperature and the resulting effects on precipitation. It is technology which will keep the population from ever increasing to the breaking point simply because people feel the need to have many children since so many of them do not survive past their early years.
[1] ...although there have been plenty of proxy wars in Africa, Asia as well as South-America
> The connectome of the larval fly, published Friday in the journal Science, took 12 years to complete. Imaging a single neuron required about a day...
> Human brains have an estimated 86 billion neurons and hundreds of trillions of synapses...
So, the techniques used for the fly are totally impractical for humans, or presumably any mammal. Anyone know of any developments that may help? Maybe AI could be used to automate processing of the images?
Do we need to image all 86 billion? A sampling of cells throughout the brain, plus a simulation of a cell detailed enough to figure out how one fertilized egg can become 86 billion neurons, might be enough. We can throw compute power at that; we don't really have the brains to throw at the problem.
The sell with this kind of research is that doing it will lead to order-of-magnitude gains in the complexity we can handle - compare the time it took to sequence the first fruit fly genome with the time it takes today.
> But the study revealed that a third of the connections in the fruit fly’s brain did not follow this pattern—they were between two axons, between two dendrites or from a dendrite to an axon.
Interesting. Does this extend to humans? Does it offer a plausible biological mechanism for backprop?