It's nice that someone now has the neural wiring diagram for part of a mouse brain. But we already have the full wiring diagram for a thousand-cell nematode, and OpenWorm still doesn't work very well.[1] OpenWorm is trying to build a neuron- and cell-level simulator of what's close to the simplest creature with a functioning nervous system: 302 neurons and 95 muscle cells. That needs to work before moving to more complexity.
> That needs to work before moving to more complexity.
It really depends on what level of abstraction you care to simulate. OpenWorm is working at the physics and cellular level, far below the concept level at which most deep learning research applies neuroscience findings, for example. It's likely easier to get the concepts of a functional nematode model working, or a functional model of memory, attention, or consciousness, than a full cellular model of any of these.
More specifically, a thousand cells sounds small in comparison to a thousand-layer ResNet with millions of functional units, but the mechanics of those cells are significantly more complex than a ReLU unit's. Yet the simple ReLU units are functionally very useful and can do far more complex things than we can currently achieve with spiking neurons.
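To make the complexity gap concrete, here is a minimal sketch contrasting a ReLU unit with a leaky integrate-and-fire (LIF) neuron, one of the *simplest* spiking-neuron models. All parameter values are illustrative, not taken from any particular study.

```python
def relu(x):
    # A ReLU "neuron": one comparison, no internal state.
    return max(0.0, x)

def lif_step(v, i_in, dt=1.0, tau=10.0, v_thresh=1.0, v_reset=0.0):
    # One Euler step of a leaky integrate-and-fire neuron: the membrane
    # potential v leaks toward rest, integrates input current, and emits
    # a spike (then resets) when it crosses threshold.
    v = v + dt * (-v / tau + i_in)
    spiked = v >= v_thresh
    if spiked:
        v = v_reset
    return v, spiked

# The ReLU is memoryless; even this toy LIF unit carries state across
# time steps, and a biological neuron layers ion channels, dendritic
# trees, and chemical synapses on top of that.
v, spikes = 0.0, 0
for _ in range(50):
    v, spiked = lif_step(v, i_in=0.15)
    spikes += int(spiked)
```

Even here, the LIF unit is a drastic simplification of a real cell, which is the point: the unit count alone says little about simulation difficulty.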
The concepts of receptive fields, cortical columns, local inhibition, winner-take-all, and functional modules, and how they communicate and are organized, may all be relevant, applicable lessons from mapping an organism, even if we can't fully simulate every detail.
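As one illustration of the kind of concept-level mechanism meant here, a winner-take-all circuit can be sketched in a few lines; lateral inhibition lets the strongest unit suppress the rest. This is a toy sketch of the idea, not a claim about any particular cortical model.

```python
def winner_take_all(activations):
    # Lateral inhibition taken to the extreme: keep only the strongest
    # unit's response and silence every other unit.
    winner = max(range(len(activations)), key=lambda i: activations[i])
    return [a if i == winner else 0.0 for i, a in enumerate(activations)]
```

The functional idea survives even though every biological detail (inhibitory interneurons, timing, neurotransmitters) has been abstracted away.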
The trouble is that (assuming sufficient computational power) if we can't simulate it then we don't really understand it. It's one thing to say "that's computationally intractable", but entirely another to say "for some reason our computationally tractable model doesn't work, and we don't know why".
Present day ANNs may well be inspired by biological systems but (as you noted) they're not even remotely similar in practice. The reality is that for a biological system the wiring diagram is just the tip of the iceberg - there's lots of other significant chemical things going on under the hood.
I don't mean to detract from the usefulness of present day ML, just to agree with and elaborate on the original point that was raised (ie that "we have a neural wiring diagram" doesn't actually mean that we have a complete schematic).
I'm aware of that and I've done quite a bit of work on both spiking neural networks and modern deep learning. My point is that those complexities are not required to implement many important functional aspects of the brain: most basically "learning" and more specifically, attention, memory, etc. Consciousness may fall into the list of things we can get functional without all of the incidental complexities that evolution brought along the way. It may also critically depend on complexities like multi-channel chemical receptors but since we don't know we can't say either way.
It's a tired analogy but we can understand quite a lot about flight and even build a plane without first birthing a bird.
> It's a tired analogy but we can understand quite a lot about flight and even build a plane without first birthing a bird.
The problem is we don't know if we're attempting to solve something as "simple" as flight with a rudimentary understanding of airflow and lift, or if we're attempting to achieve stable planetary orbit without fully understanding gravity and with a rudimentary understanding of chemistry.
I think it's still worth trying stuff because it could be closer to the former, and trying more stuff may help us better understand where on that spectrum it actually is. And if it's closer to the harder end, what we're doing now is probably so cheap and easy compared to what will eventually be needed that it's a drop in the bucket relative to the required effort, even if it adds nothing.
Your analogy is actually quite apt here - the Wright brothers took inspiration from birds but clearly went with a different model of flight, just as the ANN field has. The fundamental concept of the neuron is the same, but that doesn't mean the complexity is similar.
Minimally, whatever the complexity inside a biological neuron may be, one fundamental property we need to obtain is the connection strengths for the entire connectome, which we don't have. Without that we don't actually know the full connectome of even the simplest organisms, and to my knowledge no one has therefore actually studied the kinds of algorithms that are running in these systems. I would love to be corrected here, of course.
Even with connection strengths I still don't think we would really have the full connectome. Such a model would completely miss many of the phenomena related to chemical synapses, which involve signal transduction pathways, which are _astoundingly_ complex. Those complexities are part of the algorithm being run though!
(Of course we might still learn useful things from such a model, I just want to be clear that it wouldn't in any sense be a complete one.)
This. I simply cannot even begin to go into the sheer magnitude of the number of ways the fundamental state of a neural simulator changes once you understand that nothing exists monotonically. It's all about the loops, and the interplay between them. So much of our conscious experience is shaped by the fact that at any one time billions upon billions of neural circuits are firing along shared pathways; each internal action fundamentally coloring each emergent perception through the timbre it contributes to the perceptual integration of external stimuli.
It isn't enough to flip switches on and off, and to recognize weights, or even to take a fully formed brain network and simulate it. You have to understand how it developed, what it reacts to, how body shapes mind shapes body, and so on and so forth.
What we're doing now with NNs is mistaking them for the key to making an artificial consciousness, when all we're really playing with is the ML version of one of those TI calculators with the paper roll that accountants and bookkeepers use. They are subunits that may compose together to represent crystallized functional units of expert-system logic, but they are no closer to a self-guided, self-aware entity than a toaster.
Agreed, though continuously monitoring the propagation of signals in vivo would allow us to at least start forming models of temporal or context-specific modulation of connection strengths (which, in the end, is what decides the algorithms of the nervous system, I presume).
I mean, I know that I'm conscious. Or at least, that's how it occurs for me.
But there's no way to experience another's consciousness. So behavior is all we have. And that's why we have the Turing test. For other people, though, it's mainly because they resemble us.
Can you give some examples? I'm guessing there's a difference in the definition of understanding here.
As I interpret GP, the claim is that if you can't describe something in sufficient detail to simulate it, then you don't actually understand it. You may have a higher-order model that generally holds, or holds under some constraints, but that's more of a "what" understanding than the higher bar of "why".
I don't think that's what they're saying. We could have the detail and understanding but lack compute.
It seems that they are saying that a simulation is required for proof. We write proofs for things all the time without exhaustively simulating the variants.
I explicitly called out the case where issues arise solely due to lack of compute in my original comment.
I never claimed that a simulation is required for proof, just that an unexpectedly broken (but correctly implemented) simulation demonstrates that the model is flawed.
No? It honestly seems like you're being intentionally obtuse. The simulation being correctly implemented is an underlying assumption; in the face of failure the implementer is stuck determining the most likely cause.
Take for example cryptographic primitives. We often rely on mathematical proofs of their various properties. Obviously there could be an error in those proofs in which case it is understood that the proof would no longer hold. But we double (and triple, and ...) check, and then we go ahead and use them on the assumption that they're correct.
> Can you give some examples? I'm guessing there is a different in definition of understanding here.
I'm not the previous poster, but how about the Halting Problem? The defining feature is that you can't just simulate it with a Turing machine. Yet the proof is certainly understandable.
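The proof idea behind the Halting Problem fits in a few lines. The sketch below is an illustration of the standard diagonal argument, with a stand-in `halts` decider supplied by the caller; it isn't a real decider (none can exist, which is the point).

```python
def paradox(halts):
    # Suppose halts(f, x) were a total, correct program deciding
    # whether f(x) halts. Build the classic self-referential g:
    def g():
        if halts(g, None):
            while True:   # told "you halt"? then loop forever,
                pass
        return None       # told "you loop"? then halt immediately.
    return g

# Asking halts(g, None) about g now yields a contradiction either way,
# so no total, correct halts can exist - understood without simulation.
```

With any stub decider (say, one that always answers "loops"), `g` simply halts, contradicting the stub; the contradiction is reached by reasoning, not by running anything to completion.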
If you think you understand something, write a simulation which you expect to work based on that understanding, and it doesn't work - did you really understand it?
Maybe, maybe your simulation is just buggy. I can write a simulator of how my wife would react to the news I'm cheating on her, and fail miserably, but I'm quite positive I understand how she would actually react.
Not necessarily. A working simulation (for some testable subset of states) doesn't carry any hard and fast logical implications about your understanding.
On the other hand, assuming no errors in implementation then a broken simulation which you had expected to work directly implies that your understanding is flawed.
And just looking at the way they dance around - they're in motion, they're changing their connections, they're changing shape - is so entirely unlike the software idea of a neural network that it makes me really doubt we're even remotely on the right track with AI research.
It really depends on what level of abstraction you care to simulate.
The article starts out "At the Allen Institute for Brain Science in Seattle, a large-scale effort is underway to understand how the 86 billion neurons in the human brain are connected. The aim is to produce a map of all the connections: the connectome. Scientists at the Institute are now reconstructing one cubic millimeter of a mouse brain, the most complex ever reconstructed."
So the article is about starting with the wiring diagram and working up. My point is that, even where we already have the wiring diagram for a biological neural system, simulating what it does is just barely starting to work.
I'm planning to work on a project like this at some point soon (worm biology is so cheap you can do it in your garage for the price of a Mercedes). The main roadblock I want to work on is getting more connection strength measurements - we already have the full wiring diagram for the worm connectome, but the connections are definitely not all of equal strength. Many labs are trying to image the neurons firing in real time at the whole-organism level, but they're stymied by the head, which packs a hundred or so neurons into very close proximity (and they typically fire faster than 3D imaging modalities can keep up with).
I'm definitely excited to start working on this in a couple of years! My hopeful guess is that observing the full nervous system while it's firing full throttle is the only way to start understanding the algorithms that are running there. And from there hopefully we can start finding patterns!
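The point that a wiring diagram without weights underdetermines behavior can be shown with a toy firing-rate model: the same connectivity graph produces very different dynamics depending on connection strengths. The 3-neuron ring and all weight values below are made up purely for illustration.

```python
import math

def rate_step(rates, weights, dt=0.1, tau=1.0):
    # Simple firing-rate dynamics: tau * dr/dt = -r + sigmoid(W r).
    new = []
    for i in range(len(rates)):
        drive = sum(weights[i][j] * rates[j] for j in range(len(rates)))
        target = 1.0 / (1.0 + math.exp(-drive))
        new.append(rates[i] + dt * (target - rates[i]) / tau)
    return new

def settle(weights, steps=500):
    # Iterate until the rates reach a (near) fixed point.
    r = [0.1, 0.1, 0.1]
    for _ in range(steps):
        r = rate_step(r, weights)
    return r

# Identical wiring diagram (a 3-neuron ring); only the strengths differ.
same_wiring_weak   = [[0, 0.5, 0], [0, 0, 0.5], [0.5, 0, 0]]
same_wiring_strong = [[0, 5.0, 0], [0, 0, 5.0], [5.0, 0, 0]]
```

The weak ring settles near a moderate firing rate while the strong ring saturates, so "who connects to whom" alone can't tell you what the circuit computes.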
Needless to say, I agree with you. The people who say they have a wiring diagram of the mouse brain need to rein in their enthusiasm. We are nowhere close to starting to understand even a fly or zebrafish brain, let alone a mouse or human one. Sydney Brenner himself agreed (though he's arguably biased toward nematodes).
The worms are very special. Their nervous systems are weird because they have been heavily optimized to be small.

A huge push to get the full connectome of Drosophila is coming to an end right now; we're not fully there yet, but close. This research has already elucidated many things about how their brains work, and people are now discussing how to do the mouse. In conjunction with functional experiments, it has already explained many pathways, such as olfactory associative learning, mating behavior, and others. None of these understandings came from simulations, but from multiple experiments and the study of selected subcircuits. There have also been successes in simulating the visual pathway of Drosophila based on the connectomic data; for example, one study [1] was able to reproduce the direction sensitivity of a specific set of neurons.

It isn't necessarily true that we need to fully understand the worms before we can and should move on to more complex nervous systems and successfully understand them. It might just be that neuron abstractions don't work well in worms, because of the aforementioned evolutionary pressure to optimize for size.
A computer analogy: it's usually far more straightforward to reverse engineer a regular C program than one of those tiny hand-optimized assembler demos.
For a concrete example, consider the "modbyte tuning"[1] in the 32-byte game of life.
Life is different. There is no logic to the way it solves problems.
A programmer writing a game of life in C will probably do it from scratch, in a straightforward manner. Read the corresponding machine code and you are going to see familiar structures: loops, coordinates, etc...
Now here is how life does it: you have a program that outputs prime numbers, you then have to change it into a sorting algorithm, then into a fizzbuzz, and then into a game of life. You are not allowed to refactor or do anything from scratch. If the program becomes too big, you are allowed to remove random instructions.
The numbers seem to suggest that it's a simple mechanism - hey, just 302 neurons - but a single neuron is an immensely complex cell, containing millions of molecules and trillions of atoms that interact with each other in unknown, unpredictable, or even unobservable ways. Even if we had all that data in a model, the biggest problem is that our computations, unlike nature's, are done serially: we work on one piece at a time, whereas nature computes all of the interactions in parallel. If you've ever written a physics collision system you'll know how performance degrades rapidly with even a modest number of elements (it's an O(n^2) problem) and you need workarounds. So we'd need different hardware to start with, like analog computers, to ever have a chance of simulating a single living cell (for starters), then moving on to larger organisms.
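The O(n^2) blowup mentioned above is easy to make concrete: a naive all-pairs interaction loop performs n*(n-1)/2 checks per update, before any per-element detail is modeled.

```python
def pair_checks(n):
    # Count the interaction tests a naive all-pairs loop performs,
    # checking each unordered pair (i, j) exactly once.
    count = 0
    for i in range(n):
        for j in range(i + 1, n):
            count += 1
    return count  # equals n * (n - 1) // 2
```

For the worm's 302 neurons that's already 45,451 potential pairwise interactions per time step, and the count grows quadratically from there.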
Exactly, it's a kind of hubris that people claim knowledge about the brain just by observing vague electrical signals from afar. The delusion comes from the fact that we seem to know a lot about the heart, the kidneys, and the bones, so why not the brain? Well, the brain's cells communicate in a network with trillions of connections - that's one major difference that tends to be ignored - and the other is that each cell contains a nucleic acid code beyond our scale of understanding, which runs continuously while those electrical signals pass through, deciding the cell's next messages and actions. If you ignore these two major bottlenecks, you can be tricked into claiming knowledge, but it will be limited and, most of the time, wrong.
[1] http://openworm.org/