> We are not giving proper credit to how complex it is, and the multi-billion year developmental process that it took.
Or we are simply not ready to accept that it's simply a big book of heuristics fine-tuned over biological eons.
It's just big. We have too many interwoven, interdependent, synergistic faculties. Input, output, and a lot of mental stuff for making the right connections between the ins and the outs. Theory of mind, basic reasoning, the whole limbic system (emotions, basic behavior, dopaminergic motivation), the executive functions in the prefrontal cortex: all are very specialized things, and we have a laundry list of those, all fine-tuned for each other.
And there's no big magic. Nothing to "understand", no closed formula for consciousness. It's simply a faculty that makes the "all's good, you're conscious" light go green, and that's easy to do once all the other stuff that does the heavy lifting to make sense of reality is working well.
I'd call this pulling a Dennett: trivializing complexity into something that cannot, or just doesn't have to, be explained. Being unable to conceive of consciousness at this moment doesn't mean there's nothing to conceive of: even if we never get to the final satisfactory answer, there is undoubtedly much more room left for useful concepts we don't have yet, around or inside this idea.
That's a lot of nots. So you are saying that Dennett says that the brain might be reducible[1]?
I don't think that's a strong claim or that it even qualifies as a claim at all. Lots of things might decompose into simple components if subjected to the right analysis, very few things definitely won't - for example many clever people have spent a great deal of time attempting to reduce quantum and cosmic scale physics to simple intuitively founded laws... If Dennett's claim is that the human brain is the same order of object as the universe I can accept it only if we agree that all objects share the same order. Where does that get us?
[1] Apologies, I don't know what reductible means, but guessed typo - I'm open to education though and unworried by typos!
Your last paragraph seems to contain the kind of overconfidence that I'm talking about. I don't understand how you can say "consciousness is simply X" or "it's easy to do that [if you handwave away the hard parts]." Clearly it's not that simple or easy, or we would have done it.
We can't even create life from non-life. How can we begin to understand all the stuff you're talking about that's been layered on top? We don't understand this stuff well enough to just handwave it away as unimportant or trivial.
I'm assuming a simpler model, no need for magic, because so far I don't see what behavior/data this simple model cannot explain.
> Clearly it's not that simple or easy, or we would have done it.
We don't have the computational power yet. Not to mention the vast amount of development required. Think of the climate models, which are huge (millions of lines of code) but still nowhere near complete enough, and they only have to model sunlight (Earth's rotation, orbital position, albedo), clouds, flows (winds and currents), some topography (big mountains, big flats), ice (melting, freezing), and some chemistry (CO2, salts). And they only have to match a simple graph, not the behavior of a human mind (e.g. the Turing test).
So, it's not easy, even if simple.
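To make that "simple but not easy" point concrete, here is a minimal sketch, not a real model: the crudest possible climate model, a zero-dimensional energy balance with rough textbook-ish numbers (the solar constant, albedo, and greenhouse fudge factor below are illustrative assumptions), already gives a plausible global temperature, yet every step toward realism (clouds, currents, ice edges) multiplies the parameters you have to measure and tune.

```python
# Toy zero-dimensional energy-balance "climate model" (illustrative only).
# Rough textbook-ish values; a real model has millions of lines, not four numbers.
SOLAR_CONSTANT = 1361.0   # W/m^2, incoming solar flux at the top of the atmosphere
ALBEDO = 0.3              # fraction of sunlight reflected back to space
SIGMA = 5.670e-8          # Stefan-Boltzmann constant, W/m^2/K^4
GREENHOUSE_FACTOR = 0.61  # crude effective emissivity standing in for all atmospheric physics

def equilibrium_temperature_k() -> float:
    """Surface temperature where absorbed sunlight balances emitted infrared."""
    absorbed = SOLAR_CONSTANT * (1 - ALBEDO) / 4          # averaged over the sphere
    return (absorbed / (GREENHOUSE_FACTOR * SIGMA)) ** 0.25

print(f"{equilibrium_temperature_k() - 273.15:.1f} °C")   # ~15 °C with these fudged numbers
```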
> We can't even create life from non-life.
We understand life. Cells, RNA, DNA, proteins, mitochondria, actins, etc. It's big, it's a lot of moving parts, and we understand it, but we can't just pop a big chunk of matter into an atomic assembler and make a cell.
And I think intelligence/sentience is similar. It's big, not magic.
Certainly you do realise that this has been a moving goalpost for half a century? It seems that lately people have started to avoid giving a concrete estimate of the power required, though. It was so easy in the '90s! "The human visual/verbal system processes gigabytes per second / has n flops to the xth" or the like. Well, now we have that and more; how come a deeper modeling, a finer processing, a more complicated network has come to be needed?
And your examples are incorrect; for example, the Navier-Stokes equations plus some general physics knowledge have always allowed us to estimate how much data we need for a certain fidelity of a finite-term weather forecast. Certainly we need more for a complete climate model, but we know what we need. No such thing for the brain.
It's an easy way to score some rationality points by voicing rejection of "magic", but it's a strawman. Nobody will bother arguing for a mythical homunculus in the seat of the soul, nor even for a concise formula summing up the workings of the mind. Pick harder targets. "It's just big" or "it's just a bunch of heuristics cobbled together" is a non-explanation. The brain is not a Rube Goldberg machine that manages to produce any sort of work simply due to its excessive complexity – it is energetically economical, taking into account that neurons are living cells that need to sustain their metabolism and not merely "compute" when provided with energy. Its discrete elements aren't really small by today's standards, nor are they fast. The number of synapses is ridiculous, but since they aren't independent, at a glance it doesn't add that much complexity either (unless we abandon reason and emulate everything close to the physical level).
Yet we have failed to realistically emulate a worm. By all accounts we have enough power for 302 neurons already. There's no workload to give that would overwhelm available supercomputers. It's knowledge and understanding that we lack, and it's high time to give up on the delusion that more power, naturally coming in the future, will somehow enable the creation of a predictive brain model, for this would truly be magic.
I know that people have constantly underestimated the required computing power, as ever finer details of the brain and cognition come to light. That doesn't make my argument invalid. I don't think we need to do a full brain emulation. That's the worst case scenario.
We're getting pretty good at computer vision; what's lacking is the backend for reasoning, for generating the distributions for object segmentation and scene interpretation. Basically the supervisor. (Unsupervised learning of course just means that the supervision and goal/utility functions are external/exogenous to the ML system, such as natural selection in the case of evolution.)
My example illustrates that yes, we can give an upper bound on molecule-by-molecule climate modeling, but that's just a large exponential number, not interesting; what we're interested in is useful approximations, which are polynomial, but being models, they need a lot of special treatment for the edge cases. (Literally the edges of homogeneous structures, like ice-water-air, water-air, water-land, air-land [mountains, big flats, etc.] interfaces. And the second-order induced effects, like currents, and so on.) That means precise measurements of these effects, and modelling them. (Which would be needed anyway, even if we were to do a back-to-basics N-S hydrodynamics model, as there are a lot of parameters to fine-tune.)
For the brain we know the number of neurons, the firing activity, the bandwidth of signals, etc. We can estimate the upper limit in information terms, no biggie, but that doesn't get us [much] closer to the requirements of a realistic implementation.
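Here is a hedged back-of-envelope version of that upper bound; the neuron, synapse, and firing-rate figures below are just the usual order-of-magnitude estimates, nothing more:

```python
# Crude upper bound on the brain's "signal traffic" (order-of-magnitude guesses only).
NEURONS = 8.6e10           # ~86 billion neurons, common textbook estimate
SYNAPSES_PER_NEURON = 1e4  # ~10^4 synapses per neuron, very rough
MAX_FIRING_HZ = 100.0      # generous average firing rate
BITS_PER_SPIKE = 1.0       # treat each spike event as one bit (an enormous simplification)

synapses = NEURONS * SYNAPSES_PER_NEURON
bits_per_second = synapses * MAX_FIRING_HZ * BITS_PER_SPIKE
print(f"~{synapses:.0e} synapses, ~{bits_per_second:.0e} bit/s upper bound")
# ~9e14 synapses, ~9e16 bit/s -- a number, but it says nothing about how to
# actually implement the functions those signals carry.
```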
> Yet we have failed to realistically emulate a worm.
> it's high time to give up on the delusion that more power, naturally coming in the future, will somehow enable a creation of predictive brain model, for this would truly be magic.
a) people have been saying exactly this for years, that we have enough data already, we need better theories/models
b) they fail to accept that more computing power and data is the way to test and generate theories.
> The brain is not a Rube Goldberg machine that manages to produce any sort of work simply due to its excessive complexity
A Rube Goldberg machine is simple, just has a lot of simple failure modes. (A trigger fails to trigger the next part, either because the part itself fails, or the interface between parts failed.)
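A quick hedged illustration of why that kind of simplicity is still fragile (the stage count and failure rate below are made-up numbers):

```python
# A Rube Goldberg machine as a chain of independent triggers (toy numbers).
def chain_success_probability(stages: int, per_stage_failure: float) -> float:
    """Probability that every trigger fires, assuming independent failures."""
    return (1 - per_stage_failure) ** stages

print(chain_success_probability(stages=30, per_stage_failure=0.05))  # ~0.21
# 30 simple stages at 95% reliability each -> the whole contraption works
# only about a fifth of the time. Simple parts, lots of simple ways to fail.
```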
> Its discrete elements aren't really small by today's standards,
If you mean cells, or cortices, agreed.
If you mean functional cognitive constituents, I also agree, but partly disagree too, as they are small parts of a big mind, all interwoven, influencing, inhibiting, motivating, restricting, reinforcing, calibrating, guiding, enhancing each other to certain degrees.
So in that sense consciousness is a big matrix which gives the coefficients for the coupling "constants" between parts. A magical formula if you will. But no more magical than the Standard Model of physics.
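A hedged toy of what I mean by "a big matrix of coupling coefficients"; the faculty names and every number here are invented purely for illustration:

```python
import numpy as np

# Toy sketch: faculties as a state vector, and a coupling matrix that says how
# strongly each faculty excites or inhibits the others. All values are made up.
faculties = ["perception", "memory", "emotion", "executive"]
coupling = np.array([
    [0.0,  0.4,  0.2,  0.3],   # how much the others drive perception...
    [0.5,  0.0,  0.1,  0.2],
    [0.3,  0.2,  0.0, -0.4],   # ...note executive control inhibiting emotion
    [0.2,  0.3, -0.2,  0.0],
])

state = np.array([1.0, 0.0, 0.0, 0.0])    # start with a raw percept
for _ in range(10):
    state = np.tanh(coupling @ state)      # each faculty settles under the others' influence

print(dict(zip(faculties, state.round(2))))
```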
Yes, indeed, but that's philosophy at its best. Thinking about nothing or everything. Zero and/or infinite complexity.
No predictive power whatsoever.
Also, I'm amazed by consciousness, by reasoning, by our cognition, by intelligence, how we apply it, day-to-day, from pure math to messy but useful engineering, through the ugliness of realpolitik, and the beautiful and dreadful human tangle that our civilization is. The contrasts, the whys. (Consider the stark difference between the US and Mexico, especially the border towns, which is of course deceiving, as the problems don't stop at the border; the cities, states, nations are connected. The warring cartels, the corruption, the hopeless have-nots, the dealers, the addicts, the war on drugs/terror/smuggling/slavery/yaddayadda, the DEA, ATF, their foreign counterparts, the policy going against the market, the hard-on-crime ideology, the big data vs gerrymandering case just on the Supreme Court's plate, the pure math and reasoning behind all that again, are all connected, just harder to frame in a "deep" picture.)
But so far, none involves any actual irreducible complexity. No magical formula, just layers upon layers of complexity and fine-tuning.
In part I agree with you. The big difference is that we don't understand the fundamental difference between what is alive and what isn't. We have many different ideas about the quality that is called "life" or living. We have no clue about what it is.
We have little or no understanding of the complex protocols that occur within a cell. If we did, our standard manufacturing techniques would be vastly different.
We can modify DNA and RNA in interesting ways, but they are not living. It is not until we put them into an already existing living cell that we can reprogram some characteristics of that cell.
It's a spectrum. A rock is non-alive, and a human talking to another human is rather alive. A virus is closer to a rock than a cockroach is to a human newborn in terms of life, but a brain dead patient is probably closer to a tree than to a butterfly, and so on.
We have a pretty fine understanding of cells, but our materials science and manufacturing technology is not "vastly parallel incremental molecular" like cellular manufacturing; it's "big precise drastic pure chunk" based. Not to mention protein folding and self-assembling biomachines and so on. We're getting there.
But this is not necessarily in your favour. I think it's more of an indication of how the world doesn't fit into our... anthropomorphic way of thinking. That is, everything follows the laws of physics, no magic involved. We aren't special.
We certainly can define life, it's just that people don't generally agree on a definition. Some people get offended if you don't include their favorite things in your definition.
Even so, we understand it just the same no matter how you define it, because what we understand is not a function of word choice or definition. It's a function of capability.
Have you not been following the work of Craig Venter? Depending on your point of view, he's already done it. Even if you don't agree, you have to admit that he's probably one of the few closest to actually doing it.
Craig Venter has not created life from non-life. He has synthesised code that can reproduce and grow into a synthetic life form once implanted into an already living cell. So no, not life from non-life.
There were experiments that were not explained by the liquid theory.
Now we have data, and for some reason people want to claim that a theory with magical, super-complex, not-even-yet-describable, very-very-irreducible element(s) is a better fit than a good old box full of tiny yet specialized parts fine-tuned to work together over millions of years.
You mean, is this a falsifiable/testable theory? Yes, it's testable, we see very specific neuropathologies, almost like on-off switches affecting very specific functions/faculties of the mind, and they usually correspond well to brain damage locations.
So in that sense the experiment is to enumerate the basic (built-in) functional components of the mind and the corresponding implementation-level machinery, and of course the reverse (try to enumerate the implementation components and match them with functions) can generate important data (is there a function that has no implementation?).
That said, the claim is that there's no magical component in the mind, which is kind of hard to prove but easy to falsify: just find a/the magical component.
The problem is the same as with the soul, and the self, and so on.
There is plenty of magic going on. Today we cannot replicate or understand how emergent properties are born of biological structures. Not even in "simple" systems such as the metabolic pathways.
We mapped the whole genome and connectome of C. elegans, no? And most of that is understood. For example, it seems to be a good model for substance addiction (especially for nicotine). That seems a pretty complex emergent behavior to me.
If you mean bigger biology, yes, sure, we don't have a full map of functional genomics for humans, but we're getting there.
Or maybe not, maybe it's so exponentially more complex, that it'd take as much time to understand it as it took for evolution to work it out. (Especially considering that evolution played with every individual, whereas we like to constrain our data gathering to non-aggressive methods.)
We have the connectome of C. elegans, yes, but we are still pretty far from understanding how it 'works'. The functional connections are still an active area of research, and it is greatly complicated by connections not explicit in the connectome (neuromodulator effects), as well as the internal dynamics of neurons and non-linear network dynamics.
The connectome is necessary, but far from sufficient, to 'understand' a brain, even one made from only 302 neurons as in C. elegans.
The rules that govern a system can create patterns, which themselves behave according to rules, but with a set of rules that was "hard to predict" from the underlying system.
That is exactly my point. We can use fluid dynamics and PDEs for waves. We understand some properties and processes. We are nowhere near as close in biological systems.
I gave the example of the metabolic pathways because the last time I checked (~2015) the most advanced things in the field were extremely simple and without any predictive power. Things like calculating the kernel of a stoichiometric matrix or the centrality of a node in the interactomic graph.
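For readers who haven't met that kind of analysis, it really is as plain as it sounds. A hedged toy, with a made-up two-reaction network:

```python
import numpy as np
from scipy.linalg import null_space

# Toy stoichiometric matrix S (rows = metabolites A, B; columns = reactions).
# Made-up network:  R1: A -> B,   R2: B -> A  (a futile cycle).
S = np.array([
    [-1.0,  1.0],   # A: consumed by R1, produced by R2
    [ 1.0, -1.0],   # B: produced by R1, consumed by R2
])

# Steady-state flux vectors v satisfy S @ v = 0; the kernel spans all of them.
K = null_space(S)
print(K)   # one basis vector: both reactions carrying equal flux
# That's the whole trick: linear algebra on the wiring diagram, with no kinetics
# and no regulation, and consequently very little predictive power.
```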
But you've now shifted the "how" question. (Or I misunderstood the original.)
It is not a question of how there is emergence, of why there is magic. The answer to that is that systematic interactions at a low level can create higher-level playing fields.
So the "how" is now a technical question, what is this system, how complex is it, and at which levels can we understand it. And since this system has been learning how to avoid erasure by entropy or by competition for 3.5 billion years, it has searched quite a possibility space, namely 2^1277500000000, if we assume making a copy every day.
There's probably no need to go that low-level for modeling a mind, but of course the aggregate effects of biochemistry have to be taken into account (and it's full of non-linearities).
None of that means we don't understand the principles. I'd say it's pretty much like fusion. Yes, we know how the Sun works, but putting it into a bottle is a bit of a pickle, similarly with brains. (Except brains have a lot more complexity.)
Yes, of course, the problem of bootstrapping consciousness from blueprints of a human mind is that we depend on our parents' whole epigenetic and other extra informational makeup, plus their support for years while our mind finishes setting up.