The classic paper "More is Different"[1] is another good essay that touches on this subject, and opens with some really interesting examples showing why it can be hard to apply low-level laws to anticipate the behaviour of complex systems, even when you can prove that the complex systems must obey those laws.
Sometimes I think there is other information that gives the physics its meaning.
For example, if you took a Shingled Magnetic Recording hard drive filled with VP9-encoded movie files on an NTFS partition, even if you perfectly understood the physics and figured out all the individual magnetic fields, you would still have a rough time making sense of what was on it without this higher-level information.
Sure, but part of the issue is that we can make accurate predictions of pretty much everything we can readily observe today. If there is higher structure, we can't even see the noise yet.
There's much work to be done, but unless something really amazing happens it'll be a long time before we get anywhere near new fundamental physics (there's still loads of work to be done elsewhere).
> Sure, but part of the issue is that we can make accurate predictions of pretty much everything we can readily observe today.
Actually there are a lot of predictions we can't make today. Very important stuff:
1) Why do most young people do fine with COVID-19 but some apparently healthy people do not?
2) How exactly do proteins fold?
3) Will it rain tomorrow at location x at time t?
4) When will my pet tiger decide to maul me?
5) Will a drug with this molecular structure actually treat this disease?
We know a load about the basic building blocks. About quarks and protons and electrons. But once we get to more complicated systems composed of these building blocks, we struggle to make predictions.
See for example the scrubbed SpaceX launch due to weather.
I am sure NASA and SpaceX have scientists who are experts in physics, but they could not predict in advance that there would be bad weather on that exact day.
What we do now is make calculations on “spherical cows” and call everything else “chaotic” and “too many variables” and pat ourselves on the back for reducing everything to its basic blocks.
I think the problems you described fall into two different groups though. The first group is where we understand everything well, but we don't have the ability to track every variable accurately and record the amount of data required in order to make good predictions. Weather falls into this category.
Then there are those like protein folding and animal behaviour, where we understand the building blocks, but even if we could track every single building block there would be something missing for us to make predictions.
That's what we do because it's all we can do. The things you listed are likely as hard to predict as next week's lotto numbers: even if we knew the exact position of every ball in the lottery machine at the atomic level, we would still need to predict the thoughts of the person who will spin it. For that level of prediction, nothing short of a full simulation of reality (a la the Matrix) would suffice. And even if we already had a machine powerful enough to do that, inputting the state of every atom into it would still be beyond our capabilities, and I'm afraid it always will be.
Fluid dynamics is one of those problems I was referring to at the end.
I'm not a CFD guy but it's partly an issue of not having enough computing power - fluid dynamics doesn't scale the same way as regular mechanics so you have to compute the whole picture.
None of the things you listed depend on explanations outside of the realm of physics that we already understand. Theoretically with a simulator powerful enough and using the principles we already know, all of the items you listed could be solved.
I think there's a difference between understanding a system well enough to make predictions, and recreating a system (a simulation) and then seeing what it does. Many times in history before the physics was properly understood, the best scientists and engineers could do was create some kind of "dry run" and simulate/recreate whatever process they were testing, and use that to predict outcomes of the real one. But actually being able to look at the state of a live system and predict what it will do next accurately is quite different.
It seems to me to be more of a spectrum, where a 'well-understood' system is one we can simulate accurately with extremely pared-down models (i.e., much more cheaply than running the whole atomic simulation) because we found symmetries/simplifications/shortcuts. Approximating ballistic arcs with parabolae, for example, is sort of a compression algorithm on the full simulation.
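As a toy illustration (a minimal sketch in Python; the drag model, launch parameters, and time step are all made-up assumptions, not anything from the thread): the closed-form parabola is a drastically cheaper "compressed" model of the full step-by-step simulation, and it lands close to, but not exactly on, the simulated answer.

```python
import math

# "Full" simulation: numerically integrate projectile motion with quadratic air drag.
def simulated_range(v0, angle_deg, drag=0.02, dt=1e-3, g=9.81):
    angle = math.radians(angle_deg)
    x, y = 0.0, 0.0
    vx, vy = v0 * math.cos(angle), v0 * math.sin(angle)
    while y >= 0.0:
        speed = math.hypot(vx, vy)
        vx -= drag * speed * vx * dt        # drag opposes velocity
        vy -= (g + drag * speed * vy) * dt  # gravity plus drag
        x += vx * dt
        y += vy * dt
    return x

# Pared-down model: the textbook parabola, which ignores drag entirely.
def parabola_range(v0, angle_deg, g=9.81):
    angle = math.radians(angle_deg)
    return v0 ** 2 * math.sin(2 * angle) / g

print(simulated_range(30, 45))  # the expensive step-by-step answer
print(parabola_range(30, 45))   # the cheap compressed model: close, not exact
```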
Some systems are simply random. No amount of simulation will tell you exactly when an unstable atom will decay.
We've spent the last century arguing about what this means for physics, and we still have no idea - even though quantum events can have macroscopic consequences. (To give a contrived example - a decaying atom may damage some human DNA and cause cancer in a historically important individual who dies decades earlier than they would have otherwise, changing the course of history.)
It's more accurate to say that we can model a small selection of mathematically 'pure' system types that are neither random nor emergent, on timescales that are comprehensible to us.
That's a good one. I also wonder how many other maps there are, and what _they're_ good at. Consider that physics is really deep, full of leverage, and currently really helpful to humanity... but it's still only one general system of map design. The nice thing is that lots of alternatives are considered within physics, but we also like our "best" concept; take Wolfram finding the "best" universal rule system, for example. This immediately begins discarding alternatives. And maybe they're just different, not worse.
Even the map metaphor starts to break at this point because it's hard to think of an alternative map that's not a literal map, and maps all seem the same (i.e. resolving down to the geometry-based, literal "map system" we already know) if you squint your eyes enough. The map is the territory, and it's not, and we can say the same of the pencil, and your shoes, and the concept of intuition. Just as a way of putting "known physics is the map" into context.
There are certainly levels that we can't comprehend because we are physically incapable of it, just as an ant is incapable of understanding a book. Perhaps with neural augmentation we will discover new vistas previously unavailable to us.
That's because the specific information on the disk was initially generated by, and is the consequence of, a great many previous events. Even if we assume the disk holds only the complete works of Shakespeare, we would need to understand vastly more than just the drive's physics, in order to understand it completely. That does not imply that the explanation for the drive, as we find it with its specific contents, is not reducible to physics, but performing such a reduction is not the path to understanding it.
When I was a kid, I was fascinated by the word "emergence". In pop science, it was spoken in reverent tones. Never mind that they never gave a definition of it that didn't sound trivial -- the word was dripping with meaning and the promise of hidden wisdom.
I went to college and studied physics. I learned how the behavior of many atoms leads to thermodynamics and the properties of materials. I learned how elementary quantum mechanics leads to atomic physics and chemistry. I learned how the strong force gives rise to nuclei. I learned how the Standard Model produces everything on this list. I learned how, at each layer, the correspondence was fiendishly subtle, but the work of thousands of physicists over decades had built the bridges.
At the end of it, I was still dissatisfied, because I hadn't learned what emergence was. I'd see crackpots presenting their personal theories of everything, declaring that they knew what no physicist ever could, because their model had emergence. They knew about it, and we didn't. But I still didn't know what it was.
Then I dug deeper and it became obvious. Emergence is a beautiful idea that has always been present, in some form or another, in every field of science. It is so well-developed in some fields, such as particle physics, that it has become a suite of quantitative methods with incredible accuracy. I had been learning it the entire time, and now use it every day without a second thought. Meanwhile, "emergence" is a brand name. It's a specific word passed down over generations of pop science used to convey mystique. It's a tool some charlatans use to pretend they know more than they do.
No offense meant to the article, but I often hear people say they're most excited about some substanceless theory because it "has emergence", and this is part of why.
I prefer reading these on LessWrong, because people often point out when Eliezer's said something unjustified. (Often he provides the justification in the comments, but on a couple of occasions there hasn't been a justification – he's just been flat-out wrong.)
Turns out that https://www.readthesequences.com/ contains the edits for the book version, so if you're reading them all through it'll probably be more coherent on that website. Dilemma!
I agree it's wooey in many cases, and is used as a near synonym of miracle or magic.
But I think a good summary is that emergence happens when relations between things on a finer level of analysis become the things themselves on a coarser, higher level AND that these new things also have rich and complex interactions among them, making the levels roughly equally interesting, complicated and useful. Overall the bottom level is "enough" in theory but rising up the levels allows us to gain orders of magnitude more "practicality".
Sometimes by common sense we stumble upon a mid-level concept and then when we learn about the lower level explanation, we proclaim that the original phenomenon was "emergent".
Now it's a great (but I think not very fruitful) philosophical debate to decide which level has how much "realness". Is the higher level just a convenient fiction, an illusion, a practical description, while there is a lowest, rock-bottom level which the universe/God really "cares about"? Or are all levels just as real, and the whole hierarchy/ladder just our conception? On that view, not only can rising up the abstraction be an artificial act, but descending lower can also be illusory/non-natural, a result of our search for practicality; the rock bottom isn't really that real, and the universe doesn't know to care about it.
Just as in math: the naive explanation is that we have fundamental axioms and derive the higher-level theorems from them. But in reality what happens is we know what theorems we want (to make things interesting or empirically useful) and then look for ways to structure the axioms to give rise to the "real" theorems.
Or, which view of a signal is more real and fundamental: the time domain or the Fourier domain? Both can be equally real or equally fundamental, and can give rise to equally interesting analyses. Or perhaps not, and the time domain is what the universe cares about and has in its "source code" (which is probably a misplaced metaphor though), and Fourier is just for convenience...
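A minimal sketch of that equivalence (using NumPy with an arbitrary made-up signal): the two domains carry exactly the same information, since each view is recoverable from the other.

```python
import numpy as np

# An arbitrary example signal, viewed in the time domain.
t = np.linspace(0, 1, 256, endpoint=False)
signal = np.sin(2 * np.pi * 5 * t) + 0.5 * np.cos(2 * np.pi * 12 * t)

# The very same signal, viewed in the Fourier domain.
spectrum = np.fft.fft(signal)

# The inverse transform recovers the time-domain view exactly
# (up to floating-point error), so neither view is "more complete".
recovered = np.fft.ifft(spectrum).real
print(np.allclose(signal, recovered))  # True
```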
Other times, when making the relations into things (reifying them) and vice versa, we don't really move higher or lower; we stay roughly on the same level. An example is duality in graph theory (faces to nodes, nodes to faces) or in optimization (exchanging the role of variables and equations). And at the risk of sounding wooey, this is similar to the duality/non-duality ideas in Eastern philosophy: looking at the same stuff but interpreting "the gaps" as "the things" and vice versa. And then saying neither is more real, both are real and neither are, together they are real, separately they aren't, etc. Maybe the levels are not a strict hierarchy, maybe they loop back, maybe consciousness is indeed such a feedback loop across different levels (a la Hofstadter); then you get to resonances in feedback loops, and if you push it far it gets quite wooey. But it's still very differently wooey than some quantum healer fixing the resonances in your liver through the TV.
Isn't this what abstraction is? A computer looks magical if you don't understand binary logic gates and transistors etc. Even though I understand it all, the things my computer can do seem magical. I think this is what people mean when they say "emergent", but for some reason they don't use more widely understood terms like "high levels of abstraction".
Perhaps this hints at us not even understanding the lower levels of many areas well enough yet.
I expanded my comment because an important thing to discuss is whether the different levels are actually a property of the universe or just a property of our description. Is math discovered or invented? What is part of the map and what of the territory?
I think people use emergence more when it's viewed and suggested as independent from our description, it really emerges on its own and we just discover and describe this. And "abstraction" is used when we view our role greater, we invented an abstraction to easier manage the complexity. So emergence is more incidental, accidental, serendipitous and unexpected, while abstraction is intentional and purposeful and designed.
Now which is which and whether the distinction itself touches on something real or is just a different view of the same thing, is another great topic for discussion.
> But I think a good summary is that emergence happens when relations between things on a finer level of analysis become the things themselves on a coarser, higher level AND that these new things also have rich and complex interactions among them, making the levels roughly equally interesting, complicated and useful. Overall the bottom level is "enough" in theory but rising up the levels allows us to gain orders of magnitude more "practicality".
Hofstadter discusses this in his book I Am a Strange Loop: there's a line that's something like "sometimes the bottom level - despite being entirely responsible for the effect in question - is nonetheless totally irrelevant to it"
This reminded me of the hard science fiction novel Dragon's Egg by Robert L. Forward.[1] Its core idea is that complex, intelligent life could be possible on the surface of a neutron star.
He went to a lot of effort to try to come up with a plausible description of what "life" would be like in such an environment. That fired my imagination too, and I wondered how we could do such a thing in a scientific manner. Given the rules of the Standard Model, in principle almost all aspects of the surface conditions could be elucidated, but I feel we would be missing almost all of the essence of it. Just like a deep understanding of Quantum Electrodynamics won't help you determine from first principles that Earth has pretty sunsets or that your hair stands up when you take a wool jumper off.
For example, the surface of a neutron star would be very bright in gamma rays, but those are subject to pair-production in the strong magnetic field of the star, making light effectively a short-range sensory mechanism. Meanwhile, the enormous density of the crust means that it transmits sound incredibly well and at enormous speeds[2], making acoustics the equivalent of our photon-based vision sense! But it would be hemispherical at short range and 2D in a complicated way at long range. Spacetime itself would be strongly polarized, atoms would be distorted into spindles and have strongly anisotropic behaviour, general relativistic effects could be felt in everyday scenarios, and there are likely dozens of other effects that we can't even fathom by merely staring at a handful of field equations that we developed in our comparatively cold, flat spacetime.
That's all quite interesting. Do you have some resource I could use to understand it more deeply, given that I have, more or less, a high-school-level understanding of physics?
I'm particularly intrigued by why the range would affect the sound-based sensing. Is it because of spacetime curvature or something?
No one knows! That's kind of the point. We know some of the fundamental limits and constraints of what goes on around and inside a neutron star, but the essence of it is still a giant mystery. We'd have to go and look, and that's not likely to happen any time in the next ten thousand years or so.
Oh ok, but what in our current understanding could let us suppose this difference in sensing? And by hemispherical do you mean some kind of fisheye projection? I would really like some beginner resource on this subject.
A dolphin is surrounded by water, and its sonar sense is truly 3D, in that it can go in any direction. Up, down, left, right, ahead, or behind.
A creature living on a neutron star might use sonar like we use light, because the speed of sound in the star's crust is more than half the speed of light. In many ways it would be comparable. But the crust is only below the creature, not above. So its "sound sense" would only work for one hemisphere surrounding the creature; the other side would be silence, its equivalent of total darkness.
Neutron stars have thin crusts, possibly mere meters thick, and almost certainly layered in some way. Again, sound would probably travel at different speeds in the various layers, just as sonar does in oceans, where salinity affects propagation. This would produce complex distance-dependent effects. Locally, the sound could travel outwards in a hemisphere, but at long range it would likely be more 2D.
This is all speculation, but it's based on real science. My point is that we cannot really know, and no amount of staring at equations will help paint a realistic picture.
When I learned about emergence during my computing and philosophy class, it was like finding the missing puzzle pieces: everything (I mean that) made sense.
I love how all things in existence emerge simply via self-organisation. All that is needed is communication. This can be gravity for stardust to form planetary systems; using one's eyes to form bird flocks; using chemicals for communication to form ant colonies; using human communication to form societies.
Once during a diving lecture, the lecturer pointed out that we can, in many cases, simply call self-organisation an ecosystem. Which makes it much easier to explain this topic to people.
He said a lake will self-organise when you rob it of an important fish, and so will our body when we start doing sport (grow muscles). Every system will react to change and will try to re-organise itself.
When applied to behavior I would say that all that is needed is a shared goal, a shared understanding. The individual units don't necessarily have to communicate. They only need to share some common direction or goal. For example, the infamous starlings don't formally conspire. Yet the result functions as a conspiracy, so to speak.
I must say that reductionism is my default position. And that this is at base a value I have chosen. I did not consider the usefulness of reduction or unity until I encountered the ideas of Sabine Hossenfelder. The idea that reduction, unity, and elegance might be holding inquiry back is a new idea that I am increasingly sympathetic to.
Something I also did not consider much was the sociology of science. It seems, at least from my outsider perspective, that the highest status scientist is a theorist who either predicts an experimental result observed decades in the future, or a theorist who synthesizes disparate observations. This value, insofar as it is true, stems from a reductionist perspective. I see reductionism being harmful to inquiry if only because it favors some scientific roles over others, theoretical over applied, theorist over experimentalist, discovery over reproducibility.
This whole article reeks of "science is BS, we need free will for humans to be real and morally responsible".
For all of its smart wording, this amounts to the same old issue Aristotle had with describing the mathematical universe in which somehow humans are different enough to warrant "higher stuff".
Is there any useful criticism of science besides pseudo-attacks always coming from anthropocentric, and usually religious, corners? Can't we just do the one, final Copernican shift and frickin' move humans from the centre?
The development of quantum mechanics suggests not. When we come up with a new field, even the brightest minds will struggle to ignore anthropocentrism, coming up with wild theories like:
> These equations seem to govern the behaviour of all our toy experiments, with multiple superimposed world states interfering with each other when they happen to transform into the same state as each other… except when a human looks at the result, whereupon all but one of the world states just vanishes!
The first person I know of¹ to propose that the world states don't just vanish – it's just that "brain that sees event X" and "brain that sees event Y" don't converge to the same state, so you don't see quantum interference when people get involved – was Hugh Everett III. This is a simpler explanation, stops quantum mechanics contradicting special relativity, solves the EPR paradox, side-steps Bell's theorem, stops God playing dice with the universe… in short, it solves every problem² except where the Born rule comes from.³
When he proposed it to Niels Bohr, he was laughed out of the room.⁴
In principle, we could probably eliminate anthropocentrism in physics models from popular consciousness entirely… but then we wouldn't be prepared for new fields, where we'd introduce it right back again.
---
¹: Okay, technically Erwin Schrödinger mentioned the idea five years earlier, but he didn't do much with it. Apart from, you know, coming up with the equation in the first place…
²: Edit to add: I didn't know about Grete Hermann's refutation of John von Neumann's proof that hidden-variable theories were impossible. Non-local hidden-variable theories still violate special relativity, but they're not impossible; many-worlds doesn't solve this problem because it isn't actually a problem. (Many-worlds is still the simplest theory I know of, but I'm less certain that it's the simplest possible theory consistent with the evidence… making this comment less relevant than I initially thought it was.)
³: Some people think many-worlds explains the Born rule. I haven't heard all the arguments, but all the ones I've heard have been wrong.
⁴: Artistic license. But Léon Rosenfeld certainly considered him "indescribably stupid" and unable to "understand the simplest things in quantum mechanics".
> This whole article reeks of "science is BS, we need free will for humans to be real and morally responsible".
For what it is worth, I did not read it that way at all, and the fact that the author is a neuroscientist suggests (though does not prove) that it was not intended as such.
To me, the article seems consistent with the view that the universe is reducible to a fundamental physics, leaving the author puzzling over why many of us feel we have more agency than this view would seem to imply. It is a reasonable issue for anyone, not just neuroscientists, to ponder.
> But subscribers to a scientific worldview often make a more ambitious claim: that the best theories are isomorphic with the fundamental nature of the universe.
This is not an "extra" claim on top of conservation laws/fundamental symmetries.
> Reductionism can be understood as a combination of (1) the claim that the intelligibility of the universe depends on the unity of scientific theories
It's strange and frankly likely just projection to say that it's the reductionists that claim the universe must be a certain way in order for it to be intelligible.
> Despite its limited usefulness as a guide to scientific practice, reductionism is a powerful cultural idea. We might call it the Lego-block conception of reality: only the Lego blocks are real, so ‘fundamental’ science involves identifying what the blocks are and how they interact, while ‘applied’ science involves discovering the right combination or permutation of blocks that accounts for the phenomenon in question.
The question of realism is separate from reductionism of fundamental law, and it's not a good sign to (deliberately?) confuse them. EDIT: Just to be clear to people skimming this stuff, I can hold either of two positions: a) your dog is real; b) your dog is not real, only quarks are real. We can debate this for as long as we'd like, but what I am not necessarily saying is that your dog's dogness corresponds to some suspension or modification of fundamental physics.
> that parts and wholes have ‘equal’ ontological priority, with the wholes constraining the parts just as much as the parts constrain the wholes.
Again, if ontology means "realism" this is a confusion, if it means the way things work, it's simply wrong or completely unsupported.
This essay feels like "philosophy lite"[0]. On one hand, it's great to introduce new eyes and ears to these deep concepts, but on the other, there's a deep historico-philosophical morass that one needs to wade through to get to the modern state of the art. There's an incredibly rich corpus discussing emergent properties[1] (the most notable of which is, arguably, consciousness), but even the SEP article (which is orders of magnitude more informative than the blog post) is quite terse.
[0] For example, putting emergentism and reductionism on two ends of a spectrum is not strictly correct. There are reductionist emergent theories out there (in both phil. of science, as well as phil. of mind).
I think the description feels clunky and kludgy because it leaves out a key aspect of what physics sees in emergent properties[1]: these are properties arising out of the interaction of an ensemble of elements. And looking at your link, it seems most "emergentists" were also specifically considering ensembles. But the article just says: "Emergence occurs when there is a conceptual discontinuity between two descriptions targeting the same phenomenon.", which seems too broad and so unilluminating.
Entities have state (including position) and ways of interacting with each other. This leads to new state in the next moment, which leads to new state in the moment after, and so on. Thus the more global state affects the future state. There’s nothing mysterious about any of this.
Usually when people make a big deal about emergence they're mistaking the map for the territory. They couldn't predict the overall state evolution just by considering the parts.
They’re thinking of their notion of the generic entity type. But all that’s out there in the territory are the entity instances with their specific state.
The parts can’t exist in a stateless fashion. It’s just that in our heads — the map — we can consider them in that fashion.
The issue is that emergence is just some arbitrarily picked meaning of something observed from a certain "distance" or from a certain perspective. Emergence is not an intrinsic property of anything, it is just a (subjective) point of view.
It is just another way of saying that things look different from different "zoom levels", hence we assign different models and/or meanings to them at each different level. But that's just our very human perspective, it is not some pre-existing/universal truth of a system.
You choose, somewhat arbitrarily, a particular level of reductionism to apply. (It seems from what you are saying that you would agree that) atoms are real, but bricks are not; bricks are what we call a clump of iron, oxygen, silicon, aluminum, etc. which is mostly stuck together in a particular shape and size. Bricks are based on a human perception of togetherness and objecthood that is completely arbitrary; there's no real delineation of the edge of a brick, nor is there a real definition of 'brick' in the first place.
All of this is (more-or-less) true. Yet, at the same time, not only can I move the slider to a place where the brick is very real (say, when it's smashing out your brains after being wrapped in a slice of lemon), I can also move it to a place where atoms are no less emergent than bricks. 'Atom' is just the name we give to a cluster of quarks in various configurations, with an electron (kind of, arguably) nearby.
Yet, atoms are a construct present in physics because they're useful in making predictions about physical phenomena. Nothing fundamental about it. And bricks are a construct in engineering and architecture because they have a certain functional and aesthetic purpose in each of those disciplines.
It's all always arbitrary. It's all always human understanding. Science is human understanding. The question is not 'is this just an artifact of human perception', the question is 'is this relevant to the questions I am asking'.
It's an issue because there are completely mutually exclusive conceptions of emergence that take you in potentially ridiculous directions, such as proposing new fundamental physics or violations of what we normally think of as causality. It has a funny way of showing up in arenas notorious for woo and mysticism, such as quantum mechanics, consciousness, or explanations of how morality 'really' works, and it seems to bring with it an entirely new idea of what it even means to explain something.
Depending on which variant of emergentism people are talking about at different times, they could be talking about something that's fully consistent with the reductionistic program, something that's not making new claims but is just about levels of description, something that's vague but not obviously wrong, or some profound revelation proposing we overturn everything we thought we knew about physics or some natural phenomena or anything in between. The stakes are either utterly trivial, or as profound as they could possibly get. And depending on who you talk to about emergence at any given moment, you'll get a confident answer that could potentially fall anywhere on that spectrum, or worse, a strange kind of passiveness where this issue is kind of handwaved away.
> In this post I haven’t really provided any references, but hopefully in future I’ll do short posts explaining where these ideas came from.
One such reference might be "Strong and Weak Emergence" by David Chalmers [0], given the fact that "weak" emergence was introduced but "strong" emergence was not even mentioned (though indirectly implied).
Philosophical theses need empirical support. The biggest issue wrt emergence is "downward causation". If there is strong emergence, it is going to be ontologically different: that's why it is called ontological emergence.
As a consequence, this new entity has to function as a causal agent. Otherwise, why populate the world with entities that don't work as causal agents? Upward causation is consistent with both ontological and epistemic emergence. Downward causation is consistent with epistemic emergence. Are there examples of ontological emergence that can play a role in downward causation?
Until then, it is all unbounded philosophical speculation. These speculations get bounded (constrained) by empirical sciences.
And to take your argument just one step further: physics operates on causal closure, and I don't know how proponents of emergentism reconcile it with causal closure.
One of the toys philosophers use to explain away anything is to divide/classify. If there is an issue with X, make a distinction between strong-X and weaker-X. First, it is strong emergence (ontic) vs. weaker emergence (epistemic). Later, it is "strong" downward causation vs. "weaker" downward causation.
If the author sees this, please post the journal club syllabus that this grew out of! Would be very interesting even without aligning references to your text. Thanks for the post.
You are most welcome! Glad you liked it. I've never logged in here before, and I am amused that someone posted this little emergence post here. I intended it mostly for circulation among friends. But you have inspired me to collect the 'syllabus' of my informal journal club. Here is an annotated reading list:
Much obliged and glad you visited. Your characterization of emergence as how we experience a jump between conceptual frameworks, rather than progression through an objective hierarchy, scratched an itch. Appreciating science as a set of frameworks floating in a sea of the unknown, rather than tiling and comprehending our world, would fix a lot of issues in science and a lot of the ongoing crisis of faith in expertise. I look forward to reading what informed the writing.
Imagine the physical world as a vast machine, something like a cellular automaton. Then you can imagine theories about the universe as 'compressions': parts of the universe that show repeating patterns can be explained by theories, which can be used to make predictions about future state. Sometimes the universe permits such theories and sometimes it does not. Sometimes it's irreducibly complex and there is no 'compression', no pattern.
The entire enterprise of human knowledge is an ongoing obligate and procrustean optimization of compression heuristics that force the infinities of space and time into a model that fits into our finite lumps of neuronal tissue.
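A toy way to see "theory as compression" (a sketch using Python's zlib; the data is made up): a region with a repeating pattern admits a short description, while an "irreducibly complex" random region barely compresses at all.

```python
import os
import zlib

patterned = b"ABAB" * 4096               # a region with a repeating pattern
random_ish = os.urandom(len(patterned))  # an "irreducibly complex" region

# The patterned data admits a short "theory"; the random data does not.
print(len(zlib.compress(patterned)))   # a few dozen bytes
print(len(zlib.compress(random_ish)))  # roughly as long as the input itself
```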
Even as someone who is more of a physicist than a biologist, that quip of Rutherford's, quoted in the article ("science consists of physics and stamp-collecting"), has always bugged me. Even though I assume that all of biology can be reduced to fundamental physics, very little of importance can be learned or understood about biological systems by doing so, and biological systems are significant physical phenomena.
Maybe I am speaking outside my comfort zone a little, but isn't this the same as trying to find physical axioms that build up the whole? Gödel disproved that ambition for Whitehead and Russell's Principia Mathematica, and if we go so far as to equate math == physics in all but name, that completely blows reductionists out of the water.
"math = physics" is true on paper, because most, if not all, physical laws are equations. The issue lies elsewhere: ontology. One can subscribe to realist ontology in physics, yet become a Platonist wrt math. One can say that "particles exist and play the role of causal antecedents" and that "sets don't exist in this world and don't play the role of causal antecedents"; that's why many mathematicians subscribe to Platonism (sets, numbers, etc don't exist in this world, but exist in the Platonic world). Same goes for the so-called 'Platonic' love that many seek in the modern world!!
You know how OpenAI trains language models with more and more billions of parameters, expecting that at some point a Skynet will emerge and yield their projected 100x return on capital, and all they get are increasingly fancy lorem ipsum generators? So that's what emergence is not.
What is the simplest system we know that produces unexpected results? Conway's Game of Life comes to mind. Would there be a surprise if we just gently shake 1 billion 2x4 lego blocks?
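For a concrete taste, here's a minimal Game of Life sketch in Python (the glider seed and grid conventions are just for illustration): the update rule says nothing about gliders, yet seeding one makes a coherent object crawl diagonally across the grid.

```python
from collections import Counter

def step(live):
    # Count the live neighbours of every cell adjacent to a live cell.
    counts = Counter((x + dx, y + dy)
                     for x, y in live
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # A cell lives next generation with exactly 3 neighbours,
    # or with 2 neighbours if it is already alive.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

cells = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}  # the classic glider
for _ in range(4):  # one full glider period
    cells = step(cells)
print(sorted(cells))  # the same shape as the seed, shifted by (+1, +1)
```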
I really appreciate the intuitive explanation that in Emergentism the whole can have the same ontological importance as the parts. Could it also be more important?
Thanks for the recommendation, just got it on Amazon! I generally buy every (interesting) book I see mentioned on HN, and here's hoping it's a good one ;)
1 - http://robotics.cs.tamu.edu/dshell/cs689/papers/anderson72mo...