This is mostly a problem in biology. Biology doesn't have parsimony. There's no evolutionary drive towards simplicity. Biology is a collection of evolved patches. It can have much higher complexity than it would seem to need.
Yet human DNA is only about 4GB. That's big, but not so big as to be beyond analysis. Without computer assistance, getting a handle on it would be hopeless, but we're past that point and making steady progress.
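Back-of-envelope, for scale (a rough sketch; it assumes the commonly cited ~3.1 billion base pairs for a haploid human genome):

    # Rough genome-size arithmetic (assumed figure: ~3.1e9 base pairs).
    base_pairs = 3.1e9
    bits_per_base = 2  # four letters: A, C, G, T
    print(base_pairs * bits_per_base / 8 / 1e9)  # ~0.78 GB packed
    print(base_pairs / 1e9)                      # ~3.1 GB at one byte per base

So a handful of GB at one byte per base, and under a GB packed; either way, well within reach of analysis.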
(How complicated is physics? Things looked good in the era of Maxwell's equations, but it's become much messier since. In physics, much is out of reach of experiment due to being either too small (superstrings, if they exist) or too big (where's the missing mass and the antimatter?). That's not an understanding problem. That's lack of experimental evidence.)
Biology does have parsimony, it's just that it's slow-acting.
Unnecessary complexity is expensive. It's cost or risk that the organism carries around, and it either imposes a metabolic cost or there's some mutation or genetic risk associated with all the additional complexity.
The difficulty is that simple edits to remove complex bits are not generally possible, or rather, unless the edit is very simple, it takes a long time, by random chance, for it to be made.
But you do see excess complexity hived off over time, all over the place: flightless birds, blind cave animals. (One case of a fish in Mexican caves relates directly to metabolic load: judging by sighted cousins outside the cave habitat, the neural and caloric expenditure of visual processing is appreciably greater for sighted fish than for the blind ones.)
The complexity constraint operates poorly, but it operates, and in evolutionary time, maladaptations are maladapted away.
To move to another domain: in human systems, a mechanism for general complexity constraints is Gresham's Law. That's a longer comment ;-)
>There's no evolutionary drive towards simplicity.
There's a drive towards smallness (as in, lower food requirements) which has probably made simplicity a lot less bad than otherwise.
>In physics, much is out of reach of experiment due to being either too small (superstrings, if they exist)
In the era of Maxwell's equations, atoms were too small. Never underestimate the power of cleverness and indirect observation! (Nowadays we can image atoms directly, but we needed to build our understanding of them elsewhere in order for our technology to advance that far.) The jury's still out on strings; although plenty of people claim that we won't ever reach the energy scales necessary to totally rule them out, we still might find something predicted by a lower-energy version, and maybe one day we'll uncover a new theoretical result that gives us an avenue we can't currently see.
>or too big (where's the missing mass and the antimatter?
That's not too big; we're expecting the answer in the form of slight asymmetries in particle interactions like the ones we're studying at the LHC.
If we look back on the development of relativity and quantum mechanics with the benefit of hindsight we see that there were already many clues in classical physics. Perhaps our current experimental evidence and current theories have enough clues to make progress, but nobody has had the right idea yet.
> "That's not an understanding problem. That's lack of experimental evidence."
I find that phrase very interesting. I don't wish to single you out, as the underlying sentiment is something recognisable, but I would suggest you can't have scientific understanding without experimental evidence.
The scientific method relies on both definition of a hypothesis and exploration of that hypothesis through experimentation. If you only have a series of untested hypotheses then at best they're a bunch of educated guesses. Verification through experimentation is at the core of what elevates science beyond idle conjecture.
You have summarized logical positivism. It holds for the physical sciences where, say, dropping a stone from a certain height has repeatability. Social sciences involve people, and sometimes even the same person responds differently to similar stimuli.
It holds for both, the difference is in what is classed as a successful result. With the physical sciences you expect the outcome of your experiment to be something repeatable, as it implies you've understood the variables well enough to do so. On the other hand, with social science we accept that our knowledge of the variables is too limited to expect repeatability, so our experiments rely on statistical correlation in order to show whether a hypothesis was worthy of further exploration.
In both cases, you need experimentation to back up your claims. Social science without experimental results is just collective dogma.
"With the physical sciences you expect the outcome of your experiment to be something repeatable." Is it? Look at statistical mechanics. Repeatable, kind of, but on statistical basis.
An experiment is just collecting data and analyzing it. Sure, you try to control for variables, but in some cases that's best done through statistical methods. The concept of an experiment as something you have to do, ie "pour two substances together into a beaker and see what happens," is just a way to collect data. Space probes have experiments for example, even though they're doing nothing but collecting data. The important part is whether your hypothesis explains the data from your experiment.
Aristotle was notable precisely because he was an empiricist. He was the first philosopher to actually observe things (mostly plants and animals). This was a departure from Platonic rationalism, which told you that knowledge of reality was knowledge of forms.
One interesting way to think about the disproportionate ratio of data to complexity is via hash functions. The output space of sha256 is finite, but because the input space is infinite, there are infinitely many collisions in the output space, despite each output being pretty much unique for anything we'd want to describe.
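A minimal sketch of that in Python, using hashlib: every input, however large, lands in the same fixed 256-bit output space, so collisions must exist by the pigeonhole principle, even though every digest we will ever compute in practice looks unique:

    import hashlib

    # The output space is fixed at 256 bits (64 hex chars) no matter
    # how big the input is, so an infinite input space guarantees
    # infinitely many collisions somewhere.
    for msg in [b"a", b"a" * 1_000_000]:
        digest = hashlib.sha256(msg).hexdigest()
        print(len(digest), digest[:16] + "...")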
DNA is like sha256: it's generated by the function life4g(m, f), and it's a self-mutating reverse-hasher that creates/finds new inputs, with the goal of finding an input that hashes to itself. Also known as live().
The goal of live() is to reach live(life) = <live source code>, or adjust(live). There is an illusion that there is an input life that is an isomorphism of what living is all about, the so-called "meaning of life". However, this is only because it is forgotten that the input space is infinite, and that infinitely many such isomorphisms exist.
This seems to be a problem of constructing approximate mental models to explain "emergent" properties of complex systems composed of interacting simple elements (such as atoms). I would like to think that such explanations aren't always necessary, so long as a computer simulation of the simpler (and well understood) elements can reproduce the emergent behavior. It seems to be a limitation of human working memory rather than some theoretical limit to comprehension.
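For a toy version of that idea (a minimal sketch using Conway's Game of Life, my example rather than the article's): the per-cell rule is trivial and fully understood, yet gliders and oscillators emerge without being coded anywhere, which is the sense in which simulation can stand in for a mental model:

    import numpy as np

    def life_step(grid):
        # Count the 8 neighbours of each cell (toroidal wraparound).
        n = sum(np.roll(np.roll(grid, dy, 0), dx, 1)
                for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                if (dy, dx) != (0, 0))
        # Trivial local rule: birth on 3 neighbours, survival on 2 or 3.
        return (n == 3) | (grid & (n == 2))

    # A glider: no rule mentions "glider", but it crawls anyway.
    grid = np.zeros((10, 10), dtype=bool)
    grid[1, 2] = grid[2, 3] = grid[3, 1] = grid[3, 2] = grid[3, 3] = True
    for _ in range(4):
        grid = life_step(grid)
    print(grid.astype(int))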
We are narrative creatures, and have no problem constructing approximate mental models. The problem is building useful and correct models which can be used for prediction.
The great thing about gravity is you can plug a couple of masses into an equation and know exactly what will happen. On the other hand, we have lots of shorthand for what the brain is doing (e.g., "memory") and lots of just-so stories about how those things might work (e.g., "like a stick of RAM"), but those stories rarely give a full picture of what's going on or help predict how to improve things, like memory, or cure Alzheimer's.
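For the gravity case, the recipe really is that short; a sketch with Newton's law, F = G*m1*m2/r^2:

    # Newton's law of universal gravitation: plug in two masses and a
    # distance, get the force. No just-so story required.
    G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

    def gravitational_force(m1, m2, r):
        return G * m1 * m2 / r**2  # newtons

    # Earth-Moon attraction, approximately:
    print(gravitational_force(5.972e24, 7.348e22, 3.844e8))  # ~2.0e20 N

Nothing remotely this compact exists for "memory".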
For many complex systems, simple observation is a problem. With a brain, you can't observe things in fine detail without killing the host. So we can't see how local phenomena interact to create complex emergent behavior - observing the local means suppressing the global. And while we have big global observations from MRIs, and tiny local observations from cutting out individual neurons and applying electricity, there's a huge middle ground in between.
Another problem is that global emergent phenomena can change the way the local systems behave... We tend to imagine knowledge progressing from the tiny systems or to the big systems, but we see feedback in the other direction too, for example through hormone release. Which, again, is difficult to observe when isolating tiny parts of a system.
Think of it like this. You've got a dog. Your dog is very smart. She knows her name, she does tricks, she has a pretty apparent understanding of several spoken words, tonal inflections, and body language, and can even catch a frisbee out of the air when you throw one. However, no matter what you do, no matter how hard you or she tries, your dog will never understand calculus. A dog is simply not capable of understanding what calculus is, or what it is for, or how it describes relationships of objects in Newtonian physics. There is a fundamental physiological limit to her intelligence, and topics like calculus are beyond it.
Humans are much more capable than dogs. However, humans are still physical beings. As such, they have physical limits to brain size, and physical limits to brain activity. We also have a limited life span, so if something exists which might be comprehensible to a 200 year old human, we may never know it. It is most probable that there are concepts, topics, and systems that are simply too complex for human intelligence to understand even if we had access to the equivalent of a Calculus For Dummies.
Worse, human brains are limited to their senses, and scientific understanding is contingent on both observation and repetition. A system can only operate on the data it has available to it. If something exists which either cannot be perceived or cannot be repeated, then we have essentially no capability of even knowing that we don't understand it. The fact that some of the most complex systems have been understood though abstraction, encapsulation, and modeling doesn't mean that all possible systems will behave that way or can be understood that way or, frankly, even that the knowledge we gain about systems that way can truly be called "understanding." How often do we say, "It wasn't until I experienced it myself that I understood"?
As far as AI, well, we don't even know if they're capable of consciousness or sentience, let alone sapience. They may just be arbitrarily complex systems good enough to simulate human behavior and fool a Turing test. No, passing a Turing test doesn't mean you're conscious or sentient; it means you're not definitely non-sentient or inanimate. And if the basis of everything an AI knows is what they learned from us, using algorithms we decided to give them, what might they never be capable of, just because we're the ones who taught them?
TLDR: The universe is not obligated to function in a manner comprehensible to human intelligence.
I've always liked the dog analogy ever since I came across it in Steven Weinberg's book Dreams of a Final Theory:
"...it may be that humans are simply not intelligent enough to discover or to understand the final theory. It is possible to train dogs to do all sorts of clever things, but I doubt that anyone will ever train a dog to use quantum mechanics to calculate atomic energy levels."
The impossibility of dogs doing Calculus is an equally nice example, but I think we could push the point further. Calculus is not only hard and unnatural for dogs; it is hard and unnatural for humans as well. There were anatomically modern humans equipped with culture and language for something like 30,000 years before even one of them understood calculus. It's truly extraordinary that this thing which was at the absolute limits of the human intellect in the 17th century is now taught routinely in high schools and colleges across the world, and used for such mundane purposes as building bridges or stock trading.
Those inclined to optimism could read in this a story about how humans are not like dogs, because culture. (i.e.: "culture allows us to consolidate and simplify the understanding of previous generations so future generations can build on it towards almost limitless horizons", etc.). But I'm a pessimist: calculus is still damn hard for nearly all human beings (just ask your average college student). Heck, I found it pretty tough to learn and I ended up getting a masters in math. And - to get back to the article - I'm not convinced that current topics at the edge of human understanding today (string theory, or whatever) are ever going to get 'mundane'. It takes many years of special training and extraordinarily single-minded dedication to even get to the current frontier of knowledge in these areas. At some point, as the article says, perhaps the math just gets too hard.
Fortunately, just because the universe might not be comprehensible by humans (we never get to a successful theory of quantum gravity, or whatever) doesn't mean science itself will have an end. There's plenty to do in other fields.
I find it odd that everyone is presenting the analogy as “humans are to understanding the final theory as dogs are to understanding calculus.” Why not just use “the final theory” for both second terms?
In other words, do we expect a dog to ever understand the final theory of physics? Surely not. Do we expect a 2 year old human to ever understand the final theory of physics? Still, presumably not.
False; mostly because a simple analogy can make one understand, at least slightly, topics of which one has no prior knowledge; dogs cannot understand even the simplest of analogies, mostly because they have no language.
And even if that argument isn't good enough: the dog has no tools to escape its constraints; humans do. We are messing with our brains constantly looking for a way to make them better, be it by gene therapy (DNA mods), chemically induced (drugs) or hardware implants (artificial neurons).
As I read it, the point of the dog story is really to suggest that there are probably limits to understanding at every level of intelligence. Sure, humans have a lot of cool tools for thinking about the world that dogs don't have (analogies, language, symbolic thinking, whatever). But it's only a kind of happy accident that they have also made us pretty good at doing science. Is it really so unreasonable to think that there might be things about the physical world that we can't understand with these tools?
> we are messing with our brains constantly looking for a way to make them better, be it by gene therapy (DNA mods), chemically induced (drugs) or hardware implants (artificial neurons).
What makes you so sure humans can bootstrap their way into ever higher levels of intelligence, ad infinitum? Sure, it's possible to take that view, but I would say it's incredibly optimistic. And not one of those ideas you mention has yet made humans any better at doing science (well, maybe 'drugs' - scientists do love their coffee).
The set of things humans are in principle capable of understanding at least slightly is covered by the analogy about dogs. Try to imagine the relative complexity of concepts, for example how much more complex the concept of calculus is than the trick of sitting on verbal command. Now imagine some concept that is as much more complex than the most advanced human math/physics as calculus is than the sit trick. There is some scale of complexity at which analogies break down: even though they convey some concept familiar to the audience, its connection to the actual matter is so vague and tenuous that you haven't actually explained anything. There may be facts about the universe that are completely counter-intuitive, the understanding of which depends on a regression of trillions of other counter-intuitive facts and processes.
We already see computer-assisted mathematical proofs heading in the direction where there are simply too many steps for a human to understand. And that's merely us scratching the surface of the tip of the iceberg. At some point, I believe, computers will generate new maths such that not only is the proof incomprehensible, but the result is too. There is no reason to believe that the universe is simple enough for humans to understand everything.
The ability to have mental models requires some form of perceptive learning, as far as we know. This has seeped into AI study.
An analogy is just a described mental model. A "thing that is thrown" is something a dog can understand. I throw a ball or a bone or an orange mouse toy, it makes no difference to the dog... unless I "fake throw" or perform a "magic trick". I think a dog would likely understand some part of calculus, given a sufficient number of neurons and a way to use them optimally. The research on injecting information into a monkey's brain is particularly tantalizing toward that end.
A mental model is the primary tool necessary to escape constraints e.g. the ingenuity of Crows. Without external pressure or evolutionary pressure, humans haven't observed most animals get measurably smarter.
You have a good point, but analogies are not proof. A dog simply does not have the capacity for analysis or decomposition, which is how we understand anything complex -- we just keep breaking it down until we understand the constituent parts. Then we understand the interactions between those parts. I think that's enough to "understand" pretty much anything. Unless of course you can think of a mental faculty that you think we're missing.
> The universe is not obligated to function in a manner comprehensible to human intelligence
True, except it's exceedingly unlikely to be so. The SKI calculus, which can represent any effectively enumerable function, no matter how complex, is itself incredibly simple.
Complexity trivially emerges from simplicity, so the odds that most of the complexity we see around us is irreducible seem very low.
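To make "incredibly simple" concrete, here's a minimal SKI reducer sketch (terms as nested tuples; three rewrite rules are already enough for Turing-completeness):

    # Minimal SKI combinator reducer. Terms are 'S', 'K', 'I', or
    # application pairs (f, x). Three rewrite rules give universality.
    def step(t):
        if isinstance(t, tuple):
            f, x = t
            if f == 'I':                                  # I x -> x
                return x, True
            if isinstance(f, tuple):
                g, y = f
                if g == 'K':                              # K y x -> y
                    return y, True
                if isinstance(g, tuple) and g[0] == 'S':  # S z y x -> (z x)(y x)
                    z = g[1]
                    return ((z, x), (y, x)), True
            f2, changed = step(f)
            if changed:
                return (f2, x), True
            x2, changed = step(x)
            return (f, x2), changed
        return t, False

    def normalize(t, limit=1000):
        changed = True
        while changed and limit > 0:
            t, changed = step(t)
            limit -= 1
        return t

    # S K K behaves as the identity: ((S K) K) a -> a
    print(normalize(((('S', 'K'), 'K'), 'a')))  # prints: a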
> I would like to think that such explanations aren't always necessary, so long as a computer simulation of the simpler (and well understood) elements can reproduce the emergent behavior.
But that just switches the limit to the limit of our computing power. Which is a pretty strict limit. For example, we can't construct computer models to predict the properties of all the elements in the periodic table just from knowledge of the underlying laws of electrons, protons, and neutrons.
> For example, we can't construct computer models to predict the properties of all the elements in the periodic table just from knowledge of the underlying laws of electrons, protons, and neutrons
Yet. And the upper bounds on available computing are not actually known. Well, not strictly true, as arguments like the Bekenstein Bound suggest that information density that gets too high will collapse into a black hole.
But typically, computing properties with precision involves exploiting symmetries and other properties to eliminate impossible cases from consideration and enjoy a shortcut to the answer.
I think we can, if we get to the layers below those particles. i.e. if we develop full mathematical descriptions of quarks and electrons, and the behavior of the four fundamental forces (strong, weak, electromagnetic and gravity), I think a computer simulation will be able to predict the macroscopic properties of any element.
> we can't construct computer models to predict the properties [...] from knowledge of the underlying laws
Hmm, or can we? This gave me the idea of trying to extract "emergent laws" using machine learning.
There should be a way to build an AI that tries to find rules that approximate behaviour of complex systems. I'm sure there's lots of work on this already.
> This gave me the idea of trying to extract "emergent laws" using machine learning.
This is not what I was talking about. I was talking about a case where we know the underlying laws (the electromagnetic and strong interactions), but the computer models that we would construct from those laws would take more time than the age of the universe to run on our current computers and give us useful output.
Using computers to try and infer underlying laws, not currently known to us, from data is a different thing, and I believe there is indeed ongoing work in this area.
But my point is that if we can construct approximations on top of underlying laws that are cheaper to run, then the outcomes of any complex model could be approximated.
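A minimal sketch of what I mean, with made-up numbers: run the "underlying law" (independent random walkers) and fit a cheap macro-law on top. The fitted law here is ordinary diffusion, MSD = 2*D*t, which nobody coded in:

    import numpy as np

    rng = np.random.default_rng(0)

    # "Underlying law": 10,000 independent walkers taking +/-1 steps.
    steps = rng.choice([-1.0, 1.0], size=(10_000, 500))
    paths = np.cumsum(steps, axis=1)

    # "Emergent law": fit mean squared displacement as MSD = 2 * D * t.
    t = np.arange(1, 501, dtype=float)
    msd = (paths ** 2).mean(axis=0)
    D = np.linalg.lstsq(2.0 * t[:, None], msd, rcond=None)[0][0]
    print(f"fitted D = {D:.3f}")  # ~0.5 for unit steps per unit time

Once fitted, the cheap formula predicts the expensive simulation's behaviour without re-running it.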
I agree. To model any complex system, it only really requires that a person or a group of people understand, and are able to codify, their knowledge of the system for a given period of time.
Just think of your average decently sized code base. There's enough complexity there such that any single human can't understand it at a given time, save for a very high-level explanation.
Does this mean that we don't understand the code base? I don't know if that's the right way to think about it.
Except that there is no emergent property of code. It is the sum of its logical parts, no more, no less. Any emergent value requires our conscious design.
> Except that there is no emergent property of code. It is the sum of its logical parts, no more, no less
Why do you think emergent properties are not the sum of their parts?
If I may simplify, there are generally two classes of emergence, metaphysical and epistemic. The former claims that systems are more than the sum of their parts, that some magic happens when some combination of things are brought together. Few people in science or philosophy take this position seriously.
Epistemic emergence is more about what approach to use to learn about a system. Science is typically reductionist, and it's very successful. However, some properties would be difficult to discover this way because they are only observed in complex interactions between multiple entwined subsystems.
For a simplistic example, consider a scenario where you know nothing about physics, but are given unlimited time and unlimited tools to separately test the properties of oxygen molecules, and hydrogen molecules. But, you can never test them together, only separately.
Given these constraints, it seems virtually impossible to infer the property that H2O is transparent to visual light. But this property is easy to see when studying them together.
Epistemic emergence would hold that we maybe couldn't learn everything about hydrogen and oxygen by studying them separately, or at the very least, doing so isn't always the most effective means to learn about them. But ultimately, the properties revealed of H and O belong to them, and aren't metaphysical.
And I asked for clarification. The rest of my comment describes the various positions on emergent properties. So which is it you intended? The fact that it's code we're talking about isn't relevant.
The most obvious limits are observational limits and comprehension/interpretation limits. This first is determined by the tools available, and thus technological and engineering advancements extend this limit. Of course, there are inherent difficulties that accumulate as we try to peer further and deeper, and I think it is likely we hit some practical barrier that is virtually impossible to get past. If for example, a thousand years from now, we need the energy from a million suns to fully verify that we have reached the bottom layer of physics, we may never be able to pull that off.
Comprehension and interpretation limits seem fuzzier and trickier to speculate on -- but they definitely exist. We are running up against many of them right now with the complex systems we attempt to model. It seems that software tools and AI developments are the most obvious ways to continually push this forward, but there may be some deep difficulties and pragmatic barriers lurking here as well.
There is a lot of room before we start hitting serious limits and overwhelming pragmatic difficulties. Once we have a highly utilized Dyson swarm, then we can reevaluate.
When wise guy is used as an insult, its meaning is close to that of smart arse, in that it implies someone is acting smart to belittle others. Whether that's the intention or not, that's what's being implied. If you take the ego boosting aspects out of it, then the term 'wise guy' is not derogatory.
Didn’t Gödel and Turing and Church and Heisenberg definitively answer this question in the affirmative for most of the first half of the 20th century?
There exist infinitely many questions, about the physical world and in math alone, that one can ask whose answers are “we cannot know within the context of these axioms”.
Somewhat by definition, these questions are generally somewhat uninteresting, but there sure are a lot of them.
Gödel's theorem applies to mathematical systems of axioms, and I don't think it's clear how it translates to physics. To take an extreme example, you could answer any physical question about a universe that's made up of a single particle traveling in a straight line, even though doing so requires a mathematical system complex enough for Gödel's results to apply.
On the other hand, there are a number of interesting undecidable questions (in ZF, the axiom of choice and the continuum hypothesis are examples).
> To take an extreme example, you could answer any physical question about a universe that's made up of a single particle traveling in a straight line, even though doing so requires a mathematical system complex enough for Gödel's results to apply...
...using current models. It's not clear to me that such models are ultimately needed.
Something being infinite doesn't mean it has no limit (I guess "boundary" would be a better term). If, for instance, it were impossible to answer any question about the strong interaction, I'd clearly call that a limit to our understanding, regardless of the infinity of things we can say about black holes.
In computability theory, Rice's theorem states that all non-trivial, semantic properties of programs are undecidable. A semantic property is one about the program's behavior (for instance, does the program terminate for all inputs), unlike a syntactic property (for instance, does the program contain an if-then-else statement). A property is non-trivial if it is neither true for every computable function, nor for no computable function.
I think it’s worth noting that Rice’s theorem and the undecidability of the halting problem apply to arbitrary programs. In practice, the programs that humans write are very much non-arbitrary. Moreover, sometimes we’re happy to trade a bit of expressive power in exchange for stronger guarantees—like using a total/strongly-normalising language in which the answer to “Does this program terminate for all inputs?” is always “Yes”. In a similar vein, most real numbers are uncomputable, but we pretty much only care about the computable ones AFAIK.
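And for anyone who hasn't seen why the arbitrary-program case is hopeless, the classic diagonalization fits in a few lines. A sketch (`halts` is hypothetical; the whole argument is that it cannot exist):

    # Sketch of Turing's diagonalization. `halts` is a pretend oracle;
    # the point is that no correct implementation can exist.
    def halts(program, argument) -> bool:
        """Pretend: True iff program(argument) terminates."""
        raise NotImplementedError

    def diagonal(program):
        if halts(program, program):
            while True:  # if halts says "halts", loop forever
                pass
        return None      # if halts says "loops", halt immediately

    # diagonal(diagonal) contradicts whatever halts(diagonal, diagonal)
    # returns, so no total, correct halts() exists for arbitrary programs.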
Depends on your point of view. If something is not knowable even in principle, then maybe the question was wrong, like "how many angels can dance on the head of a pin?".
A better answer is perhaps that there are things that we could in principle find out, but cannot because either:
1. We can't do the experiment in practice, such as building a particle accelerator much bigger than the LHC, or factoring a 100000 digit number.
2. It's too complicated and there's no simple description in terms of macroscopic variables, such as the earth's ecosystem.
Not always; the unknowable information in quantum mechanics is simply not there. It's not that there is no way to get at the information in practice, it's not even that there is no way to get at the information in theory. It's even stronger: Bell's theorem proves that the information can't be there: simply assuming that the information is there already leads to conclusions that contradict experimental facts. The idea that there ought to be information comes from erroneously applying our everyday intuition to quantum mechanics.
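Concretely, in the CHSH form of Bell's theorem, for correlations E between measurement settings (a, a') and (b, b'):

    S = E(a,b) + E(a,b') + E(a',b) - E(a',b')
    |S| <= 2          (any local hidden-variable theory)
    |S| <= 2*sqrt(2)  (quantum mechanics; the Tsirelson bound)

Experiments observe |S| > 2, up to about 2*sqrt(2). So assuming the information "is there" doesn't just fail in practice; it makes a quantitative prediction that nature violates.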
I would lump most questions about biological systems into this category, yes. (This is my research area.) Biological systems become intractable really fast, and deterministic chaos rears its ugly head more than you might expect.
The game isn’t “can we understand this fully”, normally. It’s more like “can we understand enough to usually be able to fix things when systems are egregiously out of whack”, and that we have a shot at.
I don't think astronomy is easier than biology. It just happened that we were successful at calculating some things. But were we successful at, say, flying a rocket to Alpha Centauri? Curing the common cold is not a calculation.
Apples and Oranges.
That being said, there is no limit to scientific understanding if we accept mathematical models (and maybe probabilistic ones). The trouble with the world is that we try to understand everything with our simple senses: feel of gravity, size, picture, touch, etc...
Case in point, I was trying to explain to a friend that he should ditch the idea of a physical atom. An atom has no shape. It has volume and mass but that's it. It makes no sense to think that the nucleus is spherical or rectangular. At these scales, the geometrical reality is not a reality.
But he still insisted that he wants to see what one "looks like". That's how we make sense of the world.
That's quite some stuff I'm not aware of. I know that we learn the shape of the atom by interacting with it, but I didn't know that we have that much precision at predicting something more complex.
The article mostly argues that some concepts are too complex for humans to understand and that only through computers or post-human intelligence can we discover further scientific truths.
But the scientific process is independent of human intelligence. So, more interesting is: given an infinitely intelligent organism, can that organism use the scientific process to completely understand the universe?
Or does science have a flaw that limits its ability to explain certain truths?
Arguably, there are already many concepts that we as a society understand deeply but any individual human is completely incapable of grasping. Certainly, no single human could possibly hope to understand the whole sum of modern human knowledge; collectively we already constitute an intelligence billions of times more complex than a single human. In fact, there's a good argument to be made that the scientific process can't even work on the scale of a single human because it would be destroyed by bias; it's entirely in the notion of how you communicate and provide evidence for your theories to other people that the objectivity in science arises.
When it comes to whether any intelligent system could understand the entire universe, I think at some point we start reaching the level where the terms of the question start to break down. What if we required an intelligence that spanned the entire universe in order to understand the universe? Is there maybe a sense in which we can say that the universe taken as a unit necessarily understands itself already? We start to rub up against very fundamental epistemological concepts that might not hold when we're considering a system vastly different from a human being.
The fundamental question here is what this understanding would actually entail. Surely describing how things work will always be an approximation, a model in a way.
I mean, consciousness is obviously real. As conscious beings it is fundamentally the only thing we actually know is real. The confusion there comes from the fact that we can't prove our own consciousness to each other and we poorly understand what actually constitutes it.
How do you know for certain that the scientific method works? You only know it via anecdotal evidence, because you can't use the method to test itself, since that assumes a priori that it works. So science cannot be used to investigate certain questions about itself. Instead they fall into the topic of philosophy of science, and have no definite answers.
The limit to scientific understanding could maybe be described as this:
You're looking for explanations in the format P(x1, x2, x3, ...) -> R, where P is a "function" (the model of your theory), x1, x2, etc. are measurable factors, and R is a result.
Now, you can't measure all the factors, but you can have a good set of them that explains a result closely.
Hence the question is: are there phenomena for which you can never measure enough of the x's (or even find good proxies for them) to get a good prediction of your result? And that's not even going into the problem of establishing the theory in the first place.
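A minimal sketch of that limit (coefficients made up purely for illustration): generate R from three factors, but fit a model that can only measure two of them. You recover the measurable part, and the missing x becomes an irreducible error floor:

    import numpy as np

    rng = np.random.default_rng(1)

    # "Reality": R depends on three factors, R = 3*x1 - 2*x2 + 5*x3.
    x = rng.normal(size=(1_000, 3))
    R = x @ np.array([3.0, -2.0, 5.0])

    # Our model only measures x1 and x2; x3 is an unknown unknown.
    X_measured = x[:, :2]
    coef, *_ = np.linalg.lstsq(X_measured, R, rcond=None)

    residual = R - X_measured @ coef
    print(coef.round(2))             # ~[ 3. -2.]
    print(round(residual.std(), 2))  # ~5.0, the footprint of the missing x3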
In physics you can repeat experiments multiple times, few experiments are unethical and you can have very exact measurements.
Now compare this with medicine. Also compare general predictions for a whole population with precise predictions for one specific individual. How many x's do you think affect the risk of cancer, the risk of cardiac disease, or just the risk of a weird mutation that changes risks slightly (and that medicine hasn't even heard about)?
Basically what foolrush is saying is that humans have a desire to understand their world through stories, which leads us to invent believable stories even if they don't line up with reality.
The suggestion is also made that Foucault had a different explanation for how scientific advancements are made, and that explanation suggests that the progress is guided by the society the scientists find themselves in.
Personally, I don't believe either is completely accurate. Whilst I agree that storytelling is a fundamental part of the human condition, which can lead us to jump to false conclusions, in general scientific knowledge is often built up iteratively, we build on what we already think we understand to explore what we don't. Society may influence which frontiers we're most likely to explore, but you can't decide on the impact of new discoveries, and what other new frontiers they unlock.
Okay, I haven't read any Foucault yet, but just thinking it through myself, it seems self-evident to me that there is no separation of the self from the society that self finds itself within. Much as mind/body dualism makes zero sense to me, there can be no truly conscious self to understand and pursue science without some semblance of societal structures.
Of course the society steers science, and vice versa.
Thank you for the analysis. I suppose I need to read some Foucault to form an opinion. My only exposure to him is his tangential relationship to Foucault's Pendulum by Umberto Eco, which I read years ago.
While most heavy rationalists insist that Foucault is full of rubbish, it is pretty easy to see how his vantage has a significant degree of merit, even if one entirely ignores the transmission of ideology.
If we look to "science" today, we can see that it relies heavily on funding. In fact, so much so that "science" can't move without it in some instances. So where we have markets and capitalism, we can see that the thrust of science is driven by such; some "science" is crafted, other "science" is utterly ignored or starved. So even in a superficial way, we can begin to take note as to how our perception of what "science" is has a contextual element from within the society it is birthed from.
Foucault's primary gist, if you can survive reading his work in English, is that the very ontologies are fundamentally flexible and yield to the ideological underpinnings of a given time. One must historicize to see "science" clearly, and indeed he does just this in the work. Others have written since then, including discussions of the recent creation of "science" as we know it today, which isn't nearly as ancient as some would have others believe[1].
It can be an incredibly difficult tome to wade through, but one that comes with significant reward if you manage to parse the work.
The trite example I listed _is_ obvious, yet as obvious as it is, some folks insist that “science” is this free thinking process.
Foucault of course is much more expansive than my trivial example, covering the notion of institutions as ideological enforcement, among other things.
No fool on Hacker News can summarize his work in a reply, and one would be heavily encouraged to read it. It cuts right to the core of epistemology itself.
Apologies if I misunderstood the example. I will be reading some of his work. It sounds like I may be in agreement with his premise. Thanks for pointing him out to me.
In hindsight, I truly believe that not only were his theories several generations ahead of their time, but also likely a critical body of work regarding epistemology that would relate to developments in AI etc.
Science is a process to find objective truths. It relies upon consistency, if a counterexample is found to a theory that theory is considered incorrect. New theories must match all existing data. Einstein's theory of General Relativity is a good example of this; it replaces Newtonian gravitation in extreme circumstances but provides the exact same results in the limit around everyday energies.
Science cannot create objective understanding of purely subjective systems. The issues surrounding consciousness and qualia are of this sort, while everyone experiences their reality they are inherently outside the domain of objective truth. The closest science can get is to consider them as "emergent phenomena" and give up on explaining the issue.
The problem is the frame of reference of "subjective." For example, few would question whether or not coat color on cats is an explainable phenomenon, although it varies. So, why isn't qualia similar? Subjective experience is a property of a biological system, so it would seem that the claim that subjective experience is not scientifically explainable at some level would be a claim that biology is not explainable.
I agree with your sentiment at some level, in that I think the explainability of certain things is at least open to question, or should be questioned, but I think the subjective/objective distinction is misleading or misguided because from some frame of reference, subjective is objective.
The bigger issue, maybe, which the article touches on, is the problem of emergence.
One definition of emergence is basically that the complexity at one scale of analysis becomes so extreme that you have to move to another scale of analysis. I.e., emergence is associated with unavoidable information loss, where what is random at one scale is nonrandom at another, but predicting from one scale to another is impossible. It's kind of a measurement horizon, to borrow a cosmological metaphor: your measurements at one scale become so complex to model as a system at some point that you have to simply remeasure at a different scale.
I think this is a more immediate pressing problem with science, that there may be some kind of information-theoretic limits to explainability across scale in complex systems. It's something that the reductionistic push kind of misses: just because something is physically reducible, and logically necessary, it's not necessarily informatically reducible, and logically knowable a priori (to borrow from the philosopher Kripke).
There are no "objective truths" (that's metaphysics).
Science is a process to find which models/hypotheses make more accurate predictions and have more explanatory power.
>It relies upon consistency, if a counterexample is found to a theory that theory is considered incorrect. New theories must match all existing data.
That (a popular but naive epistemology) doesn't match how the scientific process actually works.
New theories can have counterexamples or be unable to match all existing data and still be successful in providing a handier model for more phenomena. Theories might even be inconsistent with other theories (e.g. QM and relativity) and still co-exist and successfully grow while searching for a unification factor (whether that's successful or not).
New theories can succeed (and historically have been known to) even when not accounting for all existing known facts predicted by earlier ones. In other words, more explanatory power doesn't necessarily mean a complete superset. The intersection might not cover 100%.
And that's just for hard sciences. For soft sciences, from economics to social sciences, it's even fuzzier.
I think saying that science doesn't arrive at objective truths means defining "objective" to be a word that nobody would ever use.
Give me a number, and I can put that many nines after the decimal place of percent certainty in my result. For each nine, I have to run my experiment longer. Infinitely many nines -> complete certainty.
So, in practice, when we're choosing between fact and fiction, what we're really doing is putting our theories into a bucket and letting them duke it out until there's only one voice left telling us how many struts our bridge needs to be built with. The beauty of science is that whatever poetry, rousing manifesto, or brilliant connections you want to pit it against, you can always look at them and order up exactly as many nines as it takes to beat them. The certainty of science is finite, but unbounded: so it's the best we've got.
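The "ordering up nines" picture matches how sampling error behaves; a sketch with a Monte Carlo estimate of pi, where the error shrinks like 1/sqrt(N), so each extra digit of certainty costs roughly 100x more samples:

    import numpy as np

    rng = np.random.default_rng(2)

    # Monte Carlo estimate of pi by throwing darts at the unit square.
    # The error scales like 1/sqrt(N): more nines, longer experiments.
    for n in [10**2, 10**4, 10**6]:
        pts = rng.random((n, 2))
        pi_hat = 4 * ((pts ** 2).sum(axis=1) < 1).mean()
        print(f"N={n:>9,}: pi ~ {pi_hat:.4f} (error {abs(pi_hat - np.pi):.4f})")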
>Theories might even be inconsistent with other theories [...]
There's a nice philosophy-of-science trick you can do if you want QM and GR at the same time. Just look at the error bars on the experiments that support the two theories, and propagate them down into error bars on the theories themselves. That leads to theories being statements of knowledge over appropriate ranges (and to appropriate precisions), making them "true" in an absolute sense. (If you do this with Newton's classical mechanics, you will realize that QM was not an overturning but an allowed-for refinement.)
I think the person who you are responding to took objection at a philosophical level, namely, it's simply not the case, however robust the scientific results, that you have direct access to the 'pure nature of the world in itself'. That is a conceit of the correspondence theory of truth that hasn't been thought credible since the linguistic turn. There is no thought independent of the language we use to represent the world, and that language is rooted in and conditioned by history. Hence Kuhn's The Structure of Scientific Revolutions. Of course, that doesn't practically undermine the value of scientific investigation.
>There is no thought independent of the language we use to represent the world
This is debatable[0]. Presumably, people without language can still think, and even if you believe that they invent an internal language to think with you are placing language into a tool-role, where it can be created and destroyed in service of a higher goal (thought). However, that still leaves the problems with correspondence unexplained: but hopefully I can convince you that they are not too bad.
I'll sketch it for you:
Coordinate systems are all equally good by the measure of how well their predictions work. They're obviously constructions - but that doesn't itself imply that the models that use them are constructions, any more than the wrappers burgers come in make their meat paper. (I mean, the models might be constructions too, but it's not the use of arbitrary coordinate systems that makes them so.)
Specific expressions of models are all equally good by the "correspondence" measure as well: anyone with even a grade-school math education should be able to recognize A=BCD and I=JK,K=QP as distinct in the literal sense but identical in some higher one. So, once again, it's obvious that we are looking at different ways of expressing the same thing; a thing which hasn't yet been proven not to exist, even though no way of expressing it can by itself be more than just "a different way."
So, one step above algebra and two steps above specific calculations, there's the concepts. From a scientific perspective unless you think the brain is supernatural it's really to be expected that ideas are no more than symbols. BUT: for the same reasons as the two cases above, it has yet to be shown that the concepts are not themselves dancing around a still higher truth, each (effectively true) idea differing only in the baggage of being human; that silly but fun phrase referring to how we like symbols but are presumably using them to mean something.
>This is debatable[0]. Presumably, people without language can still think, and even if you believe that they invent an internal language to think with you are placing language into a tool-role, where it can be created and destroyed in service of a higher goal (thought).
That's a moot point, I think, as it can be argued easily that thought is a language itself, or requires one (whether it's a human language like English, or a language of symbols, or some other form of structured description of events and thoughts, even if the structure just happens at the chemical level in the brain).
>Specific expressions of models are all equally good by the "correspondence" measure as well: anyone with even a grade-school math education should be able to recognize A=BCD and I=JK,K=QP as distinct in the literal sense but identical in some higher one. So, once again, it's obvious that we are looking at different ways of expressing the same thing; a thing which hasn't yet been proven not to exist, even though no way of expressing it can by itself be more than just "a different way."
It's obvious in these examples, which, coincidentally, are all from math (algebra, coordinate systems), the quintessential non-empirical, non-reality-based domain. It's easy to find isomorphisms like that in math, since that's inherent in its core purpose.
It's not obvious (or true) for the general case, talking about the outside world.
I'm not entirely sure that I understand what your argument is, but I think you're saying that different concepts represent the same ideas. I don't think concepts are distinct from ideas, so I'm not convinced by that. When the Greeks used the concept of democracy they were not accessing an ethereal plane outside of themselves. They just invented the concept and social practice in the context of their own historical reality. This is simply not the same concept as that used in the French Revolution, although there are important historical connections between the two. I don't know how you would go about arguing that, despite the fact that these are different concepts, they are somehow, unbeknown to themselves, referring to some idea that stands outside of space and time.
I do think that there's a mind-independent world that gives us basic referents to represent. But the world is not cut at the joints (it doesn't come self-divided into identifiable objects) and is not self-interpreting (it does not represent itself to us). Thales, the first philosopher, certainly had a concept of water - he thought that everything in the universe was made of water. But despite the fact that the referent is the same, his concept of water is markedly different from our own. There is not an unmediated correspondence between the concept and the referent; hence the historical variability and contingency.
It's nice to see Kuhn get a shout out; just occasionally I wish that the HN community at large was as well-informed on philosophy (and history) of science as we apparently are on all manner of technical topics. In fact one of the things that annoyed me about the original article was its complete lack of acknowledgment* that there are, you know, actual philosophers of science who might have thought about this issue before.
As an aside, I do think it's worth mentioning that the propositions:
> it's simply not the case, however robust the scientific results, that you have direct access to the 'pure nature of the world in itself'.
and
> There is no thought independent of the language we use to represent the world
are really completely separate. I heartily disagree with the second (Wittgenstein?), and heartily agree with the first (any philosopher who's not a naive realist?) :)
---
* Martin Rees is a well informed guy, and I'm sure he knows what philosophy of science is and has maybe even read some of it, but apparently he didn't choose to mention it. Sure, maybe that would be getting a bit too heavy for The Atlantic, but still... grrrrr...
They are separate in the sense that one can believe the first and not the second, as you do, but not in the sense that they have no connection. The second is, albeit crudely put here, a common starting point for reaching the first.
And yes, I think Wittgenstein's Philosophical Investigations is the most profound book that I have ever read. My view of the world was transformed upon reading it, though it took a couple of years to properly digest.
I agree that a lot of the tech community is neither well read in philosophy (or the humanities in general for that matter), nor believes that they should be. Sometimes that creates wonderfully naive attempts to think of philosophy as one would code, but in general, it leads to a really one-dimensional view of the social world.
It's true. But I do get the sense that at least the Hacker News community is more curious about and open to philosophy and the humanities than the tech world on average, and certainly more so than other online tech communities that I enjoy not reading (Slashdot, Reddit, etc.). Of course there is the usual subset of people who think all work in the humanities is either gibberish or part of some sort of monolithic vast Stalinist-identity-politics conspiracy, and a much larger number of well-meaning but naive people who seem to think they can resolve big social, historical, and economic questions much as they would debug a piece of Javascript. But time and again I'm genuinely surprised by the kinds of things that make it to the front page, and by how the comments often go deeper than the original article. There was even a sub-thread on this thread where people were bringing up Foucault's The Order of Things. Not bad for comments on a piece of naive pop-philosophy.
I'd say it's more accurate to say that science doesn't arrive at objective truths so much as shared truths. It arrives at truths that as many scientists as possible can agree upon. Consensus, after all, is the highest state a scientific theory can aspire to.
Consensus has nothing to do with science, which can be done by a sole individual completely separate from society. You can follow the scientific method just fine and discover new phenomena and nobody might believe you, but you still did science.
Getting consensus involved in the philosophy of science is usually done as a direct assault on the idealist's question of, "what if this is all just your elaborate hallucination?"
When I'm asked that I just say, "begone, dream-foulers." Then all of the idealists in the universe disappear.
"emergent phenomena" isn't quite the term for this.
An emergent phenomenon is simply a phenomenon that arises from the interaction of components, and so can't be explained purely by reduction to components.
> we'll reach the limits of what our brains can grasp.
But different brains are good at different things. There will never be a line where everyone can grasp everything on one side, and no one can grasp anything on the other.
If one in ten brains can grasp something, there might still be academic departments that study it. But if only one in a thousand brains can grasp something, what do these people look like to the rest of us?
What if the concept you need to grasp requires you to hold an extremely high number of "gateway" concepts simultaneously in your short-term memory? There ought to be a limit at which point no human can grasp the concept. And there would still be a lot of unexplored ideas on the other side.
The article and comments mention possible limits of understanding due to limitations on working memory: there could be a problem with so many moving parts that a human cannot keep all of them in consideration at one time.
But people solve problems with huge numbers of moving parts all the time by breaking them into smaller modules.
I wonder if the actual limit is due to human lifespans instead. There are certainly problems and sciences and systems that are understood through 12 years of primary education, followed by an undergraduate degree, increasing specialization in a masters degree, specialization and new discoveries beyond that in a doctoral degree, and then postdoctoral studies and professorial research far into the middle and end of a human lifespan.
Perhaps there are problem domains which require 100 years of study and education to comprehend before useful new research can be done.
I think we're a long way off from reaching that limit. In research, fundamental discoveries can take a genius many years of his career, but once something has been discovered, the basic theory can often be taught to an undergrad within a few months, and adjacent discoveries will improve our understanding and make things easier to explain. Textbooks explaining it in 5 different ways will be written, our tooling improves, etc. etc.
You don't have to be Fleming to grow penicillium; you don't have to be Einstein to calculate relativistic corrections for GPS satellites.
Eventually such diminishing returns might be reached if everything remained the same, but you also have to consider that IQ and lifespans are slowly ticking upwards over the decades and eventually we may be able to build a better researcher.
I agree that modularization and division of labour reduce the limits otherwise imposed by human working memory. I'm not sure the same isn't true of lifespans; namely, what kind of thinking cannot be comprehended well enough to be tackled within one lifetime? I know of nothing (and this may be ignorance) that takes more than a PhD to comprehend. (Of course, one wants to do more than comprehend, but one can begin to contribute once one comprehends.)
I was thinking more of the limits on humans' ability to internally represent processes that we do not observe in quotidian life, and of problems which require such leaps of creative genius (like Einstein, but 100x) that we cannot hope to traverse the chasm of our own stupidity in that singular moment of discovery.
I think an additional limit is data. Particularly for sciences with strong path dependence, such as ecology, cosmology, evolutionary biology and (my field) geosciences, time has simply erased nearly all of the record of previous events. This is both a big problem (it is quite frustrating) and an opportunity, as we progress through both discovering new records of the past, and developing new techniques for analyzing existing records through new or improved proxies, instrumentation, or theory. I think it's obvious to the practitioners that we will never know the vast majority of what we want to know, but we can learn enough of what we need to know to keep moving forward.
I always wondered: what if the comprehensibility of physics is like a game of Minesweeper? You solve a problem and you get clues that help you solve the next problem, which gives you more clues. Sometimes you get stuck and you have to continue on some other side of the maze, which ultimately brings you back and helps you solve the stuck position.
But sometimes the game of Minesweeper is unsolvable: you are stuck, the clues you got so far are not enough, and you will never get new clues. Can this happen in reality? I don't know. To make things worse, we will never be 100% sure that we didn't miss something; if we get stuck, we will never be sure.
Martin Rees is very inventive: he postulates limits to scientific understanding and immediately demonstrates them, using himself as the exemplar.
Rees states, "Big things need not be complicated either. Despite its vastness, a star is fairly simple—its core is so hot that complex molecules get torn apart and no chemicals can exist, so what’s left is basically an amorphous gas of atomic nuclei and electrons."
OK, Martin, in that case I expect your prediction for the exact number and distribution of sunspots on Sol for every day of 2018 on my desk in the morning. A star is fairly simple, no? So what's the holdup?
It's kind of like the argument against trying to understand the brain by modeling one: OK, so let's say you have a perfect model; all that's left is to understand your model.
Or, another way I've heard it explained: if we were trying to understand Mario and could perfectly emulate its circuits, would that translate into us understanding the concept of saving the princess? Probably not; we'd probably venture a guess that collecting coins is the most important thing in the game.
I was disappointed that the article doesn't mention machine learning. To me it's the perfect example of both the limits of our understanding, and how we can overcome them: Even though we don't understand how AlphaZero learns to master the game, we can use it to do so.
"Science is not and cannot be a quest for a complete knowledge of the universe. Rather, it is a process whereby certain information is selected as being more relevant to human aims and understanding."
There are many interesting comments pro and con on the question. Over a recent period of time, I have had a good and thoughtful discussion with a nuclear physicist and educator about models in science. He challenged me to look further into the subject, which I have.
My statement to him was that "all models are wrong, but some are useful". Since that discussion, having looked into the mathematics, the experimental evidence, and various opposing theories in particular areas of science, I am now even more in agreement with the statement that "all models are wrong, but some are useful".
Our understanding of the universe around us is not only limited, but, we humans (all of us) get caught up in the dogmatic belief of our underlying premises. This works to limit our understanding about any area.
The tools we use to investigate the universe around us are just that: tools. Whether we use machine learning and computers, or space probes, telescopes, high-energy colliders, electron microscopes, rulers and meters, these tools are only useful to the extent that we do not ignore the experimental results.
It is very obvious that many scientists ignore questions that challenge their "pet" models. There are no stupid questions, just inappropriate answers. There is much data that has been collected that does not fit the standard consensus models and thinking.
Whether or not an alternative model is useful, is not the problem here. It is the immediate response of anger that you should question the "dogma" that, in the end, limits our ability to further our understanding of the universe around us.
I, personally, do not believe in major standard models in use today. I find that there are significant problems with those models in explaining the world around us and leave too many things ignored. Do I have an alternative model - No. But that's okay. I am not required to supply an alternative model, I just have to point out inconsistencies in the models and evidential data that is being ignored.
Far too often, reputations are considered more important than inconsistencies. Consensus more important than problems in the models. If a question is raised that disputes a particular model and it cannot be answered simply then it might be a good idea to look for an alternative simpler explanation.
There are ideas that are completely off the wall, but if investigated properly, we can discount appropriately and in that investigation we may find further areas for study that would not arise otherwise.
Why should there be a limit? Excluding cases when information is lost (or is somehow inaccessible in principle), everything can be investigated, learned and understood - given enough time and effort.
Yes, indeed there is a limit. However, this line of limit is the optimal line we (currently) have between "understanding everything" and "accepting every claim made".