"Always ask whether the hypothesis can be, at least in principle, falsified. Propositions that are untestable, unfalsifiable are not worth much."
This. That's a good way to put it. I've mentioned Fred Hoyle's line on that, "Science is prediction, not explanation" (which is from one of Hoyle's novels), but Sagan's line is better.
Key point: unfalsifiable theories do not lead to useful technology. Engineering requires predictability.
This is a problem with shaken baby syndrome, which asserts that infants presenting a certain pattern of medical findings (mostly subdural and retinal hemorrhage) MUST have been violently shaken if no other alternative medical or accidental explanation is found. It is a diagnosis by default, which raises a number of legal issues. [1, 2]
This medical diagnosis is "certain" even with no witness, no admission of violence, no antecedents of violence, no sign of trauma on the baby's body. There is absolutely no way to falsify the idea that this gesture occurred (and no way to prove one's innocence: it's a presumption of guilt). Yet this theory is called "scientific" by its proponents, even though it leads to no predictions, only explanations.
That shaking must have occurred on a given child at a particular moment, while the only person present with the child denies any violence, is an untestable and unfalsifiable proposition that is totally anti-scientific.
Falsifiability is critical for engineering, but not all science is for engineering. The further you get from engineering, the fuzzier the line between science and non-science.
We try to apply the tools of science to things like astrophysics and paleontology, but they're practically never going to have engineering consequences. That's fine. It's great that we want to know stuff just because we like knowing -- even if "knowing" sometimes gets hard to define.
Falsifiability is a great tool for being able to say some things don't work. But it's important not to let it limit curiosity.
If you can't explain how the statement could be falsified, at least in principle (although it may not be practical to do so), then you are admitting that reality will always be exactly the same regardless of whether your hypothesis is true. And in that case, it's really not useful or meaningful for any sort of reasoning at all.
A concept which will in no way ever make a difference is meaningless.
MWI of quantum physics is most likely unfalsifiable. But if I believe that there are other versions of me that will definitely win the lottery if I am the sort of person to enter it in particular circumstances then that may well change my behaviour.
I would argue that Occam's razor is unfalsifiable but is still a great principle.
> I would argue that Occam's razor is unfalsifiable but is still a great principle.
Occam's razor is provable, in the sense that it's a heuristic based on probability. Of two alternative hypotheses explaining the same observations to the same degree, the simpler one is more likely to be true.
That is not what Occam's razor means. It doesn't say anything about probabilities. It says that if two models give the same predictions, you should choose the simpler. If the models make the same predictions, neither can be said to be more true than the other. But if there is a difference in falsifiable predictions, this can be used to determine which model is best - but then Occam's razor does not apply. The most correct model wins, whether or not it is the simplest.
> It doesn't say anything about probabilities. It says that if two models give the same predictions, you should choose the simpler.
Which is where it becomes probabilistic. The answer to what it means for one model to be "simpler" than another is part of information theory, and that is essentially the flip side of probability theory. Information is (unsurprisingly) counted in bits, and one way of defining "a bit" is as amount of information that cuts your uncertainty by half.
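To make the "bit" framing concrete, here's a minimal sketch (the function name is mine, not standard terminology): the information gained is the log-ratio of how many equally likely possibilities you had before versus after.

```python
import math

def bits_gained(prior_outcomes: int, posterior_outcomes: int) -> float:
    """Information gained, in bits, when a set of equally likely
    outcomes is narrowed from prior_outcomes to posterior_outcomes."""
    return math.log2(prior_outcomes / posterior_outcomes)

print(bits_gained(8, 4))  # 1.0 -- one bit cuts your uncertainty in half
print(bits_gained(8, 1))  # 3.0 -- pinning down 1 of 8 outcomes takes 3 bits
```

In these terms, a "simpler" model is one that takes fewer bits to specify among all the models you could have written down.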
> If the models make the same predictions, neither can be said to be more true than the other. But if there is a difference in falsifiable predictions, this can be used to determine which model is best - but then Occam's razor does not apply. The most correct model wins, whether or not it is the simplest.
The point of applying the razor is that, if there's no evidence to support one model over the other, your best bet is to choose the simpler one now, because when new evidence comes to light that favors one over the other, it's more likely to favor the simpler one.
> The point of applying the razor is that, if there's no evidence to support one model over the other, your best bet is to choose the simpler one now, because when new evidence comes to light that favors one over the other, it's more likely to favor the simpler one.
I highly doubt that. The history of science has plenty of examples where more complex hypotheses turn out to be more correct. E.g. Einstein's relativity is more complex than Newtonian mechanics, but nevertheless better matches observations. The current model of the universe is far more complex than the Ptolemaic model, the periodic table is more complex than the four elements, etc.
Occam's razor applies if two models make the same predictions, and therefore neither can be more correct than the other.
> The history of science has plenty of examples where more complex hypotheses turn out to be more correct. E.g. Einstein's relativity is more complex than Newtonian mechanics, but nevertheless better matches observations.
Yes, but that's after people made observations that couldn't be explained by Newtonian mechanics. If you were living at the time the latter were formulated, and someone came to you with relativistic equations, showing that their results all line up perfectly with the Newtonian ones, but otherwise offering no example of divergence, nor any explanation of why they chose these particular equations - you'd be right to call them a kook and ignore them. After all, there would be no way to distinguish between the Newtonian formulation, Einstein's formulation, and an infinite number of other formulations that also give the exact same results.
Ockham's razor makes sense only if the two models in question, which today make the same predictions, can also make divergent predictions that you expect to be testable in the future. The Razor tells you to stick with the simpler one, because it's most likely to remain the best model. The more complex model has more moving parts, requiring more bits of information to identify among many possible variations of the same or greater complexity - bits you don't have, because if you had them, you could use them to disprove the simpler model.
In other words: the Razor worked for Newtonian mechanics despite Einstein, because a contemporary of Newton couldn't just randomly come up with the exact formulation of relativity we use today, given the evidence available then. So whatever theory they proposed, it would overwhelmingly likely be wrong - as in, it would make bad predictions where Newtonian mechanics still made good ones.
> Occam's razor applies if two models make the same predictions, and therefore neither can be more correct than the other.
So to reiterate: Ockham's razor is future-facing; it applies to theories that can generate divergent predictions, but which you can't test just yet. If you expect the two models to always give the same predictions, now and in the future, then... they're literally the same thing, just expressed in different ways. There is no meaningful difference there, and you may just pick whichever one you like more, or whichever is easier to work with.
Maybe you are thinking of a different principle? What you are saying sounds reasonable, but it is just not Occam's razor.
Occam's razor literally just says that you should eliminate unnecessary entities from an explanation. It is a philosophical principle and not a statement about the natural world.
But the statement "the simpler of two theories is most likely to be true", on the other hand, is itself a hypothesis about the world which can be examined empirically. But I very much doubt this has been proven true for real-world scientific theories. Perhaps for randomly generated theories it would be true, but theories are not typically generated at random. To follow the theme of the article - how would you falsify this hypothesis?
> Occam's razor literally just says that you should eliminate unnecessary entities from an explanation. It is a philosophical principle and not a statement about the natural world.
"Eliminate unnecessary entities" is a nice way of saying "minimize information-theoretic complexity" without having the formal framework to express it in. The intuition behind "counting entities" points in the right direction.
As for the philosophical part - once you take a piece of philosophy seriously and try to refine it into purity, it tends to turn into either a mathematical theorem or a natural-science hypothesis.
> But the statement "the simpler of two theories is most likely to be true", on the other hand, is itself a hypothesis about the world which can be examined empirically. But I very much doubt this has been proven true for real-world scientific theories. Perhaps for randomly generated theories it would be true, but theories are not typically generated at random. To follow the theme of the article - how would you falsify this hypothesis?
This is exactly what information theory and probability theory deal with, among other things. They give a formal framework to define a measure of simplicity for a hypothesis, and to study its relation with probability of a hypothesis being correct. That framework can deal with correlated hypotheses just fine. So to answer your final question: when you strip away the vagueness, Occam's razor becomes a mathematical theorem, which you can prove or falsify using the tools of mathematics.
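To make that concrete, here's a minimal sketch of one common formalization (an MDL/Solomonoff-style prior; the function name and bit counts are illustrative, not measured quantities): each extra bit needed to state a hypothesis halves its prior probability.

```python
def prior_weight(description_length_bits: float) -> float:
    """MDL/Solomonoff-style prior: each additional bit of description
    halves the prior probability assigned to the hypothesis."""
    return 2.0 ** -description_length_bits

# Two hypotheses explaining the same data equally well,
# one needing 10 bits to state, the other 12:
simple = prior_weight(10)
complex_ = prior_weight(12)
print(simple / complex_)  # 4.0 -- the simpler hypothesis starts out 4x more probable
```

This is just the quantitative version of "unnecessary entities cost you": every extra moving part must pay for itself in predictive accuracy before the more complex hypothesis catches up.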
As for whether there is any reason for mathematics to apply to the real world, this ultimately follows from the basic axiom that the universe around us follows rules that we can infer from observations. If you accept it, you can use the tools of mathematics - including Ockham's razor - to understand the world. If you don't - well, if that axiom is wrong, then reality becomes completely arbitrary - nothing makes any sense whatsoever, nor can it ever make any sense, and we're better off giving up on the whole "thinking" thing, moving back into caves, and spending our days hunting and gathering and finger-painting stick figures on cave walls.
It is important to understand the limitations of logic and maths. They are fundamentally tautological systems - they can’t give you genuinely new knowledge about the world. Logic can’t tell you if purple swans exist or whether planetary orbits are circular or elliptical. You need empirical evidence for that.
So information theory might give you a measure of the complexity of a theory, but can't (on its own) say anything about whether it is true or not.
But hypothetically there could (as you suggest) be a correlation between the complexity of competing scientific theories and which theory turns out to best match evidence. But is there any empirical evidence that this is the case? Just because it would be nice does not make it true, and it is not something you can prove mathematically.
I see how it makes sense, given it comes from medicine - the very field dedicated to literally the single most complex system we know of: the human body. But I don't see it as an opposite to Occam's razor - rather, a missing lower bound. That is, I see Hickam's dictum as a reminder that some hypotheses may be too simple.
My guess at how one would formalize this is, when you're comparing hypotheses explaining something in a given domain (such as "behavior of things being thrown", or "health of a human body"), there is a level of complexity inherent to the domain. A hypothesis that's so simple as to fall below that level is too simple - it doesn't have enough bits to express what's happening within the domain. The further below the complexity threshold it is, the more likely it is to be falsified by new evidence. In contrast, hypotheses above the domain complexity level are all capable of explaining the domain fully; however, the more complex a hypothesis, the more likely it is to be at least partially wrong.
This gives us the following takes:
- Occam's razor: for hypotheses above the domain complexity threshold, the least complex one is most likely to be true.
- Hickam's dictum: your hypothesis is way below the domain complexity threshold - which you didn't notice, because you don't appreciate how complex the domain is in the first place.
Reconciliation:
- The closer a hypothesis is to the domain complexity level, the more likely it is to correctly explain new evidence. The best hypothesis matches the complexity of the domain. Above it, hypotheses gain superfluous parts, which are either redundant (unlikely), or wrong (very likely). Below it, hypotheses are always wrong - they're too simple to account for all possible predictions, so new evidence will eventually falsify them. The tricky part is - even though we both postulate hypotheses and define their domain, we tend to hand-wave the latter a lot, so in some cases (like medicine) we may not realize that our hypotheses are too simple.
This is an interesting way of thinking about it, but then it becomes essential to determine what the 'domain complexity level' is for any domain, and the possibility of unending argument over what that level is will almost certainly destroy any value that either dictum or razor have.
But it can be hard to know whether a concept will make a difference in the future. String theory may, for example, provide some testable hypotheses at some point. Theory building is important, and original thoughts may have unpredicted consequences some way down the road.
I like Sagan, but he didn't make these rules for a post-truth world, unfortunately. There were certainly well-funded lies in his day, but right now we are seeing massive investment, easily rivalling a large tech corporation or a Disney media empire, in creating "news" orgs whose specific goal is to lie, muddy waters, sow discord, and otherwise mislead the public, and who are seen as independent bodies of verification.
It’s much more difficult now to find real facts, which is actually insane given that real facts should be available with the click of a button these days.
I think the increase in spending on spreading/amplifying certain kinds of information is because there's a greater depth and breadth of information available now that people can and are using to derive their own opinions.
Influencing people was less expensive when media/news/information/etc was relatively limited.
Yeah, but what Sagan is trying to give is a framework for forming opinions based on reality. While it’s a good start, it’s missing imperative steps like “if Fox News is your independent verification, it’s probably a lie”.
As stated, we are in a post-truth world, and applying a framework from Sagan's era for forming opinions with a factual justification is much more difficult.
H. R. McMaster credits Putin's people with that concept. Until recently, propaganda was promoting your position. Russia came up with the breakthrough of just promoting extremists of all stripes to reduce the credibility of all news. This is a form of tactical misdirection. "To sow confusion and reap inaction", as Willie Sutton, the bank robber and prison escapee, put it.
It wasn't really feasible at scale before the Internet, because it takes a large number of anonymous sources to make it go.
This is correct; falsifiability was originally conceived of by Karl Popper. I would also add that, while Popper had the best of intentions and good ideas in creating falsifiability theory, it is not generally accepted today by people who study philosophy of science. The consensus, which I agree with, is that it doesn't reflect how science actually works and it eliminates many things that we do consider science. I mean, many people may disagree with string theory, but it's hard for me to accept that it's not science at all. Evolution also didn't pass Mr. Popper's tests, but he eventually recanted, not by changing his views on how evolution fits with his theories but just because his friends convinced him that it was "really important."
Popper made contributions to science in that he was one of the main thought leaders that began the field of philosophy of science. That said, I don't think falsifiability by itself is a very good criterion for science or truth.
I don't think the Wikipedia article on this is very good. The encyclopedia of philosophy has a much better overview of his theory and where it stands currently.
> I mean, many people may disagree with string theory, but it's hard for me to accept that it's not science at all.
Science is a method, not a thing. I think most scientists, including those looking into string theory, would agree that the hypothesis is not science. It's a hypothesis. Maybe not even that, depending on whether or not you think it's falsifiable. But either way, it's an intriguing idea that scientists are looking at. That's a legitimate part of the process as well.
I always cringe when I hear evolution brought up as some invented method or mere developmental happenstance. It's a cold, unavoidable consequence of variation, selection and inheritance, in the absolutely most general way possible.
The universe evolves, life just found a way to make it quicker. Then we invented sexual dimorphism to make several selections within a single generation. Our human intelligence allows us to evolve ideas at an even faster rate.
The effects of evolution are in the realm of science, but evolution itself is the superstructure that science, and everything else, has "evolved" in. It's time.
I don't agree it's the simplest. Creationism is actually simpler and that's why it's appealing to so many people. To be clear, I don't support Creationism, I'm just counter arguing.
I suppose it may seem simpler because we tend to anthropomorphize god, so "god simply snapped his fingers and everything came into existence" angle doesn't sound too complicated.
But then you start asking questions like where did god come from and why did he do what he did and the simplicity starts to fall apart.
Really, the moment the words "there's an omnipotent being that..." are uttered, simplicity goes out of the window.
Am I misunderstanding something about falsifiability? In principle, evolution can be falsified by observing no incremental development of species over time.
Falsifiability just means there's some possible way for it not to be true?
He thought it was untestable because it was based on a series of unique, one-time, unrepeatable events in the distant past.
Parts of the theory like Mendelian genetics are testable but on the whole... Evolution happens so slowly that we can't observe it or test it in a lab.
To me it seems like, with Popper, inquiry into how species came to be can never fit into science. It's a much narrower view than what we commonly understand as science today.
Of course evolution is falsifiable, as are the big bang theory and the iron core theory.
Falsifying a theory does not require reproducing it in a lab. It means that the theory predicts something that you can actually observe, or fail to observe and thus disprove it.
For instance, we can falsify the theory like this: let's grow bacteria and subject them to some poisonous chemical. The theory predicts that, after many generations, if the colony survives, it should become resistant to that poison.
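As a toy illustration of that prediction (all parameters here are made up for the sketch, not measured biology), a crude mutation-plus-selection simulation shows resistance sweeping through a surviving colony within a handful of generations:

```python
import random

def generations_to_resistance(pop_size=1000, mutation_rate=1e-3,
                              kill_rate=0.9, max_gens=1000, seed=0):
    """Toy model: a resistance mutation arises at random; the poison
    kills susceptible cells at kill_rate but spares resistant ones.
    Returns the generation at which >99% of the colony is resistant."""
    rng = random.Random(seed)
    resistant = 0
    for gen in range(1, max_gens + 1):
        # Mutation: each susceptible cell may independently gain resistance.
        susceptible = pop_size - resistant
        resistant += sum(rng.random() < mutation_rate for _ in range(susceptible))
        # Selection: the poison kills most of the susceptible cells.
        survivors_s = sum(rng.random() >= kill_rate for _ in range(pop_size - resistant))
        total = survivors_s + resistant
        # Reproduction: survivors regrow the colony, keeping proportions.
        resistant = round(pop_size * resistant / total) if total else 0
        if resistant >= 0.99 * pop_size:
            return gen
    return None  # the colony never became resistant under these parameters

print(generations_to_resistance())  # resistance typically fixes within ~10 generations
```

The point is the falsifiable shape of the claim: if surviving colonies routinely failed to become resistant, the prediction, and with it the theory, would be in trouble.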
Here is an example of a theory that is not falsifiable: after they die, people's souls go to a place that is impossible to observe when you are alive.
Falsifiability is about "impossible to observe", not about repeatability.
This is just a factually incorrect claim by someone who clearly has no background in biology.
Evolution is frequently observed in the lab, particularly in organisms that reproduce quickly and prolifically, as most microorganisms do. You might want to google "antibiotic resistance", which is literal evolution creating a public health hazard.
Evolution has withstood well over a century of experiments. It's the inevitable consequence of the facts of inheritable mutations and natural selection. Biology doesn't even make sense without it.
Perhaps you meant to say "science doesn't work by p-hacking and data falsification". The great thing about science is that it involves a process of building models and hypotheses, and then running experiments to produce results that support the models and hypotheses. To the extent that p-hacking and data falsification produce results that are not correct, longer term they will have a difficult time distorting our understanding of the underlying phenomena. Short-term, certainly they can be distracting, but we care about reproducibility because scientists are constantly trying to reproduce (possibly by building on) older results. Results that cannot be reproduced do not contribute to science.
I think you're right to identify two distinct things in the air, but it's the other guy doing the mixing. What science is and the extent to which that is approximated in practice are indeed two things. Neither is inherently more real or true than the other, or necessarily more right, depending on the context.
But I take this particular context to be about Sagan's comments on distinguishing truth from "baloney" via science by expressing principles that best express the scientific ideal. I don't take that to be "corrected" by appealing to science as practiced, which is where the mixing up of the two comes in.
I think there's a subtlety which isn't implicit in what you say. There is the ideal of science - which I think there's a case to be made that it's not something that can be fixed, but constantly evolves - and the practice.
But I think there's also a separate distinction between good faith and successful enough attempts to do science, and people just gaming the system. The fact that there's plenty of the latter must be acknowledged (in fact, I think the people who discovered this current set of issues and made a big deal of it are from the same group of academics as the ones abusing this loophole, and also, it's likely to be a perennial problem), but the former also exists and is not the same as the abstract ideal you bring up.
I'm not sure I would characterize Sagan's system here as specifically best expressing the scientific ideal. I think it's a great list to think carefully about to help with clear thinking, but I'm not sure applying the label of 'science' to all these points is the right way to think about them, although I'm not sure it isn't either.
I've been wondering if falsifiability is misunderstood, based on what a few people have commented about it. The alternative version is only that if claims are made that a system has made specific predictions that proved to be true, then it should be falsifiable on that basis. This was Popper's criticism of Marxists claiming Marxism was scientific: they would constantly claim that Marxism could predict what had already happened, but it's bogus to claim this in retrospect; they repeatedly claimed predictions for occurrences only after they had occurred. Is this the same as demanding every theory be falsifiable?
Sean Carroll talks pretty positively about string theory. I think he paints the picture that although the popular view is that it's not falsifiable (or not yet), and therefore somewhere between very suspicious and junk, actual theoretical physicists are much more positive about it.
Popper did have the idea of combating communism's claim of being a science or scientific materialism. And that's a pretty noble goal in my mind. His heart was in the right place.
I don't think Popper was going for that soft falsifiability - but if you modify his theory to do that, say it has to make some predictions, and only require that it be falsifiable on the basis of its predictions, you let in a lot of stuff that obviously isn't science.
To take an obvious example using exactly what Popper was trying to oppose: the current Chinese Communist Party's claim that capitalism will eventually transition into socialism once a certain level of development is achieved is a prediction, and it will be falsifiable later. Clearly it is still not science.
But even if we give Popper the falsifiability angle and imagine there is some workable version of falsifiability, I still don't think his theory works; it's just not a good reflection of day-to-day science and scientists.
> To take an obvious example using exactly what Popper was trying to oppose: the current Chinese Communist Party's claim that capitalism will eventually transition into socialism once a certain level of development is achieved is a prediction, and it will be falsifiable later. Clearly it is still not science.
I think this is the point on this issue, right? It can't count as falsifiable later, or still not science, unless they describe what they mean by "socialism" specifically enough in advance - which I think is a pretty nebulous constraint? Not sure about it. Definitely, there's a lot of (reasonable IMO) theoretical controversy about what 'welfare capitalism' has to do with proper socialist ideas, even though it's usually given the label 'socialist'.
I've also been wondering things like what working scientists do, and what this thing is that we call science generally - non research related stuff like teaching, or using science to do better engineering, and what the connection is between the two.
Is there still a place for any variation of falsifiability?
Bonus cheeky request: do you have any recommendations on modern philosophy of science?
Feyerabend is the top dog right now. I wouldn't recommend investing too much in him; he basically thinks there is no demarcation and that voodoo and Einstein are equally valid.
I accept Quine, who I admit isn't much better; he thinks it's all about creating a coherent world view. I think he's onto something, but he's missing truth, which I think is a problem: we want to think science gives us truths about the world, or at least we want a way to get to truth.
There's another view that says science starts with a set of untested assumptions, which I haven't gotten around to reading much about.
I like some of what I think I understand about Feyerabend's ideas, but I didn't get on with his actual books. I expected something ethnographic-y or observation based, but only found incredibly abstract stuff.
I listened to a few interviews with Liam Kofi Bright, and his ideas about what truth is are pretty interesting. The perspective of 'giving us truths' or 'getting to the truth', is unsatisfying to me.
I think there has to be a core of some kind of 'predict what will happen when we do something, then see how close we got, tweak it in response to these observations, then repeat'.
How can we disprove the hypothesis “If we can’t disprove it - it isn’t science.”?
The scientific metaphysic relies on so many declarative/prescriptive statements which are themselves exempt from the criteria for science and are thus self-defeating on their own terms.
It is so peculiar when scientists are so dogmatic about science.
Are the formal sciences (logic/mathematics/computer science) not science? The testability/falsifiability criterion certainly excludes them from being sciences.
> How can we disprove the hypothesis “If we can’t disprove it - it isn’t science.”?
That statement isn't science. It's a definition. It's philosophy of science. It's the briefest summary of Karl Popper's definition of the scientific method. According to him, science can never be proven, only disproven.
In this context, most of computer science is more a form of applied mathematics.
Of course there are different ways to look at science, like making a distinction between analytical (or empirical) science, and synthetic science; the science that makes stuff, rather than analysing it. Not sure if that's really a good distinction; the latter is really technology, isn't it?
There is such a thing as computer science, but the majority of what gets called that is really engineering, not science. People often get those two things confused because they have a fair bit of overlap in the Venn diagram, but they are two different things.
Math is itself indeed not science. It is the language of science. It follows different rules than empirical sciences. But note that word "empirical" there; Popper was really only talking about empirical science, and according to him, that was the only real science. You could argue that there are non-empirical sciences.
Another problem with Popper is probably that outside of physics and chemistry, there are a lot of less exact sciences where predictions and refutations of a theory are never that clear cut. Like his issues with the theory of evolution.
Ultimately, I guess science is also simply "getting to stuff that works by trial and error".
How much engineering could we do without Mathematics? How much commerce?
I don't see it as exclusionary. You won't find many scientists in doubt about the fact that everything they do is built upon Logic and Mathematics, in addition to observation.
But don't we need a word to group fields that try to systematically describe, understand, and make predictions about the physical world? (Rather than seeking to explore and characterise idealised logical constructs?). What would you suggest?
You may not see it as exclusionary but many people do. Just look at the comments!
It's precisely the grouping I am talking about.
If you group science in such a way so that logic/mathematics/computer science falls outside the group then isn't that an erroneous grouping?
Isn't that a silly definition?
True and False are idealized logical constructs. It's the idea; and the idealization of the notion that there is a difference between Truth and Falsehood. Or if you want to get biblical - there is a difference between Right and Wrong.
We need a grouping to make it clear that some fields produce theories and others produce theorems.
We need theory-producers to be more humble and provisional in their statements. We need theory-producers to forever remain open for their theories to be falsified or refined (whilst not being paralysed by doubt about theories that have stood the test of time). In other words, we need a slightly different culture.
But we also need a way to rebut someone who says "OK, but can you prove we're not living in a perfect simulation of reality with a fabricated history that was created yesterday?". In science, the rebuttal is "No, I can't prove that; science depends on falsification rather than proof. Can you suggest a way I could falsify it? If not, then I'm going to get on with my work, because it doesn't make a difference to my field either way."
We who? Don't "we" also need a grouping to make it even clearer that some fields can't produce any falsifiable theories if other fields don't produce at least some unfalsifiable theorems? A terra firma of sorts.
It's like a dependency graph. Or something.
Your insistence on "making a difference" seems to echo the sentiment of many pragmatists:
It is astonishing to see how many philosophical disputes collapse into insignificance the moment you subject them to this simple test of tracing a concrete consequence. There can be no difference anywhere that doesn’t make a difference elsewhere – no difference in abstract truth that doesn’t express itself in a difference in concrete fact and in conduct consequent upon that fact, imposed on somebody, somehow, somewhere, and somewhen. The whole function of philosophy ought to be to find out what definite difference it will make to you and me, at definite instants of our life, if this world-formula or that world-formula be the true one. --William James
Does falsifiability make any difference? If something is only falsifiable in principle (i.e. in theory), but not in practice, then is it really falsifiable? On pragmatism, it's not a difference that makes any practical difference. And yet you insist on differentiating. Why?
Is "All humans are mortal." falsifiable or unfalsifiable?
It sure is falsifiable in theory, but unfalsifiable in practice. Any living human is potentially immortal until they actually die.
Any running process is potentially non-halting, until it actually halts.
If falsifiability doesn't make a difference in practice (and it doesn't!) then I guess we can all get on with whatever scientific discipline we are busy practicing.
So, I'm going to carry on my life knowing at least one unfalsifiable scientific truth: the theorem known as The Halting Problem.
It's not even wrong, because it's right.
Anybody who insists the Halting Problem is falsifiable (even in principle) is welcome to solve it in principle.
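To make the "unfalsifiable in practice" point concrete, here's a minimal Python sketch (function names are my own): running a program can only ever confirm halting; hitting a timeout leaves the question undecided, it never demonstrates non-halting.

```python
import threading
import time

def halts_within(fn, timeout_s):
    """Observe whether fn halts within timeout_s seconds.

    True means we *witnessed* halting. False does NOT mean non-halting;
    it only means the question is still undecided at the deadline.
    """
    done = threading.Event()
    t = threading.Thread(target=lambda: (fn(), done.set()), daemon=True)
    t.start()
    t.join(timeout_s)
    return done.is_set()

def quick():
    pass            # halts immediately

def slow():
    time.sleep(60)  # halts eventually, but well after our deadline

print(halts_within(quick, 1.0))  # True  -- halting was observed
print(halts_within(slow, 0.2))   # False -- undecided, not "non-halting"
```

No choice of timeout fixes this asymmetry: observation can verify "it halted" but can never verify "it will never halt".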
> Don't "we" also need a grouping to make it even clearer that some fields can't produce any falsifiable theories if other fields don't produce any unfalsifiable theorems?
Sure. And I suspect a subset of pure mathematicians would want terminology to make clear that they produce theorems out of intellectual curiosity rather than because they have any regard for whether those theorems can be applied by other fields. Fortunately we can categorize things in multiple ways. I'm open to suggestions on the semantics, but something more widely understood and less clunky than my own theorem/theory-producers would be good! Perhaps "Natural Sciences" or "Empirical Sciences" might be more specific terms for fields that produce theories, if you like.
I differentiate simply because it seems possible to do so. And as I said, because it's worth considering whether different processes and cultures are useful. I'm intrigued as to why you object so strongly.
I am afraid my intellect isn't quite up to the application of scientific principles to the philosophy of science itself this morning. I'll have to think harder about whether that's even a valid thing to do.
I don't think you've shown that falsifiability makes no difference in practice. The fact that it's possible to come up with some borderline or problematic examples (which themselves aren't terribly practical) doesn't mean it's not a useful criterion for a scientific theory. Falsifiability is a valuable filter for ideas that the natural sciences are not able to speak to. String theory has been criticized as unfalsifiable. I think a good string theorist would accept that it's a serious accusation that requires an answer.
To be honest I'm quite happy to say "All humans are mortal" is not a well-stated scientific theory. "Human lifespan is limited to 180 years" is better, as it may one day be falsified.
It's pointless to speak of usefulness without specifying a utility function.
It is just as possible to differentiate as it is to integrate.
If it is determined a priori that unfalsifiable propositions are not useful, then knowing the result of the Halting Problem is not useful. Isn't that silly?
I strongly object to categorizations which discriminate against valid science (knowledge? truth? understanding? reasoning? Useful facts?). Is all.
The human process of trying to understand reality is continuous, not discrete, so it's silly to reason about it in terms of discrete categories. It necessarily leads to confusion; and the sort of gatekeeping and self-justification Carl Sagan is guilty of.
Science benefits much more from being defined too broadly; than being defined too narrowly.
I'd rather be too permissive and then ignore the junk, than be too restrictive and never even encounter good ideas which were erroneously discarded as junk.
I don't think I said unfalsifiable propositions are not useful! A proven theorem is sacred!
Of course, until the laws of thermodynamics are revised we can provisionally say that all programs actually running in nature will indeed stop at some point, no matter what is proven about idealized Turing machines.
And before I'm misunderstood. There are many ways the laws of thermodynamics can be tested. This prediction, unfortunately, cannot be tested. But it is a predicted consequence of the simplest known theory that explains all sorts of observations about thermodynamics. Which is the limit of what the natural sciences aim to do here. Provisional truth based on observation vs. proven truth based on stated axioms.
I am explicitly not claiming that one truth is to be valued more than the other. I honestly don't think that. Merely noting, again, that the distinction is there to be made. I may be "discriminating between", but I'm certainly not "discriminating against".
It may or may not be a continuum. Curious researchers on both sides can certainly be informed and inspired by each other's work, and can use the same techniques and tools. But even if only as an academic exercise, can't we describe these two modes of discovery? And isn't it worth being clear about their respective limits?
You seem to be missing the point. Ignoring for a second that the laws of thermodynamics themselves are based upon a handful of idealizations (the idealization of "thermal equilibrium", the idealization of a "perfectly isolated system", the idealization of "perfect zero")... the laws of nature are encoded as formalisms/equations. Symbolic computations.
If you have no formalisms you can't compute any consequences - there is nothing to test. You have no science.
So treating Mathematics and science as "separate disciplines", even though they function as one symbiotic whole - that's the conceptual error.
Interesting. I actually would group cars and engines separately. I'm always fascinated to get a peek at a different way of looking at things, thank you.
> How can we disprove the hypothesis “If we can’t disprove it - it isn’t science.”?
You can't. That's an axiom. Welcome to Philosophy of Science.
Science, at bottom, has some axioms.
1) Cause and effect
That the same causes always create the same effects is an axiom. We assume that God or the Devil doesn't change all the rules every other Thursday. If there is a being who arbitrarily shifts the rules, science loses a lot of its predictive power. Science will adjust to that, but it makes science much less useful.
2) Continuity
The rules "today" are the same as the rules "yesterday" are the same as the rules "tomorrow". The rules "here" are the same as the rules "there".
This is a little spicier, as we do try to test that the rules haven't changed. We try to test whether or not the fundamental constants have shifted with time, for example. We try to see whether things in our galaxy behave the same as things in other galaxies.
In fact, practically everything which defines "science" is about the ability to predict and quantify.
A) Side: "math" is NOT "science". Math, while certainly falsifiable, is neither quantitative nor predictive.
This, in fact, has provoked quite a bit of discussion: See: The Unreasonable Effectiveness of Mathematics in the Natural Sciences
Is it the "philosophy of science"?
What is now called "science" was once called "natural philosophy"?
Maybe it's the science of science?
Maybe it's the philosophy of philosophy?
Maybe it's the science of philosophy?
Maybe it's the philosophy of science?
Maybe it's all the same under naturalism?
Studying science (itself a natural process) using our computational understanding of what a "process" is and does sure fits the Oxford definition of "science".
Firstly there's no need whatsoever to be rude even if I'm wrong. It doesn't help the discussion and isn't nice. You also don't know anything whatsoever about what I do and don't know about maths or the definitions of words.
Secondly, prospective theorems are absolutely falsifiable. Since a theorem is a statement that has been proven to be true, yes, they are unfalsifiable by definition - they have already passed that test. That doesn't really generalise to any sort of meaningful statement about the falsifiability of maths. Saying theorems are unfalsifiable is equivalent to saying "True statements can't be proven false". Well, yes.[1]
ie If I say Sean Hunter's theorem is that if you take a triangle with arbitrary sides a b c and angle opposite a of theta that
a^2 = b^2 + c^2 - 42 b c cos theta
that statement is absolutely falsifiable (and false), which you can establish with basic geometry and trig[2]. When you demonstrate it not to be true it is not a theorem, so I was wrong to call it that. That is a demonstration of how maths is falsifiable.
[1] Even so it's often possible to make progress via proof by contradiction - showing that if this theorem were not true something else which we know to be true would be false. But in most of my maths books proving all of the theorems is the norm, so they are for sure falsifiable while you are trying to establish whether or not they are theorems.
[2] Drop an altitude from one of the angles at b and c and then use Pythagoras and a bunch of cancelling. You will prove that a^2 = b^2 + c^2 - 2 b c cos theta, of course. My statement is only true if a is the hypotenuse of a right triangle, meaning cos theta is zero and my incorrect coefficient doesn't matter.
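The falsification described above can also be run numerically; a quick sketch (the equilateral triangle is my own choice of counterexample, and the function name is made up):

```python
import math

def cosine_rule_candidate(b, c, theta, coeff):
    """Candidate value of a^2 given sides b, c, included angle theta,
    and a coefficient on the cross term (2 in the real law of cosines)."""
    return b**2 + c**2 - coeff * b * c * math.cos(theta)

# An equilateral triangle: a = b = c = 1, every angle is 60 degrees.
a = b = c = 1.0
theta = math.radians(60)

true_a2  = cosine_rule_candidate(b, c, theta, 2)   # the real theorem
claim_a2 = cosine_rule_candidate(b, c, theta, 42)  # the "-42" statement

print(math.isclose(true_a2, a**2))   # True  -- survives the test
print(math.isclose(claim_a2, a**2))  # False -- one counterexample falsifies it
```

One counterexample is enough: the coefficient-42 statement is falsified, while the coefficient-2 law merely survives another test.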
I was merely attempting to reciprocate/mirror your tone. You are the one (self)identifying it as "rude".
I have some idea about what you do and don't know about definition and definability (in general) given the words you've said so far and the way you've used them.
Prospective theorems are not theorems until a proof is presented. At which point they become retrospective theorems.
All that "falsification" and counter-examples prove is that the so-called "proof" of a "theorem" wasn't. If you have indeed provided a counter-example that's a proof of negation which raises questions: what was wrong with the original "proof" of the theorem? Since proofs are programs - there must have been a bug in the proof. Better type-check that proof/program...
The presence of a counter-example to Sean Hunter's "theorem" simply demonstrates that it's not a theorem. It's a misnomer. Theorems are exactly those Mathematical statements for which no proof of negation exists.
You seem to be presupposing some particular kind of mathematics. I am talking about all possible Mathematics in general; of which the particular Mathematics you are currently using is just one particular instance. A historical and cultural coincidence.
There's a Mathematical paradigm in which proof-by-contradiction is a valid proof method, e.g. mathematics founded upon classical logic.
And there's a Mathematical paradigm in which proof-by-contradiction is not a valid proof method, e.g. mathematics founded upon intuitionistic logic. This is basically what we call Computer Science. It has fewer axioms than Classical Mathematics (e.g. the axiom of choice is severely restricted) and so it's a much stronger proof-system. You could even say Intuitionistic Mathematics (which is basically CS) is "more foundational" (it is much closer to the foundations?) than Mathematics.
The fact that you are admitting proof-by-contradiction in your methodology tells me about your choice of foundations, but so what? There's a foundation which axiomatically pre-supposes choice; and a foundation which doesn't.
And in the foundations where choice is not axiomatic "proof" by contradiction is not a valid proof.
The reasoning goes something like this:
1. Choice implies excluded middle.
2. Excluded middle implies all propositions are either true or false.
3. Excluded middle implies that proof by contradiction is valid.
Rejecting 1 results in the rejection of 2 and 3 also.
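The chain above is essentially Diaconescu's theorem; a compressed LaTeX sketch of the statements (not a proof):

```latex
% 1. Choice implies excluded middle (Diaconescu's theorem):
\mathrm{AC} \;\Longrightarrow\; \forall P.\; P \lor \lnot P
% 2-3. Excluded middle is what licenses proof by contradiction:
\bigl(\lnot P \Rightarrow \bot\bigr) \;\Longrightarrow\; P
\qquad \text{(valid only when } P \lor \lnot P \text{ holds)}
```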
> Prospective theorems are not theorems until a proof is presented. At which point they become retrospective theorems.
...
> The presence of a counter-example to Sean Hunter's "theorem" simply demonstrates that it's not a theorem. It's a misnomer.
That is what I said. I showed a mathematical statement and I showed how you could falsify it. Since you said "mathematics is not falsifiable" I have shown your statement is not true. Do you see why?
You were the one who decided that the distinction between conjectures and theorems is important. I have now shown two examples of mathematics that was falsified.
Unless you're trying to say neither me nor Euler was a mathematician in which case we can agree about me but not about Euler.
Your failure to understand what I am saying is abysmal.
>That is what I said. I showed a mathematical statement and I showed how you could falsify it.
>Since you said "mathematics is not falsifiable" I have shown your statement is not true.
You have taken it upon yourself to interpret "Mathematics is not falsifiable" as broadly as needed in order to confirm your own biases; and then proceeded to attack a strawman instead of a steelman. That's the lack of charity...
>You were the one who decided that the distinction between conjectures and theorems is important.
And you were the one who decided that it isn't; so you falsely equated them.
What you have demonstrated is the falsification of the statement "X is a theorem"; not the falsification of "mathematics is not falsifiable." - a hasty generalization fallacy.
Which doesn't demonstrate anything of import or relevance whatsoever. Obviously a non-theorem is not a theorem. This is no more interesting than demonstrating that non-Mathematics is not Mathematics.
This in no way diminishes or falsifies my own claim that theorems are unfalsifiable! And neither is Mathematics.
Because if you do falsify it - then it was never a theorem. By definition. Theorems are true, not false. A false theorem is a contradiction in terms. A misconception. An error in reasoning.
Maybe Euler wasn't a Mathematician either. Who knows? Those sort of questions are undecidable.
"[Aristotle] claims that each science studies a unified genus, but denies that there is a single genus for all beings". You're applying tautology without really understanding the construction of your own question.
The study of being qua being; or science qua science; or
mathematics qua mathematics; or X qua X for any X.
Metaphysics.
Or as it is commonly referred to in computer science: function self-application. One example of which is the Y combinator (as in the name of this very forum).
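Function self-application can be made concrete; here's a minimal Python sketch of the strict (Z) variant of the Y combinator, which is the form that works in an eagerly-evaluated language:

```python
# The Z combinator: a fixed-point operator built purely from
# function self-application -- note the (lambda x: ...)(lambda x: ...)
# shape, where each x is applied to itself inside the body.
Z = lambda f: (lambda x: f(lambda v: x(x)(v)))(lambda x: f(lambda v: x(x)(v)))

# Use it to define factorial with no explicit self-reference:
fact = Z(lambda self: lambda n: 1 if n == 0 else n * self(n - 1))

print(fact(5))  # 120
```

The eta-expansion `lambda v: x(x)(v)` is what keeps Python from recursing forever at definition time; the lazy Y combinator proper only works in a lazily-evaluated language.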
I am applying a tautology in exactly the mathematical sense of a tautology; and I understand my construction just fine.
Had you been more charitable you would’ve addressed my argument; not your strawman of my argument.
I'm charitable by trying to teach you with examples.
> Or as it is commonly referred to in computer science: function self-application
No, that's recursion, not metaphysics.
Is computer science the same as programming? No. Computer science is the study of programming, not programming itself. You learn this in your first year of CS.
If you're smart enough to _really_ understand what a Y combinator is, this should be a piece of cake.
I am struggling to spot the charity in all your condescension.
metaphysics
/ˌmɛtəˈfɪzɪks/
noun
the branch of philosophy that deals with the first principles of things, including abstract concepts such as being, knowing, identity, time, and space.
First principles? Like logical/mathematical axioms? Sprinkle abstraction. Identity? f(x) = x ?
Time? Space? Spacetime? Minkowski space?
On a fuzzy match that sounds ludicrously similar to the sort of stuff the formal sciences concern themselves with. Almost as if the distinction between science and philosophy is non-existent, given the demarcation problem.
What supports this assertion? The literal definition of the term.
Are you the kid who thinks he's edgy by saying in philosophy class: "It depends on the meaning of the word 'X'" every time someone tries to explain X to you?
and you hit one of the main objections to the theory of falsifiability as the criterion of science. There are also other more serious ones, like the obvious fact that actual science doesn't seem to actually work this way. The idea is more to explain observations in a coherent way rather than to be falsifiable, for example. One example is the big bang theory being proposed by a Catholic astronomer who didn't like the then prevailing idea that the universe did not have a beginning or end because it went against his religious beliefs. Or Kepler looking for planets at locations in accord with musical harmonies because he thought it was consistent with the existence of a God.
> One example is the big bang theory being proposed by a Catholic astronomer who didn't like the then prevailing idea that the universe did not have a beginning or end because it went against his religious beliefs.
That’s simply not true.
Fr. LeMaitre developed the theory to explain observed red shifts of galaxies (deriving Hubble’s Law prior to Hubble.) He felt his theory (and science in general) had no connection or contradiction to his faith.
> One example is the big bang theory being proposed by a Catholic astronomer who didn't like the then prevailing idea that the universe did not have a beginning or end because it went against his religious beliefs.
This actually came from the skeptics [0]. They were unwilling to believe a Catholic priest proposing a scientific theory too similar to his religious beliefs, about God creating the whole universe in an instant.
When the cosmic background radiation was discovered in 1964, the Big Bang was accepted by (mostly) everyone.
The reason that falsifiability is a core requirement of science is because if there is no way a proposition can possibly be falsified, then there is no way to objectively assess whether or not it is true.
This is not to say that the proposition is false. It's possible that things can be unfalsifiable and true nonetheless, but those things would still exist outside the range of the scientific method (at least until/unless our understanding of reality expands enough to devise a test). That's an intentional trade-off, in order to gain greater confidence in the truth of the things we can test.
Personally, I am wary of the notion of “actual science” since science is not a well-defined term. The demarcation problem isn’t solved; and philosophers like Feyerabend in his book “Against Method” suggest that science is more of an anarchic enterprise than any particular set of methods.
Take any criterion and apply it too strictly and there is some scientific discovery/progress in history which violates the rules and wouldn’t pass for “science” given any definition…
Take any given methodological approach - and you will always find counter examples in scientific history.
sure, but I mean, it's pretty hard to call flat earthers or proponents of voodoo unscientific if we have to admit that we haven't solved the demarcation problem. Also, more importantly for Popper, he wanted to oppose communists' ideas of "scientific materialism."
There does seem to be this thing that good scientists are doing. Popper did seem to touch on some good aspects of it, like the willingness to be proven wrong.
I think maybe that's the part Popper got right: maybe science is about an unbiased search for knowledge with no other agenda than a genuine curiosity. And maybe that's why demarcation is so hard; it's hard to tell a person's motives.
I dunno, just throwing stuff out there... Still, I mean, at least we have a test that communists obviously fail, which should make Popper happy.
I mean, the non-schizo proponents of flat earth do approach it with scepticism. It's just that their required level of proof is unreasonable: any experiment, no matter how genuinely designed, is dismissed as flawed. Science works because the detractors don't have the energy to waste decades in the academic apparatus, unlike the truly curious.
> … he wanted to oppose communists' ideas of "scientific materialism."
> … that’s the part Popper got right, maybe science is about an unbiased search…
> …at least we have a test that communists obviously fail…
i could be misunderstanding what you’re implying and if so apologies, but Popper wasn’t some anti-communist nutbag, in fact, if he “wanted to oppose” communism, that would have been fundamentally counter to his ideal of keeping things “unbiased”
Popper was very open about how much he admired Marx; he even tended to agree with Marx's analysis of capitalism. Where he disagreed was with the claims that 1) we were destined to be slaves and servants if we 2) don't have violent revolution. He was quite clear that the state should absolutely be heavy-handed to protect the lower classes from the wealthy's constant tendencies to abuse the poor. Again, he agreed with much of Marx's writing, but where Marx thought it would require violent revolution, Popper believed we could use other methods such as “social engineering” to counter the rich. He was also concerned that so many people agreed about violent revolution being the only way out. He wrote about this admiration for Marx quite a bit:
> …a grandiose philosophic system, comparable or even superior to the holistic systems of Plato and Hegel. Marx was the last of the great holistic system builders.
and
> [Marx] made an honest attempt to apply rational methods to the most urgent problems of social life… His sincerity in his search for truth and his intellectual honesty distinguish him…
Popper was concerned that under unrestrained capitalism:
> ..the economically strong is free to bully one who is economically weak, and to rob him of his freedom,… Those who possess a surplus of food can force those who are starving into a ‘freely’ accepted servitude.”
Philosophy Now sums it up well, “Throughout his scrutiny of Marx, Popper treads a thin line between admiration and apprehension.” [0]
again, apologies if i misinterpreted what you were implying, just wanted to clarify that Popper wasn't some kind of nutbag McCarthy-style rabid anti-communist or whatever. he just thought we could “social engineer” our way away from psychotic nationalism and unchecked capitalism rather than requiring full-blown revolution.
He wrote an autobiography - he was a young man who was a communist because he believed in scientific materialism. He later recanted after some of his friends were shot and killed by the police.
Popper said he noticed that scientific materialism proposed by communists or Freud's theories was very different from the lecture he heard by Einstein - Einstein looked more like science.
Communism whatever anyone thinks about it is obviously not science. They claimed to be science at first and proposed scientific materialism as the future.
Today even communists seem to have recanted this idea instead preferring to criticize capitalism and present themselves as the only alternative. We all know today it's not science.
I don't want to debate politics, only to say Communism was never science, it's politics. Popper noticed that quickly, and according to his own autobiography it was one of the impetuses for his ideas.
He also dedicated his book The Poverty of Historicism to the countless men and women who lost their lives to fascism and Communism and their false belief in historical destiny.
The Open Society and Its Enemies also contains a long critique of Marx and the idea that history follows certain laws that must play out a certain way.
The problem with all ideas is always their reification. Computers may be deterministic, but humans aren't. The same software/idea produces wildly different understandings and behaviour in different humans.
What always seems like a great idea in theory inevitably has to cope with the (mis)understanding, (mis)interpretation, and (mis)application of said idea by the mass population.
Because they have worked over time, empirically, as opposed to a lot of woo woo stuff proposed by religion, spirituality, metaphysics, the mentally ill, etc., which can never be disproved but which really doesn't have any value in those areas where we apply science, like technology and attempting to understand natural processes.
You seem to be speaking from a place of greatly diminished self-awareness.
Notice how you are constantly appealing to abstract unobservables to make your claims. No shame in that - all science does it. Quantities, numbers, fields, processes etc. etc. etc.
That is precisely the metaphysical woo woo you are busy criticising. Formalism is all about turning that woo-woo into well-defined concepts.
What's a "process"? Show me one.
Only way I know how is to give you more metaphysical woo woo.
The central dogma of computational trinitarianism holds that Logic, Languages, and Categories are but three manifestations of one divine notion of computation. There is no preferred route to enlightenment: each aspect provides insights that comprise the experience of computation in our lives.
If you want to believe in fairy tales then enjoy them. I prefer materialism. We will never agree on this. You can't prove a God exists, so I simply don't care about the topic other than how it affects civilization negatively by promoting magical thinking and religious fanaticism/intolerance. I tolerate people who are religious, I don't wish them any harm; the opposite is quite untrue for a large proportion of the religious world for atheists/"infidels".
The deep irony in valuing matter more than valuing values is never wasted on me.
You still haven't figured out that "matter" is yet another man-made concept? An abstract idea. A collective noun. Itself a (very useful) "fairy tale".
A substance which possesses "rest mass" in a universe where nothing is ever at rest sure sounds like magical thinking (to me). And what do you even make of point-like particles in physics? They have no volume - so they are not matter. And what about antimatter?
You haven't yet come to the self-realization that you are committing the reification fallacy by promoting a man-made concept to a totalizing/generalizing/all-encompassing ontological status.
Matter is your God. It's the abstraction you worship.
You are right in saying that we'll never agree; for if I were to agree with you I too would be wrong.
Science clearly works because humans have a genuine curiosity to find truth. While every person is flawed and prone to error, the curiosity points in the same direction, rising above the noise.
We've only recently had to even discuss this after the deployment of large scale truth poisoning making the errors non-uniform.
Deutsch also heavily argues for explanations having first-class existence on the level of matter. For him, predictions are simply a practical concern. I still fondly remember his dialogue between Socrates and Apollo in a dream, where he convincingly makes the point that even there he can find true knowledge about the world.
Deutsch really argues for Popper's "conjecture and refutation" model of science (or as he renamed it, "conjecture and criticism") and one of the great points he makes is that both those processes can be done entirely in the mind. Physical evidence may be necessary for some kinds of criticism, but sometimes reason alone can produce useful criticism.
I also like his point (in conversation with Sean Carroll) that we have rigorous proof that the notion of explanation can't be formalized, yet explanation is at the core of the scientific endeavor anyway.
"Hard to vary" is, I think, some sort of isomorphism of "easy to refute". I found it difficult when reading Deutsch to pin down exactly what made something hard to vary.
Let's say we have 16 distinct phenomena. All of them are currently explained by a rooted tree T of explanations of depth 4. Say we come across 8 new phenomena. Suddenly you can't rejig or reshape your tree T arbitrarily. You have to preserve the original structure of T, or at least its relationships, while constructing a new tree R such that it not only explains the old 16 phenomena but also the new 8 as well. We might construct a new tree R such that tree T is a subtree of it. This new tree covers more ground and has more depth. In this sense, new explanations are constrained by whatever came before, and they are tasked with explaining more on top of that. There are very few ways to do it. In physics, they require the more general law to be valid at all time scales, energy scales, length scales, velocity scales, and so on.
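One way to read that constraint is as a toy data-structure invariant: the old tree must survive intact inside the new one. A sketch with made-up domain names, where trees are adjacency dicts:

```python
# Old explanatory tree T: a root explanation with two branches,
# each covering some phenomena. (Names are illustrative only.)
T = {"root": ["mechanics", "optics"],
     "mechanics": ["phenomenon_1", "phenomenon_2"],
     "optics": ["phenomenon_3", "phenomenon_4"]}

# New phenomena force a deeper tree R -- but T must survive as a
# subtree: every parent-child edge of T is preserved unchanged.
R = {"deeper_root": ["root", "thermodynamics"],
     "thermodynamics": ["phenomenon_5", "phenomenon_6"],
     **T}

# The "hard to vary" check: all of T's structure is still present in R.
print(all(R.get(node) == children for node, children in T.items()))  # True
```

A candidate R that rewired T's edges would fail the check, which is the sense in which good explanations leave very few ways to extend them.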
Not all useful tools for thinking must be considered science. I think it's quite fair to say the Drake equation isn't "science".
However, I think Popper's "conjecture and refutation" is a more interesting model for thinking about the increase of knowledge than falsifiability is.
Through that lens, you might say the Drake equation was a conjecture intended to stimulate refutations, which might lead to further productive theories.
Popper's essays on Parmenides are really interesting on this. Popper frames Parmenides' philosophy as essentially a conjecture that "change is an illusion", and the history of western science since then as various "research programmes" in response to that. I'm summarising terribly, but I think he illustrates well the value of conjectures that aren't necessarily "scientific" in stimulating thought.
The Drake equation is falsifiable. All you have to do is plug the values (which are all things that could technically be measured) into the equation, then count the number of civilizations in the Milky Way, and see if that's equal to what the equation predicted. It is possible that it's not, so the equation is falsifiable.
That it's impractical for us to actually do the experiment just means that we cannot perform the experiment and so it doesn't help us in practical terms. But it is a thing that could be done, strictly speaking.
That said, it's also a thought experiment, not a scientific postulate, and isn't meant to be "scientific".
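For the "plug in the values" step, the equation itself is just a product of seven factors. A sketch, where every parameter value is an illustrative placeholder, not a measurement:

```python
# The Drake equation: N = R* . fp . ne . fl . fi . fc . L
def drake(r_star, f_p, n_e, f_l, f_i, f_c, lifetime):
    """Estimated number of detectable civilizations in the Milky Way."""
    return r_star * f_p * n_e * f_l * f_i * f_c * lifetime

n = drake(r_star=1.0,     # star formation rate (stars/year) -- hypothetical
          f_p=0.5,        # fraction of stars with planets -- hypothetical
          n_e=2.0,        # habitable planets per such system -- hypothetical
          f_l=0.5,        # fraction where life arises -- hypothetical
          f_i=0.1,        # fraction developing intelligence -- hypothetical
          f_c=0.1,        # fraction emitting detectable signals -- hypothetical
          lifetime=1000)  # years a civilization stays detectable -- hypothetical

print(round(n, 6))  # 5.0
```

The in-principle falsification described above would then be: count the actual civilizations and compare to `n`. In practice, of course, several of the factors are themselves unmeasured guesses.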
Certainly useful for thinking, but its lack of falsifiability, predictive power or validation is indeed why there's so much controversy around it and why it remains a hypothesised model rather than widely accepted scientific fact. The best you can do is test the assumptions of the model itself, or its constituent terms.
What are you implying? Scientists can't make stars either, but they can make models describing stars and make new observations about these stars that test the validity of these models. Plenty of climate models have been disproven by this exact way. What has been remarkably consistent is that man-made greenhouse gases are causing climate change. For example, models predict that man-made global warming would cause the stratosphere to cool, which is exactly what they measured. By contrast, increased solar activity would have also heated up the stratosphere.
Why?
Climate science works with quantitative models and makes concrete predictions.
So if the predictions don't come true, the model is false.
E.g. this random blog I found googling compares IPCC predictions with actual outcomes: https://johncarlosbaez.wordpress.com/2012/03/27/the-1990-ipc...
[the link is just supposed to show how climate science is falsifiable. I have no idea whether the numbers in this specific source are trustworthy]
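The logic of such a comparison is trivially mechanical; a minimal sketch, where all numbers are made-up placeholders and not real IPCC figures:

```python
# A minimal falsification check for a quantitative model: did the
# observation land inside the predicted band?
def model_survives(predicted_low, predicted_high, observed):
    """A model survives the test iff the observation falls in its band."""
    return predicted_low <= observed <= predicted_high

# Hypothetical decadal-warming prediction (deg C / decade) vs. observation:
print(model_survives(0.10, 0.30, 0.18))  # True  -- not falsified (yet)
print(model_survives(0.10, 0.30, 0.45))  # False -- falsified
```

That's all falsifiability requires here: the model publishes a band in advance, and reality is free to land outside it.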
More comparisons here[0]. Importantly, it's noted that even models from the '80s made fairly accurate predictions. That's the real kicker: models accurately predict observable outcomes 20-40 years later (and have only improved since then).
I'm not sure why there's so much contention about climate science given that we've been observing this trend for nearly 50 years now. Scientist makes prediction about something 50 years later and 50 years later the prediction is shown to be accurate. It feels weird to not trust a source with such a good and observable track record...
The thing is - there are so many predictions made, and a lot that don't come true, that it's hard to find which are the "real" ones.
I also read somewhere that there are several different scenarios available for the IPCC climate models and a lot of the more "scary" predictions are based on the least likely model.
Finally - we saw the limitations of modelling in COVID in the UK, with adverse consequences.
Yes, because you could simulate a small-scale atmosphere and observe how changing the composition of the "atmosphere" affects average temperature or other features. John von Neumann actually predicted climate change from first principles and described it as potentially more debilitating than nuclear weapons.
Well that's more evidence for it, not an attempt to falsify.
If we want to falsify, we should ask questions like: Was there a time in the past when the atmosphere contained even more CO2 than it did now? What was the temp and did life survive?
What evidence or experiment would convince you the theory is false? Or is that impossible?
>If we want to falsify, we should ask questions like: Was there a time in the past when the atmosphere contained even more CO2 than it did now? What was the temp and did life survive?
Do you believe that question isn't already asked and answered? Just going off the top of my head, historical CO2 levels are a routine part of any typical course or even presentation on the history of climate change. If you believe this counts, then your answer to your own question about the falsifiability of climate change is yes.
I also don't think that's the only kind of example - I think the routine things talked about on a day to day basis, including year by year and decade by decade temperature predictions would qualify, and these data are quite frequently discussed and quite visible.
I also also don't think the distinction between "evidence in favor" and "attempts to falsify" makes a whole lot of sense. These are two different sides of the same coin, and evidence in favor of something is the same as surviving a test of falsifiability.
It's pretty disingenuous to cut the graph at the beginning of the ice age cycles. This tricks people into believing that the current level of CO2 has never been seen before.
So I'd ask what parts of climate change are falsifiable?
- Is the climate changing? Of course, but it has also been changing for all of earth's history.
- Are humans and other organisms contributing? Yes but organisms have contributed to climate change for all of earth's history. Blue-green algae put oxygen into the atmosphere, plants sequestered huge amounts of carbon.
- Are we at a historically unprecedented level of carbon? No, the earth was previously at something like 200x the atmospheric CO2.
- Will 2x the atmospheric CO2 cause mass death? Probably not, we know life thrived in the past on a much warmer and more CO2 rich globe. Many more humans die of cold rather than heat.
Climate change is not a fundamental theory. It is derived from thermodynamics, the Navier-Stokes equations, quantum mechanics, and probably quite a few assumptions about what is negligible. Falsify any of those and you falsify climate change.
Depends on how valid you consider statistics and climate modelling. We're seeing increasing numbers of events that are incredibly statistically unlikely in a climate unaffected by anthropogenic climate change, so - can we say with absolute certainty that X was caused by climate change? That's hard. But we say that X was incredibly, unbelievably unlikely without climate change, and that's usually been a sufficient standard for accepting a hypothesis.
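To make the "incredibly unlikely without climate change" reasoning concrete, here is a back-of-envelope sketch. The record length and cluster size below are assumed round numbers for illustration, not real climate data; the point is only that, under a "no trend" null where every year's temperature rank is equally likely, a tight cluster of record years has astronomically small probability:

```python
# Under a stationary climate (the null hypothesis), each year's global
# temperature rank is equally likely to land anywhere in the record.
# Probability that the 10 hottest years in a 150-year record all fall
# in the most recent decade (assumed numbers, for illustration only):
import math

record_years = 150  # assumed record length
hottest = 10        # assumed cluster size

# Each placement of the top-10 ranks among the 150 years is equally likely.
p = 1 / math.comb(record_years, hottest)
print(f"p = {p:.3e}")  # vanishingly small under the "no trend" null
```

This is the usual logic of statistical hypothesis testing: the observation doesn't prove the alternative with certainty, but it makes the null untenable.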
Certainly. E.g., set up a model greenhouse system and show that increasing CO2 doesn't increase the temperature.
Or demonstrate that convective phenomena aren't affected by changing temperature.
"Climate change" comprises a causal chain of well studied and testable hypothesis. Including disproving alternate hypothesis like "solar radiation is increasing".
The problem is climate change has become a motte and bailey (https://en.wikipedia.org/wiki/Motte-and-bailey_fallacy). If we do come up with evidence that rising temps aren't all that correlated to CO2, then we just change the target to 'yeah but it causes ocean acidification' or 'natural disasters are increasing' or 'yeah but it kills coral reefs'.
At this point, I'm not sure there is any possible experiment or evidence that would actually change most people's mind on climate change. It has become a cause to support rather than a theory.
If I understand this paper correctly, it compares the temperature and CO2 levels on geologic time scales:
> Atmospheric CO2 concentration is correlated weakly but negatively with linearly-detrended T proxies over the last 425 million years.
> To estimate the integrity of temperature-proxy data, δ^18O values were averaged into bins of 2.5 million years (My)
It makes sense. Over long time scales, other factors are more important.
Does the same apply to shorter time scales, like thousands, hundreds, or tens of years?
> At this point, I'm not sure there is any possible experiment or evidence that would actually change most people's mind on climate change.
One possible way would be to continue releasing CO2 at the current or greater rates. If temperatures and the climate then revert to their state from 200 or 300 years ago, it means the warming is part of a natural cycle and humans had nothing to do with it.
> At this point, I'm not sure there is any possible experiment or evidence that would actually change most people's mind on climate change. It has become a cause to support rather than a theory.
Aren't you (your way of thinking about this) part of the problem here, or even the whole problem?
You've reduced climate change to politics - certainly there are many people (myself included) who don't agree that it is political.
There are people that say "we need to think about the climate" purely because they are liberal, but they can be ignored, and their views don't change the reality.
Yes. Make predictions. Wait. Observe. Measure difference between predicted outcome and actual outcome.
We've had scientists making predictions since the 80's, so we have some long term observations. I'll save you the read, historical models are fairly accurate and have become more accurate over time.
I'm not sure why anyone considers this a debate. It's not a debate. It's willingness to look at data or not.
Predictions have been made, and we can check if they were correct, but maybe not at the moment.
Some past short-term predictions, like predictions for 2020 made in 1990, may or may not be correct. We can look those up. Many other predictions are for what will happen in the coming decades, and we won't be able to assess with certainty whether they were correct until those future dates.
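The predict-wait-observe-measure loop described in this subthread can be sketched in a few lines. All the anomaly values and the tolerance below are made up for illustration; they are not taken from any real model or dataset:

```python
# Compare predicted vs. observed warming (all values hypothetical).
predicted = {1990: 0.25, 2000: 0.40, 2010: 0.55, 2020: 0.75}  # deg C anomaly
observed  = {1990: 0.30, 2000: 0.42, 2010: 0.60, 2020: 0.80}

errors = [abs(predicted[y] - observed[y]) for y in predicted]
mae = sum(errors) / len(errors)  # mean absolute error
print(f"mean absolute error: {mae:.4f} deg C")

# A falsification criterion must be fixed in advance, e.g.:
TOLERANCE = 0.2  # deg C; assumed threshold for this sketch
model_survives = mae <= TOLERANCE
```

The key Popperian detail is that the tolerance is declared before the observations come in; a model judged only after the fact can always be excused.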
This article has good advice for logical thinking, but just one nit pick:
> having just lost both of his parents, he reflects on the all too human allure of promises of supernatural reunions in the afterlife, reminding us that falling for such fictions doesn’t make us stupid or bad people, but simply means that we need to equip ourselves with the right tools against them.
I feel like grouping all religion in with psychics and con artists is a bit extreme. Sure, it's impossible by contradiction for all tenets of all religions to be simultaneously true.
But... there were, are, and will yet be many intelligent people of different religions that, for various empirically-unprovable reasons, feel there is a God - even if they know at an intellectual level that their current church or belief system isn't entirely consistent or correct.
It's important to respect their religious liberty even if you disagree. Many people (including on this very forum) have had spiritual experiences that seem to confirm the existence of a God at a visceral level. It's hard to explain unless you've had a spiritual experience, but they can include moments of emotional connection to deceased loved ones, "gut feelings" that helped you do something important or avoid something bad, or unexpected and unusual ideas or thoughts (in my case, sometimes even programming related) that aid in overcoming life's obstacles, just to name a few.
> But... there were, are, and will yet be many intelligent people of different religions that, for various empirically-unprovable reasons, feel there is a God - even if they know at an intellectual level that their current church or belief system isn't entirely consistent or correct.
No one should be denied the freedom to follow these feelings, within the boundaries of not infringing on other people's rights and liberties.
Having said that, please accept mass delusions do happen, smart and intelligent people do get caught in them, and so this is no proof of a god. You need material proof, and there is none.
I agree with most of your comment. However, when you say
> You need material proof, and there is none.
I would like to add that the old adage holds true: absence of evidence is not evidence of absence. (Also, this is more for the god-as-an-almighty-person-in-the-sky type of case. For a person who considers the natural elements as god, it is simply different.)
> I would like to add the the old adage holds true - absence of evidence is not evidence of absence.
Oh but it is - it's not proof of absence but it is evidence. Any piece of evidence must have two sides - if observing A would increase your confidence that B, then observing not A must reduce your confidence that B.
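The symmetry claimed here follows directly from Bayes' rule; a quick numerical check with arbitrary made-up probabilities:

```python
# If observing A raises your confidence in B, then observing not-A
# must lower it. Check with arbitrary illustrative numbers:
p_b = 0.5              # prior P(B)       (arbitrary)
p_a_given_b = 0.8      # P(A | B)         (arbitrary)
p_a_given_not_b = 0.4  # P(A | not B)     (arbitrary)

# Total probability of A, then Bayes' rule for both observations.
p_a = p_a_given_b * p_b + p_a_given_not_b * (1 - p_b)
p_b_given_a = p_a_given_b * p_b / p_a
p_b_given_not_a = (1 - p_a_given_b) * p_b / (1 - p_a)

assert p_b_given_a > p_b      # A is evidence for B...
assert p_b_given_not_a < p_b  # ...so not-A is evidence against B
```

This holds for any choice of probabilities where A is informative about B; the two inequalities always come as a pair.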
That's like monkeys being observed by scientists behind one way mirrors saying they have evidence that there is no one watching them because they've never seen any scientists. But that would be a mistaken assumption because the scientists intentionally made themselves difficult to observe for the purposes of their research (monkeys behave differently if they know they are being watched).
I think most religious people would agree that God intentionally makes it impossible to verify his existence at a social level via empirical means... but that he does provide evidence of his existence at the individual level by speaking directly to you via spiritual experiences.
It kind of has to be, otherwise modern religious belief just falls apart. At this point, the only plausible explanation for a religious person of why there's no hard evidence of God is that he intentionally hides his existence from us.
And to that I say, extraordinary claims require extraordinary evidence. Christianity has had 2000 years to produce extraordinary evidence. What's the hold up?
You mean, somebody wrote down "eye witness" accounts many decades after the event occurred (that contradict each other in important ways. Like, which day was Jesus crucified? The day after Passover was eaten (Synoptic gospels) or the day before it was eaten? (John)), that have absolutely no way to be verified. I don't see how that is "extraordinary evidence." Some ancients also claimed to see various pagan Gods, but I'm guessing you don't consider that to be "evidence."
> respect their religious liberty even if you disagree
Believe and do whatever you want with respect to religion, so long as it doesn't extend to telling me what I can or cannot do because of your beliefs.
You and others have my full support for religious liberty up until that line (overwhelmingly in the form of "yeah, whatever; I literally could not care less what you believe or do").
> It's important to respect their religious liberty even if you disagree
Respect in what sense?
My stepsister is religious so I "respect" her by not talking about this subject.
But if she ever talks with me about religion, I would say that I'm an atheist who believes that all religions are a kind of self-deception to make believers feel better.
Which no one in this thread has done? The article even has advice on this matter quoted at the root of this thread: "reminding us that falling for such fictions doesn’t make us stupid or bad people".
That's sad. I'd assume your love for your sister would make you find common ground in doubt and humility, virtues of both science and religion. Every single human tells themselves lies to feel better; we just call them Justice or Democracy. We're just self-conscious meatbags thrust into existence against our will, after all.
> It's important to respect their religious liberty even if you disagree.
I respect other people's right to have strongly-held beliefs about roughly anything, even if they're wrong. I'd rather they were wrong, than that they "believed" something that is beyond belief and disbelief.
In the case of "beliefs" that are unverifiable in principle, well I don't count those as beliefs; and equally, I don't think it's possible to disagree with such "beliefs". I'm with Wittgenstein on that; these are things that you can state (or deny) completely grammatically, but whose assertion (or denial) is meaningless.
I also like the idea that if a thing can't, in principle, make a detectable difference to the world, then it doesn't make sense to speak of it as real or unreal.
Exactly this. If a belief can't generate a testable prediction - even one not necessarily testable by us right now, but testable in principle - then that belief is meaningless. There's no basis to claim it's true or false; it has no impact on reality whatsoever. It's a waste of effort to even think about it.
It says that Dennett and the "New Atheists" assert "that God does not exist". That perfectly exemplifies my point: you can't use non-verifiability to prove that God doesn't exist. What it proves is that neither an unverifiable claim nor its contradiction is meaningful.
From the wiki article, it looks to me like my position isn't Scientism at all. The article presents Scientism as a kind of fallacy: attempting to apply some version of scientific method to everything. What I'm on about is more like ternary logic: a thing can be True, or it can be False, or it can be neither. It's closer to Not Even Wrong.
Sure, whatever. Call it religion if you like, but try and define something existing or being true in a way that doesn't involve making testable predictions.
The simulation hypothesis is in principle testable in that, if we were in a simulation, we may be able to find bugs in it, possibly even escape the sandbox. Right now, we don't have any "angle of attack", and nothing we could suspect of being a bug (which includes e.g. being able to perceive artifacts of some optimization in the simulation) - so there's no point in working oneself up about this, or assuming it's true. However, because this is testable in theory and there's a possibility it'll become testable in practice, it doesn't qualify as fundamentally "meaningless" in my book.
The article did not advocate for disrespecting liberty.
It actually very hilariously directly talked about exactly your post:
“Gut feelings” and “people experiencing things” are not independently verifiable facts. In fact, every time religious experiences are tested, they are not repeatable or verifiable.
>It's important to respect their religious liberty even if you disagree.
I respect people's religious liberty for the same reason I respect any other kind of freedom of belief.
I do not know what it means to respect a religion. Either a prophet actually witnessed what they claim to have witnessed, or they are lying, or they are delusional. Perhaps Jesus and Moses and Siddhartha Gautama get a pass because we don't know exactly what they said or did, but how can a non-believer explain Joseph Smith or Mohammad as anything except insane or malicious? What does respect for a religion look like when I believe that its most sacred figures are the most contemptible of con men?
I am respectful in that I don't go around insulting people. I am respectful in that I understand life is hard and people find comfort wherever they can. But respect for the victims of a scam does not imply respect for the scam itself. I can understand the fear of poverty, and I can understand the temptation of wealth, but the people who write me saying I will inherit $15 million still belong in prison.
To me, "respect" means only a base level of tolerance that we should give to people believing in things. Respecting someone else's religious belief doesn't mean validating or endorsing it. And it doesn't prevent one from criticizing it. Just because someone's belief is religious doesn't give them a free "get out of criticism" pass.
We should treat religious belief just like we treat belief in other supernatural things or in other types of extraterrestrial life.
What are the chances a child born to christian parents will be a christian vs a buddhist?
Why as the access to knowledge has been increasing more and more people are becoming non-believers?
Why have religious beliefs in the sun, lightning, fire, etc. vanished as we developed a full understanding of those phenomena?
What makes you think you will be able to justify sticking to your "gut feelings, therefore god" position as we develop a greater understanding of the brain?
You’d have to read the book to get more context on his sadness over losing his parents and his reflections on being susceptible during that time. A famous quote is:
“Science is not only compatible with spirituality; it is a profound source of spirituality.”
― Carl Sagan, The Demon-Haunted World: Science as a Candle in the Dark
Carl Sagan was very wise. I find his insights thought provoking and salient.
However, the cynic in me is sad that the default of human nature is to be gullible, tribal, delegate fact checking to authority figures, and prone to confirmation bias.
I would love to become better at rational thought and discernment. But I feel that I (and, frankly, 95% of the people on this site) are already better than the vast majority of people out there in this regard.
In other words, these messages are preaching to the choir. I’m not sure what media or messenger could bring these (or similar) lessons to the masses, or whether they could have an enduring impact. The optimist in me hopes that this will be possible some day.
I think a good place to start would be to teach it during high school, especially since a lot of bad science gets spread around on social media. College may be a bit too late, since that's when you really need to start applying those critical thinking skills.
The thing that bugs me a bit about Occam's razor [paraphrased] - All other things being equal, the simplest explanation is usually the right one - is I've seen too many people smugly quote the second half, leaving off the, "All other things being equal" bit.
That's not Occam's fault. Razors are meant to be sharp, and people cut themselves with far blunter implements.
These are excellent but I think #8 (statistics of small numbers) is one that, with some nuance, you can get more personal mileage out of if you're willing to tolerate being wrong some of the time.
For example, I've noticed that of the managers I've had, the good ones and bad ones are cleanly separated by one trait or property. This is a sample size of maybe 7 or smaller.
Depending on your priors, my P(Hypothesis | Data) is maybe 0.6-0.7 which is terrible from a research and scientific point of view.
I'll take that level of certainty for day-to-day decision making anytime, though. Make enough of these weakly-backed hypotheses and you can avoid a lot of trouble.
Isn't this essentially the basis of decision theory and statistical hypothesis testing? The hard part is to determine an acceptable probability of being wrong in any given situation.
Yes, and my only commentary here is that the threshold for academic decision science is a lot higher than the one I have for managing my day-to-day life.
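One way to see why n≈7 supports only weak confidence: treat the trait as a binary predictor of manager quality and put a uniform Beta(1,1) prior on its accuracy. The counts below are hypothetical, chosen only to show how wide the posterior stays at this sample size:

```python
# Posterior over a predictor's accuracy after a handful of observations.
# Uniform Beta(1,1) prior; suppose the trait "called" 5 of 7 managers
# correctly (hypothetical counts for illustration).
correct, total = 5, 7
a, b = 1 + correct, 1 + (total - correct)  # Beta posterior parameters

mean = a / (a + b)                           # posterior mean, about 0.667
var = a * b / ((a + b) ** 2 * (a + b + 1))
std = var ** 0.5                             # about 0.15: still very uncertain
print(f"posterior mean {mean:.3f}, std {std:.3f}")
```

A posterior mean in the 0.6-0.7 range with a standard deviation around 0.15 is exactly the "good enough for day-to-day decisions, nowhere near publication grade" regime described above.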
"Arguments from authority carry little weight — “authorities” have made mistakes in the past. They will do so again in the future. Perhaps a better way to say it is that in science there are no authorities; at most, there are experts."
That's the most controversial, because a lot of people want to be authorities and don't want to be questioned.
For instance, the origins of Covid... or really lots of things about the pandemic were pushed by authorities with little justification other than authority. It turns out many of those things were subsequently reversed or reasonably doubted.
I agree that that particular point is one of the biggest problems when it comes to translating scientific understanding to public policy.
Can there be "settled science"? What is considered an acceptable level of skepticism?
Governing society is largely based on rough estimates of ROI and optimizing for certain value systems based on minimal firm data at best. Often under time pressure.
The problem is that it's easy to generate bullshit skepticism to lock up the process (to benefit the status quo) whereas it takes lots of effort to scientifically disprove and debunk the bullshit. We generally use appeal to authority as a shortcut to overcome this effort asymmetry.
The thing we need to be clear about is when/how that is a valid tool for public policy, but not science itself.
The bullshit skepticism should usually be identifiable by applying the same set of rules to evaluate its claims. Take the claim that ivermectin was effective in treating COVID. Some doctors did support this, but that's Argument from Authority, and probably also Statistics of Small Numbers, if not several others.
> lots of things about the pandemic were pushed by authorities with little justification other than authority.
Whatever people's feelings about how this was handled, please don't include the scientists in the issue. The authorities mandating things were the political folks, not the scientists.
Maria Popova is an extraordinary essayist; I highly recommend exploring themarginalian.org site. The breadth of topics, and the keen intelligence coupled with such warmth and humanity... it's a treasure trove.
Good thoughts overall. I will assume Sagan has deeper thoughts on religion than what he writes here, but I found those parts superficial and overly simplistic. The problem I have with his perspective is that we know from Gödel's incompleteness theorems that no sufficiently powerful formal system can prove its own consistency in the language of the system. As a human, this means that there are things in life we have to accept as true but that are fundamentally unexplainable. Religion does not explain the unexplainable, but it provides a language and framework in which certain experiences of being alive can be felt and communicated. Both science and religion (to the extent to which these are even different concepts) have positive and negative sides. I personally think the dark side of science is just as dark as that of traditional religion, but I don't deny the light or dark contained in both.
While Sagan's advice is great, it's still putting burden on the individual to handle bullshit.
The more I've thought about bullshit as a phenomenon, the more I think the only real solutions are social ones, where non-bullshitters need to coordinate to make bullshitting a bad scenario. Things like shaming, publicly crying bullshit, coordinated dismissing bad actors.
If handled individually, it'll always be asymmetrically easier to bullshit than to defend against bullshit.
See also Jim Lehrer's rules for journalism. That is, when you see these rules broken someone is very likely wanting to bullshit you and/or as a "journalist" has failed to use critical thinking.
I've long been thinking that society is badly missing technology-enabled social tools for finding consensus because, unfortunately, the current social networks and their monetization principles drive exactly the opposite - i.e. polarization.
Carl Sagan's Rules seem like a candidate for a framework for such a tool, or at least, a starting point.
Slightly OT, but his suing of Apple was such a douchey move. I will never be able to understand how such a brilliant and humble human being would want to entangle himself in such a ridiculous, petty, and frankly baseless lawsuit.
> misunderstanding of the nature of statistics (e.g., President Dwight Eisenhower expressing astonishment and alarm on discovering that fully half of all Americans have below average intelligence);
Right, but that's less an assumption about basic statistical literacy than about a particular distribution that happens to describe the data. Many things in society and nature are not symmetric around their means.
It's also a bad statistical statement because, as phrased, it implies that the average is being taken over the whole world rather than just America.
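The point about asymmetric distributions is easy to demonstrate: in a skewed population, well over half the values can sit below the mean. The numbers here are made up purely for illustration:

```python
# A right-skewed toy distribution: most values modest, one huge outlier.
scores = [90] * 9 + [200]  # hypothetical data

mean = sum(scores) / len(scores)            # (9*90 + 200) / 10 = 101.0
below_mean = sum(s < mean for s in scores)  # how many fall below the mean
print(f"{below_mean}/{len(scores)} below the mean")  # 9/10, i.e. 90%
```

Eisenhower's line is only a gaffe if "average" means the median (exactly half below, by definition) or if the distribution happens to be symmetric; for the mean it depends entirely on the shape of the distribution.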
Thinking is hard. Learning to change your opinions/beliefs is even harder. It takes deliberate and consistent practice to get better at these things (join your local Skeptics/Humanists group if interested).
To quote Bertrand Russell: “Many people would rather die than think; in fact, most do.” ;-)
It's a subtle topic, but I think we're often conflating extrapolating from our current mental model with thinking.
Often when I get into a dead end, I realize, all I had to do was to walk my knowledge slower, and try to poke holes at it, flip it backward, toy with it, do games with the problem at point .. it's more of a poetry seeking game than hard thinking. That's why (IMO) most eureka/new solution feels like a loosening, the hard part was going too hard in the wrong direction.
After failing a few too many times you get the habit of not rushing the bad mode of thinking and rather play with ideas.
A thing of note: to wield such a kit, one must be an expert in baloney detection, even if just a little. Example:
It is the year 2020, there are these new and suspicious vaccines for the new illness, and you want to understand what's going on. Smart people say that multiple hypotheses need to be evaluated, authorities are unreliable, unquantified promises are bad, and so on. So an unprepared person might simply drown in the sea of baloney hypotheses, like ivermectin treatments, electrophoresis treatments, etc.
If a person lacks critical thinking faculties that actually work, and aren't simply contrarian in a kindergarten style, then adhering to authorities would be the optimal path.
The point 9 about falsifiability needs a lot more nuance. The problem is that we cannot get away from unfalsifiable hypotheses. No matter how far we probe into some phenomenon, there is always the question "why?" beyond which we don't know.
When we are left with a "why", our only choices are (1) take an agnostic stance and decline to make hypotheses, (2) make unfalsifiable hypotheses. If we choose (2) there are unfalsifiable hypotheses that are superior and those that are inferior. The superior ones are ones which postulate less. The best unfalsifiable hypothesis is the one which postulates no hidden variables. The answer to "why" (why things are the way they are, and not in some other arrangement) is that (a) other arrangements for a universe are valid and exist, but we are biased because we are intelligent beings which live in a particular arrangement and only that one is observable to us (the "anthropic principle"); and (b) in fact, why not: all the other possible universes exist, just not "here". (If they don't, then we need to come up with a hypothetical reason why they don't, which is more complicated than assuming there is no reason: by Occam's Razor, we just trim that out.)
What we don't want to be doing is spouting some unfalsifiable hypothesis as if it were unvarnished truth; but nevertheless, we can pick the best one and defend it against the others, when they come up.
I should add that there is nothing wrong with postulating theoretical entities and processes; doing so has served us well, and in many cases additional discoveries lent reality to the postulated entities. E.g. atoms were originally a philosophical construct based on the idea that subdividing matter into smaller pieces has to bottom out at some indivisible particle; that's where the word "atom" comes from. Subatomic particles were initially postulated too. These are not exactly the same as hidden variables, because their existence is falsifiable. We have evidence that the electron exists, for instance, even though originally it was just postulated.
Know the rules and then break them. If you are criticising a physicist, you should be competent enough to know their jargon and conclusively point out gaps in their knowledge, e.g. as Sabine Hossenfelder does in Lost in Math.
Just stating that a scientist could be wrong isn't helpful.
Not sure why you're getting downvoted. Sagan himself said to be skeptical of those in authority.
"Who is running the science and technology in a democracy if the people don't know anything about it?"
https://www.youtube.com/shorts/bZ0x3cQ0PnU