Hacker News
The “backfire effect” is mostly a myth, a broad look at the research suggests (niemanlab.org)
125 points by hhs on June 26, 2019 | 98 comments



This is like one of those Knights and Knaves logic puzzles we used to do at school:

Amy believes the backfire effect is real, whereas Ben believes it is fake. Amy reads an article saying the backfire effect has been disproven, then passes her new opinion on to Ben. What will Ben's revised opinion of the backfire effect be if 1) the backfire effect actually exists, 2) the backfire effect doesn't exist?


I find this premise amusing... I find it amusing because my ex (whose name really is Aimee) often had opinions that my life experiences and education suggested to me were suspect. Aimee used to do a bunch of reading, but the material she read was often from sources or authors I didn't find credible and thus while their arguments seemed plausible, they weren't enough to invalidate my original opinion.

My name really is Ben. My thoughts and opinions on such matters were largely unaffected by hers in many cases, unless she presented credible evidence that led me to re-evaluate my opinion, which did occasionally happen. Often though, it was just "someone else agrees with me, thus I'm right," which does nothing to sway my opinion.

Two people having the same opinion doesn't make them right and doesn't negate my opinion. It just means they both disagree with me. Humanity also believed in the geocentric model of the universe at one point. That doesn't make it right. It took one person to change the entire opinion of the world to the heliocentric model. Now admittedly, I'm not Copernicus, but we're all capable of finding evidence to support or refute our opinions on the world. It's just that very often people use the opinions of others as support for their own instead of exercising critical thinking and forming their own opinions.


The thing is, forming your own opinion is _hard_ and you need a great deal of insight to reach some form of truth.

I, for example, if I lived a couple of hundred years back, would have probably believed the world is flat, since, well, it's the easiest opinion to have by just looking around. The model we have now is incredibly sophisticated and not at all intuitive; we just think it is because we're used to it. Things that we are used to seem the most obvious and simple in the world.

We are limited in how much we can dig deep into stuff. So many times we have to refer to other people who know what they are doing. I sometimes even refer to myself at some different point in time. If I know that at some point I looked deeply at some problem and reached a conclusion, I will just take it for granted and not always retrace my thought process. It's a useful mental shortcut, but one that can leave me forming opinions on old or incomplete facts.


People did not really believe the world was flat hundreds of years ago.

Similarly the “new” world was in regular contact with the old for the last several thousand years. It’s only 53 miles from North America to Asia, even less if you include the two islands between them.

PS: We have a lot of ideas about how wrong people used to be. But, I suspect most people never really formed an option on most of this stuff because they never really considered the question.


If they were in regular contact, why did smallpox have such a devastating effect on Americans?


Population density. Smallpox dies out with lots of tiny isolated communities with minimal contact with each other.


Regular contact here means "there were people who could have made the journey between Asia and Americas, yet we don't have any proof and its effect on both societies is pretty much non-existent"


Specific people no, but genetics shows connections beyond culture.


"Similarly the “new” world was in regular contact with the old for the last several thousand years. It’s only 53 miles from North America to Asia, even less if you include the two islands between them."

That's an interesting idea. Do you have a reference or materials that talks about this?


> I, for example, if I lived a couple of hundred years back, would have probably believed the world is flat, since, well, it's the easiest opinion to have by just looking around.

It's really not, which is why people have generally known that the Earth is round for thousands of years, and had even calculated its approximate size before Christianity.

I know that there is pro-Columbus mythology that everyone thought he was insane because of a popular belief that the Earth was flat rather than the truth that everyone thought he was crazy because they knew how far away the East Indies actually were in the direction he was proposing to sail, but...


Anyone with even the slightest maritime background will immediately know the earth is round -- you can't see all the way across the Atlantic (though this could be written off as being due to fog, etc.), and more importantly, ships in the distance sink into the horizon until they are no longer visible.
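The geometry behind the "ships sink into the horizon" observation is easy to sketch. Not from the thread itself, but a standard approximation: for an observer at height h above a sphere of radius R, the distance to the visual horizon is about √(2Rh). A minimal Python illustration (the 10 m mast height is an assumption chosen for the example):

```python
# Approximate distance to the visual horizon on a spherical Earth,
# ignoring atmospheric refraction: d ~ sqrt(2 * R * h).
import math

EARTH_RADIUS_M = 6_371_000  # mean Earth radius in metres

def horizon_distance_m(observer_height_m):
    """Distance at which objects at sea level drop below the horizon."""
    return math.sqrt(2 * EARTH_RADIUS_M * observer_height_m)

# A sailor 10 m up a mast sees roughly 11 km before a ship's hull
# starts disappearing below the horizon.
print(f"{horizon_distance_m(10) / 1000:.1f} km")
```

This is why hulls vanish before masts do: the lower part of a distant ship falls below the tangent line first, an effect no amount of fog explains.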


Sure, but let's think about it. The average person didn't receive much schooling, and if they did it would have probably been on more practical matters.

It's quite possible that the average person could very likely never have even been exposed to the question of whether the earth is flat or not.


The solution is to remember how certain you are of any "facts", and speak and act appropriately.

If I've personally dug into X and have formed a very high level of certainty that X is true, then I will say "X is a fact" to my friends.

If I've merely read or heard assertions of X, from a source I somewhat trust, then I will tell my friends "Bob told me X" / "I remember reading X" / "I have the impression that X" / etc.

The vast majority of facts (e.g. did George Washington have a set of wooden teeth, or ivory teeth, or what?) are irrelevant to any decisions I or my friends might make. I will be fine leaving my level of certainty in the "I think I vaguely remember reading X" category indefinitely.

If X is a crucial fact impinging on some important decision that I have some part in, then I will prefer to raise my certainty about X to the "X is a fact" level, or at least to "several trustworthy sources have told me X". If, for some reason, this is impractical and I have to make a decision right now, then I guess I'll make the decision from the information I have (which may well be a cautious decision based on a 25% chance X is wrong), and I'll probably feel uncomfortable about it.


> you need a great deal of insight to reach some form of truth

And even then, it's often a matter of perspective where both people's experiences tell them that their truth is the correct truth, while still holding opposing truths from one another.


The entire universe is possibly flat[1] and if it is then the Earth, being in it, must be too.

[1]https://m.phys.org/news/2017-01-reveals-substantial-evidence...


That doesn’t hold even without the equivocation on the meaning of “flat”. The universe may have a flat average curvature, but in the neighborhood of a black hole it is very much not flat (both usages of “flat” to be taken in the differential geometry sense).


> differential geometry

Flat in the sense that at the largest scales the universe is extremely well-approximated by a dust that solves the Friedmann equations given an expanding Robertson-Walker background whose parameter k = 0.
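For reference (standard FRW cosmology, not specific to this comment), the "parameter k = 0" claim refers to the curvature term in the first Friedmann equation:

```latex
\left(\frac{\dot a}{a}\right)^{2}
  = \frac{8\pi G}{3}\,\rho - \frac{k c^{2}}{a^{2}},
\qquad k \in \{-1,\, 0,\, +1\},
```

where \(a(t)\) is the scale factor, \(\rho\) the energy density, and \(k = 0\) picks out spatially flat sections (a cosmological-constant term, if wanted, can be folded into \(\rho\)).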

If we zoom in on the “dust” we see that it is grainy/clumpy, and can resolve the grains/clumps into distant galaxies and clusters, and can in particular spot spiral structures that maintain a consistent set of shapes at various distances (or, if you like, at different solid-angle sizes, redshifts, and brightnesses of all of the gas clouds, fast pulsars, and supernovae in them). If we set k to the other allowed values, spiral galaxies at higher redshifts would distort compared to nearby spirals.

Within the clumps — clusters and galaxies themselves — the matter does not resemble a perfect fluid dust, and the background’s permitted trajectories are not those of Robertson-Walker with any value of k. The Friedmann equations thus are no help. To deal with that, we can use a Swiss Cheese model, in which we treat these large “clumps” as holes in the expanding Robertson-Walker cheese, and as necessary use Israel junctions to trace trajectories out of these holes, which are well-modelled by collapsing matter and typically an LTB metric. We can do this for select clumps, while leaving the rest as “grains” of Friedmann-solving dust.

However, unless you flush the Copernican principle (and have good evidence that it’s wise to do so), then we inhabit a nice flat universe in the sense above at length scales above Megaparsecs.

The flatness is technical and spatial. Galaxy clusters’ motion through the strongly curved spacetime gives us orientability: we can say “future” is the direction in which galaxy clusters develop greater mutual RADAR distances. But a pair of typical spiral galaxies at arbitrary RADAR distances will look like typical spiral galaxies to one another for many billions of years into the past or into the future.

Technically the flatness only applies where the expanding “cheese” metric in a Swiss Cheese model applies, and that’s far from large masses of collapsing matter.

Inside the “holes” one might find other holes and yet other holes and so on all the way down to stars and planets and people, all the non-relativistic examples of which look roughly like Schwarzschild from a great distance. In particular, the interaction between the generators of the “in-the-hole” metric and the “in-the-cheese” metric are amenable to perturbation theory in that the deviations from the respective metrics are pretty much vanishingly small. The expansion doesn’t stop galaxy clusters from collapsing inwards, and the galaxy clusters don’t stop the expansion.

Of course, if the hierarchical analytic approach turns you off so hard that you can’t bring yourself to appreciate a Swiss Cheese model as an effective description of the observed universe, you can always dive into non-perturbative inhomogeneous cosmology. In such models backreaction is the effect of inhomogeneities of matter and geometry (“… in the neighborhood of a black hole …” as you put it) on the average evolution of the cosmos. It’s been trendy for years to argue about whether backreaction is relevant (in the Wilson sense) or not, and if it is, whether it’s the traceless part or the trace of the effective stress-energy tensor that matters (pardon the pun) most. However, (a) you won’t rack up many real nerd points in HN comments sniping down comments like your parent’s because, for example, “it should be totally obvious to you that the universe is NOT flat R-W spacetime because there are procedural problems with contracting Ricci curvature functionals into a dust!”, and (b) someone who knows better, and there is allllways one of those, will just nuke you from a higher orbit.


I don't believe in the backfire effect - but there is something here that is interesting.

Most people can't actually defend their opinions - I like arguing with people, and something that really puts a limit on how much arguing can happen is that people just don't have deep webs of reasoning ready if their superficial beliefs are challenged.

Even if I think their beliefs are obviously correct, it is likely that their reasons are superficially copied from someone else and don't really stand up to scrutiny because they don't fully understand them. So, if I listen to someone representing a view I disagree with, it is likely that their arguments are bad even if their ultimate position turns out to be on the money. I've seen a tiny handful of people who I think can really defend a capitalist position, for example, and I'm not particularly sure I'm one of them.

Believing that nobody arguing a viewpoint has good reasons would lead to something like the backfire effect, but it is a reflection of reality rather than a human irrationality.


I call this “argument by URLing”; it's literally the least convincing form of argument if someone can't make the argument in their own words.


It depends. One cannot spend a whole life dissecting and acquiring all the facts; sometimes we just delegate thinking. But the result is that it leaves us unable to defend the opinion, because it wasn't ours to begin with, regardless of how compelling and fact-based the case built by the third party was.

So, for example, I read an interesting opinion about Game of Thrones and why the latter seasons were less interesting. It made the case that the story shifted from sociological to psychological drama, and that this set a completely different tone. Some days later there was a similar debate on a forum, but while the position was interesting, I didn't have the ability to articulate it because a) it isn't my field of expertise and b) I didn't spend time dissecting the issue myself and building an opinion; it was delegated.

And so I just wrote down two lines - hey guys, check out what this author wrote on the subject - because it was interesting, it seemed well checked, and I wanted to see if somebody could point out faults in the reasoning that I didn't catch because of my limited expertise in the matter.

Sometimes someone who argues by linking doesn't want to convince anyone, but wants to proxy-debate an opinion piece by collecting and contrasting other opinions on the subject.


If you can't articulate it, why is it important for you to argue? If it really means that much that you will spend time looking for proof, why not just learn the proof? And if you can't be bothered learning, why have the argument to begin with?


Not everything is about winning arguments. Sometimes it's about sharing pleasant experiences, and reading interesting opinions might be pleasant on its own. Yes, even if the opinion is wrong it might be interesting to share it to get a discussion rolling. Heck, there are so many reasons, and sometimes there's not even a definite proof to learn or conclusion to reach. Say, ever talked about films?


That's a different kind of discussion than a debate. Of course you should share links that delight. I am talking about arguments, not sharing a graph or a specific piece of data. I am referring to when the argument is the link, not when the link offers facts :)


True. However, relating this back to the software industry, I have often found myself in conversations where somebody says "according to so-and-so / this blog / this author, we should be doing this" - I feel like we don't really know what we are doing with software in the grand scheme of things (although things are getting better), thus many in the industry fall back on the ol' argument from authority. It seems to hold its weight in conversation, in my experience. This is bad; everyone's opinion should be questioned and doubted.


The right answer is that Ben doesn’t change his position, right? The backfire effect only applies when the claim ideologically aligns.

I know you’re joking, just indulging the idea.


That was my conclusion, yes. Presuming Amy and Ben are ideologically wedded to their respective backfire beliefs: If the effect doesn't exist, Amy changes her mind, so she won't try to argue Ben out of his disbelief. If the effect exists, the negative article will reinforce Amy's belief in it, but her positive argument will in turn only reinforce Ben's negative belief.



Could this be considered a kind of self-disproving or meta-falsifiable theory? Not sure if that would be technically correct.


This link isn't terribly blogspammy or anything, but the original article has more information: https://fullfact.org/blog/2019/mar/does-backfire-effect-exis...

And the original paper: https://fullfact.org/media/uploads/backfire_report_fullfact....


It's more readable, better organized, and shorter, too.


So someone named Amy Sippet who works at a charity called Full Fact decided it's mostly a myth, not after conducting studies or independent research but after reading just 7 previous studies on the backfire effect and noting that some studies didn't seem to her to support it and that most of the studies she read were conducted in the US. The article links to and quotes from her twitter account.

This major breakthrough in the social sciences shares space in the very same article with an observation about how you can use Instagram to find "the internet’s darkest corners.” and some random stuff about Apple.

Maybe it's just the backfire effect here, but I'm not really convinced.


Limiting a literature review to 7 studies based on frequency of citation is pretty understandable, especially for a relatively young research topic. Of the 2 studies which showed a significant backfire effect, the findings of the first were considered "overstated and oversold" by its own authors and the second only found evidence of the effect in one of four situations. The second was also partially replicated in a later study which found no evidence of the backfire effect.

That being said, the author of the review probably would not want you to be convinced by a single pop-sci piece covering her work. This "major breakthrough," as you describe it, does not in fact share space with any tweets or mentions of Instagram, but can be found here: https://fullfact.org/media/uploads/backfire_report_fullfact....


In an age when nothing seems to replicate, this wouldn't surprise me. But

1. There is an obvious conflict of interest in having a fact checking organization claim to "debunk" studies showing that fact checking is ineffective.

2. This "refutation" is a press briefing, not a peer reviewed study, and doesn't have a lot of facts to back up the claim, other than looking at the N difference in only 7 handpicked studies.

This is an amusing example of what happens when so called authorities are held to account. Let's say someone reports that fact checking organizations are a waste of taxpayer dollars. Then the fact checking organization comes out with a 'Fact Check: False'. It seems there should be some prohibition about being an authority on criticisms of yourself.


I've always struggled with the phrase 'replication crisis.' Really we have a 'mindset to approach science crisis.' The replication crisis is a couple of things to me...

1) A problem with how research is published and treated as 'new knowledge' - some of which is being solved (slowly) by efforts such as the OSF. Other efforts include registered reports, journals for nonsignificant findings, etc. The problem is journals seek to publish a 'contribution' and define contribution in a way that misrepresents science.

2) A problem with the promotion and rewarding of academics...see 1 and also the drive to produce work in academia results in rewarding minuscule steps that don't have larger coherence because the paper not the impact is rewarded. People get tenure for publications not scientific discoveries. This obviously links to 1

3) Often replication failures are a feature, not a bug - but fundamentally we think and talk about science in too absolutist a frame. This is especially a problem in the social sciences, which try to perform 'science' to gain equal treatment with 'hard science.' They aren't worse or better...they are different. Their science is harder because context matters - but it is still science. There are myriad studies in psychology performed on white men from Harvard in the 40's (forgive my facetiousness). If I rerun the experiment and it 'fails to replicate,' is the theory 'wrong', 'invalid', 'incomplete', or context-specific? The choice of term there is critical to a set of philosophical beliefs about science that can actually imperil science by making different experimental results into an inherently bad thing.


This all depends on how you measure it. Take Roe v Wade for example. Abortion pre-Roe v Wade was really not that big of a political issue. Just yesterday I was listening to the radio on a long drive and a company called "Patriot Network" was advertising a cellphone network for "Patriots". The reason? Verizon and other big phone companies donate to Planned Parenthood.


Part of the reason it wasn't a very big issue pre-Roe v Wade is that on-demand abortions were illegal in 46 states prior to it.


"In 1971, delegates to the Southern Baptist Convention in St. Louis, Missouri, passed a resolution encouraging “Southern Baptists to work for legislation that will allow the possibility of abortion under such conditions as rape, incest, clear evidence of severe fetal deformity, and carefully ascertained evidence of the likelihood of damage to the emotional, mental, and physical health of the mother.” The convention, hardly a redoubt of liberal values, reaffirmed that position in 1974, one year after Roe, and again in 1976." https://www.politico.com/magazine/story/2014/05/religious-ri...


There is an enormous amount of nuance in that statement. The real change that occurred was that nuance evaporated from the discussion of the subject.


"Not that big of a political issue" is a pretty interesting take on a massive turning point in both women's rights and conservative America's increasing drift to the right.


Patriot Mobile is, of course, an MVNO running on the Sprint network.


Meta -- but does anyone else get tired of the word 'myth'?

These days, instead of opinions and arguments being countered, we have 'myths' busted.

It's even weirder when it is 'myth' you've never heard of.


Yes, it's tiresome. Not referring to this one in particular, but many internet champions of truth aggrandize their Goliath to post shitty myth-busting articles. In a recent post I chose to say "mistaken belief" to avoid sounding like captain fact-checker.


It's like the questions no one ever asked that end up in "FAQ"


To paraphrase:

> The effect: "An ideologue believes a (wrong) thing. You tell them, no. They double down. Your effort to refute has backfired." This is actually rare.

It doesn't matter, because we've all been there. We've all experienced clashes of opinion, and none of us like being wrong, so we resist adjusting.

Lots of us do it. It's a natural part of childhood. Being afraid of the dark. Believing in ghosts. Santa Claus. The Easter Bunny. The Tooth Fairy. How did you cope?

What about other aspects of belief? Religion? What aspects of religion does one regard as history? How many people remain religious in the face of contrary evidence?

Suddenly, this effect seems less rare.


The backfire effect is obviously not universal, otherwise it would be impossible to teach for example Newtonian physics. So if you present strong enough proof people will change their minds.

I think the problem is that people don't really learn what a strong proof is, so when two people who each have a weak understanding of the topic try to argue, each will see that the other is ignorant but not that they themselves are ignorant. After the discussion both will have correctly refuted a lot of claims from the other side, thus "strengthening" their own views. If this often happens to you, then you are likely not as well informed as you think you are.

In short, I believe that "you can't reason someone into a position that you didn't reason yourself into" is true, while the popular "you can't reason someone out of a position that they didn't reason themselves into" is wrong.


Not necessarily. Science advances when entrenched proponents of the old theory die out.


I don't know. Having read it only makes me believe more strongly in the backfire effect.


"Fact finding organization finds that fact finding matters, despite some rumors that facts don't matter."

Right.


“Read not to contradict and confute; nor to believe and take for granted; nor to find talk and discourse; but to weigh and consider.” -Francis Bacon


The term used for this by philosophers, at least after Hegel, is "the dialectic". I've always thought that many philosophers' uncritical acceptance of a dialectical process within the world was absurd. I am happy to see some evidence from science indicating this to be the case.


Belief in the backfire effect confirms the bias many people have towards believing their political opponents are idiots, immune to rational thought.


That’s only true if you believe your own side and yourself aren’t just as vulnerable.


I didn't say one side experiences this more than another.

There are rational people on all "sides". Rational people may come to disagree with each other if they have different information available to them, or if they simply have different priorities.

Anybody who thinks all of their opponents are irrational, by virtue of being their opponents, is themselves irrational in that respect. However, that says nothing about the political distribution of this particular cognitive bias. I have made no claims concerning that.


"I didn't think the backfire effect was real, but now I'm not so sure."

Is there a catchy name to describe the effect of disbelieving social science research?


Not sure, but social science research is currently in the midst of a replication crisis. And not necessarily due to anyone acting in bad faith: you can run an experiment twice a few months apart and get opposite results. No idea how to fix it, but the field probably needs to invest in more robust experiment design, larger samples, and perhaps more advanced statistical methods. Sadly all of those demands cut against the "publish or perish" mentality in academia, which often favors quantity over quality. A shame, since most social science researchers want to do the best work possible but are constrained by $$$ and the broader culture of expectations in academic research.
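The "run it twice, get opposite results" problem often comes down to statistical power, and is easy to demonstrate without any bad faith. A minimal simulation sketch (the effect size d = 0.2 and group size n = 20 are illustrative assumptions, not numbers from any cited study):

```python
# Simulate many small two-group experiments measuring the SAME true
# effect, and see how unstable the observed effect is.
import math
import random
import statistics

random.seed(0)

def run_study(true_effect=0.2, n=20):
    """One experiment; returns the observed standardized effect size."""
    control = [random.gauss(0.0, 1.0) for _ in range(n)]
    treated = [random.gauss(true_effect, 1.0) for _ in range(n)]
    pooled_sd = math.sqrt(
        (statistics.variance(control) + statistics.variance(treated)) / 2
    )
    return (statistics.mean(treated) - statistics.mean(control)) / pooled_sd

results = [run_study() for _ in range(1000)]
wrong_sign = sum(1 for d in results if d < 0) / len(results)
print(f"mean observed effect: {statistics.mean(results):.2f}")
print(f"fraction of studies with the wrong sign: {wrong_sign:.0%}")
```

With these numbers, roughly a quarter of honest replications point in the *opposite* direction of the true effect, so two labs disagreeing tells you little on its own -- hence the push for larger samples and pre-registered designs.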


It's in a replication crisis because pretty much none of it is science (no replicable testing possible - hypothesis, experiment, theory). It's why Richard Feynman associated social science with pseudoscience.

https://www.youtube.com/watch?v=tWr39Q9vBgo

Because real science destroyed the credibility of religion and religion in much of the world is no longer a credible social control tool, the elites needed a new form of religion to control society. That new religion is social "science". Whereas religion controlled everything from economics, schooling, family, culture, society, law, etc, now they all fall under the pseudoscience/religion called social "science".

---------------------------------------------

Reply to ziddoap.

Considering you tossed around "illuminati-esque", I doubt you are interested.

I consider social science to be a pseudoscience for the same reason Richard Feynman did. Did you bother watching what he had to say?

Social "science" is one of the humanities. It belongs in the category with philosophy, ethics, literature, religion, etc.

Just because I said it is a pseudoscience doesn't mean that I think it is useless or bad necessarily. No more than I think literature, ethics, philosophy or even religion is bad.

I just think social "science" is a "religion" trying to latch onto the good name of real science. Just like creationism "science" or all the other fake "science" trying to gain credibility by associating itself with science.


I see this position somewhat often, almost unavoidably accompanied by a reference to Feynman. The position is, of course, pure nonsense if you take a few moments to think it through.

Not only has the world moved on drastically from when Feynman, a non-expert in the area, wrote that essay, but it is also ludicrous to claim that a part of existence is unamenable to scientific study. If it exists and has an effect, it can be studied. There is no reason to believe human behaviour and thought is beyond this.


>Not only has the world moved on drastically from when Feynman...

You're right, social science got even less replicable and less scientific.

>If it exists and has an effect, it can be studied.

Yes, you're right. But that doesn't mean that you can ground it in empirical evidence or effectively apply the scientific method of inquiry. Philosophy is a method of studying human behavior -- it is not, however, science. And for substantially the same set of reasons the social sciences are also not science.


> You're right, social science got even less replicable and less scientific.

You'll need to substantiate this claim, of course.

> Yes, you're right. But that doesn't mean that you can ground it in empirical evidence or effectively apply the scientific method of inquiry.

Why not?

> Philosophy is a method of studying human behavior -- it is not, however, science. And for substantially the same set of reasons the social sciences are also not science.

You are simply repeating the old misconception I've hinted at: that human behaviour is off-limits to scientific inquiry, even though it is real and physical. I fail to see why this would be the case. We are, after all, talking about measurable, quantifiable inputs and outputs of human behaviour.


You're arguing that everything can be subject to exploration via scientific method, and he's arguing that some people reject this anyway. Here is a quote about an ideological split that happened in the anthropology community:

>The divide is trenchantly summarized by Lawson and McCauley (1993) who divide between ‘interpretivists’ and ‘scientists,’ or, as noted above, ‘positivists’ and ‘naturalists.’ For the scientists, the views of the ‘cultural anthropologists’ (as they call themselves) are too speculative, especially because pure ethnographic research is subjective, and are meaningless where they cannot be reduced to science. For the interpretivists, the ‘evolutionary anthropologists’ are too ‘reductionistic’ and ‘mechanistic,’ they do not appreciate the benefits of subjective approach (such as garnering information that could not otherwise be garnered), and they ignore questions of ‘meaning,’ as they suffer from ‘physics envy.’

cite: https://www.iep.utm.edu/anthropo/#SH4b


>We are, after all, talking about measurable, quantifiable things inputs and outputs regarding human behaviour.

I don't think it's true. Looking at behavior is like trying to guess at the internals of a black box piece of software that's very well obfuscated + randomized.

The empirical way to do it is more along the lines of neuropsych -- taking a look at the physical processes involved. I think human psychology and behavior is a lot like the plumage of a peacock; pretty, loud, but ultimately an abstraction above what is really going on.


> The empirical way to do it is more along the lines of neuropsych -- taking a look at the physical processes involved. I think human psychology and behavior is a lot like the plumage of a peacock; pretty, loud, but ultimately an abstraction above what is really going on.

There is no denying it is an abstraction. I would in fact claim the converse: all human scientific study to date has dealt with abstractions of varying degrees.

To offer a counterargument, we treat a great deal of systems like (semi-)black boxes and ultimately manage to derive useful statements about those systems. Examples include economics, black-box software analysis (like fuzzing), biology (e.g. we've learned many useful facts about the human body before we even knew of the existence of the cell), even basic physics (like the physics of gasses, as an epitome of something randomized, but still following broad rules). There is no reason to assume human psychology is different in this particular aspect.

In fact, we cannot assume so, as a lot of psychological knowledge we've gleaned demonstrably works, despite other results proving to be irreproducible due to the usual reasons[1]. As an example, it's hard to deny the existence and predictive power of modern human personality trait models.

It should also be telling that many modern, advanced statistical tools were invented by none other than psychologists, for use in psychology.

Neuropsych is definitely a worthwhile approach and I wouldn't separate it from the rest of psychology. Ultimately, we cannot really hope to derive everything using the bottom-up approach any time soon so a variety of approaches are needed. Also, to continue the above example, many studies of personality traits have shown connection to underlying genetic and environmental factors, such as this one: http://doi.org/10.1016/S0191-8869(01)00137-4

[1]: Science is hard, humans like to cut corners (and many times need to), and the black-box system is extremely complex (as you noted).


Because most humans behave sufficiently differently from each other. Even if you experiment on a subset of humans and get knowledge about this subset, a different subset of humans could react completely differently. It's so bad that even the same subset of humans could react completely differently if you do the same test 50 years later.


> Even if you experiment on a subset of humans and get knowledge about this subset, a different subset of humans could react completely differently.

They could, but that does not mean they do. There are quite obviously rules and patterns to much of human functioning. Denying so seems like human hubris.

Even if each human displays unique behaviour for a particular trait, knowing that it is so for that particular trait is useful and therefore still amenable to scientific exploration. Even if humans reacted randomly in some situation, the random behaviour would be subject to a probability distribution and knowing it would be useful.

It's hard for me to see where exactly the leap to "it's impossible to study human behaviour scientifically" is necessary, particularly when we have so much evidence to the contrary.


It's not science if you don't reliably get the same output if you provide the same input. It's useful, sure. But it's not science.


Sorry, but this just sounds like a deepity. Science is a process, not a result.

It holds for most of science most of the time that you don't reliably get the same output if you provide the same input (because you don't know all the variables or the entire set of equations). Only when a phenomenon is completely known does this stop being true.

But when is a phenomenon completely known? After all, for a long time we believed classical mechanics to be completely known... Except it wasn't. And during the time we thought it was, you could get into exactly the type of situation you describe above: for the "same" input, you could get a different output, depending on the components of the stress-energy tensor you were not aware were relevant. The effect was subtle there, of course, but there are many examples where it isn't (e.g. the entirety of biology and medicine).

So I disagree with this description of science.

EDIT: Also, it completely slipped my mind the first time around because it's such a stupidly strong counterargument, but by your definition the entirety of modern physics (quantum mechanics, quantum field theory and beyond) is not science.


Science is useful because it has the power to predict. It gains this power from getting the same output when providing the same input. If what you are doing doesn't have the power to predict, it's not science. You can still apply the scientific method to what you are doing, and if you're applying that method you might as well call yourself a scientist and what you're doing science. But then again, a few hundred years ago scientists didn't yet know that things like alchemy weren't science, so they applied the scientific method to them and figured out that they weren't useful.


> Even if you experiment on a subset of humans and get knowledge about this subset, a different subset of humans could react completely differently.

The same is true of chemicals, rocks, or lots of other categories of things subject to scientific inquiry. In fact, interesting scientific results tend to come from how different subsets of categories like that behave in similar conditions, rather than being made impossible by such differences.


> But that doesn't mean that you can ground it in empirical evidence or effectively apply the scientific method of inquiry.

Yes, it does. The scientific method and empirical investigation apply to any phenomena in the material universe, including human behavior.

Such investigation might lead to models with irreducible areas of randomness if there are some areas of human behavior not controlled by deterministic laws, but our best models in certain areas of the physical sciences also have irreducible randomness, so that's not a distinguishing feature.


> pretty much none of it is science ( no replicable testing possible - hypothesis, experiment, theory )

By this standard, neither is most preclinical cancer research as so few of them replicate (11% - source: https://www.nature.com/articles/483531a). Even social science puts up better numbers than that (~60%)


There is a difference between bad science and "not being a science" ( aka pseudoscience ).

The difference is that in one you can formulate replicable experiments. In the other, by its nature, you can't. Because one deals with "natural law" and the other with society.

There is no "replicable scientific test" to determine whether capitalism or socialism is the best economic system. There is no "replicable scientific test" to determine whether to have the death penalty or not. So on and so forth. Much of it is pretty much a "religious" endeavor: those with power decide, and social "science" is used to justify the decision, whereas in the past religion was the justification.


"... Because one deals with "natural law" and the other with society..."

Society isn't... natural?

"There is no "replicable scientific test" to determine whether capitalism or socialism is the best economic system. There is no "replicable scientific test" to determine whether to have the death penalty or not. "

I wasn't aware that social science even attempted to answer these questions? Like most sciences, and this study, it attempts to study phenomena as they occur. In this case this meta-study was trying to see if the backfire effect actually exists... which isn't making any moral or societal debate or opinion, just trying to verify that a phenomenon exists. Which sounds pretty scientific to me?


Studying behavior is not equivalent to studying, say, astrophysics.

It is of course "natural" in the same sense that everything is natural, but the lack of the ability to ground research in empirical evidence means that the fields of economics, social science, psychology, and especially evolutionary psychology etc. are not actually engaging in science.

The form, structure, tendencies, or beliefs of a human culture are not at all analogous to, say, studying a distant and ancient star by observing its emission spectrum, or investigating the nature of reality by investigating subatomic particle interactions. This fundamental difference means that these fields will never be as dependable as true science; to be clear, that doesn't mean they're useless, only that they're not science.


Not sure if you'd see my edit so I thought I'd reply directly.

>Considering you tossed around "illumati-esque", I doubt you are interested.

I am interested.

>Did you bother watching what he had to say?

I have.

>Just because I said it is a pseudoscience doesn't mean that I think it is useless or bad necessarily. No more than I think literature, ethics, philosophy or even religion is bad.

I never said that you think it's useless, or bad.

>I just think social "science" is a "religion" trying to latch onto the good name of real science. Just like creationism "science" or all the other fake "science" trying to gain credibility by associating itself with science.

I actually agree here, to be honest.

>the elites needed a new form of religion to control society.

This is literally the only thing I took issue with. I was, and am, genuinely curious on all the other stuff. I was hoping you would expand on it. I, however, thought it prudent to mention that I'd not be interested in reading it from the "elites controlling society" position.


>the elites needed a new form of religion to control society. That new religion is social "science".

Source?

I'd be interested in hearing more about why you think all social science is pseudoscience. However, if it's going to be the illuminati-esque, I'll take a pass.


Not exactly a source, but from my personal experience I have seen people believing in social science results with a fervor that matches religious people believing in religious material. Questioning a study, even with valid reasoning, earns treatment comparable to that for questioning religious teachings. Bringing up an alternative study, if it disagrees with the person's own leanings, is comparable to quoting the wrong religious book to a religious individual. Having significant experience in both religious communities and the social sciences, I feel there is a lot of overlap, and I personally see nothing wrong with viewing it as a replacement religion for those who have left the classical ones behind.

Now to clarify, I am not saying it is pseudoscience or some conspiracy by the elites. I think it happens, to give an overly summarized summary, because religion fills a spot in the average human's psyche that, when empty, people seek to fill with something else, and the social sciences are similar enough to serve as a good replacement. As for the reliability of the science, there is a reproducibility problem, and the social sciences are plagued with issues to a far greater extent than the hard sciences. That doesn't mean it is fake, but studies, especially those with few variations and replications, need to be taken with a measured serving of salt.


Interesting perspective for sure, and I think I can agree that people seem to get incredibly invested in some of these studies and subsequently defend them, as you described, with a religious fervor.

My bigger issue with the parent comment was more aimed at the matter-of-fact "it's the elites" statement, which I have little patience for. Using a conspiracy to justify your thoughts that something else is a conspiracy is a bit circular.

Thanks for the thoughtful reply!


People believe in "science" with the same fervor. The scare quotes are because someone will point to a study as definitive - the final answer - and use it as a bludgeon (even if the subject is far from settled or the science itself questionable).

Science is a method of inquiry, not a tool to "win" a debate. I think you are just observing human nature, in this case reflected in the so-called social sciences, but it can manifest in relation to hard science as well.

In both cases people are inclined to support what already matches their world view (which they project with their thoughts and actions and are reflected back filtered through their belief systems).


'Patience', 'Stability', 'Reason'. Taking most results at face value seems unwise when contradictory results are so often found shortly after.


"Skepticism"?



That thing you thought was true all your life is actually false.

1 year later: That thing you thought was false for the last year was actually true.

At every turn trying to make us feel stupid. It's almost like somebody is trying to drive us crazy.


Understanding is a process. Believing what we have the best evidence to believe is not stupid. If the evidence changes and you shift your belief you are not stupid.

I think the reality is that humans are very complex, especially how we relate to each other. Studies that try to boil down our behavior and provide actionable results are highly demanded, but are very difficult to prove.


Believing the best evidence is not a good idea if the evidence is weak. One thing that many science journalists are not very good at is conveying how strong the evidence is.


It's a good idea as long as the belief is proportionately weakly held.


But this is not the way most people think about things. For most people it's white or black, it is or it isn't. Speaking of degrees of belief makes you look like an indecisive fool at times.


If I were running for office I might be a bit careful about how I expressed such things, but having a degree of belief proportional to the degree of evidence appears the opposite of foolish to me. I hold that belief moderately strongly.


Do you change your diet based on weak science? If you tell someone you weakly believe a paper about diet science, do you think they might change their diet?


If, a priori, food A and food B seem equivalent to you, and you're currently eating food A, and then you get a weak report that food B is better, then it seems like an ok decision to switch to B. If your prior belief was that food A was a little bit better, then you might switch or not depending on the exact magnitudes of "weak" and "a little bit". If your prior belief was that food A was moderately or significantly better, then the weak report would not be sufficient evidence to switch (though it could be good reason to do more research).
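The prior-versus-evidence reasoning above can be sketched as a simple Bayesian odds update. This is a hypothetical illustration (the function name and the numbers are mine, not from the thread); a "weak report" corresponds to a likelihood ratio close to 1:

```python
def posterior_prob_b_better(prior_b, likelihood_ratio):
    """Combine a prior probability that food B is better with a new
    report whose strength is a likelihood ratio (> 1 favours B,
    close to 1 means weak evidence). Returns the posterior probability."""
    prior_odds = prior_b / (1.0 - prior_b)
    post_odds = prior_odds * likelihood_ratio
    return post_odds / (1.0 + post_odds)

# Indifferent prior (0.5) + weak report favouring B:
# posterior rises above 0.5, so switching to B looks reasonable.
switch_case = posterior_prob_b_better(0.5, 2.0)

# Prior moderately favouring A (prior_b = 0.25) + the same weak report:
# posterior stays below 0.5, so the report alone shouldn't flip you.
stay_case = posterior_prob_b_better(0.25, 2.0)
```

The point matches the comment: the same weak report flips an indifferent prior but not a moderately confident one.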


You have a lot of faith in people! What I observe is that people change their diets all the time based on weak science.


There is a subtle false dilemma there. That we have to believe one way or the other.

A better policy would be to assign some degree of belief to both possibilities.


> The current conventional wisdom is that checking backfires.

And a not so subtle imputation which probably has a much larger effect on the willingness to look at facts in the first place. But I want everyone not sharing my opinion to be irrational too.


Be like water. You can believe, just don't attach yourself to those beliefs. Simple in concept, not so much in execution. Worth practicing.


I agree, but "attachment to belief" is not solely in the mind. Your beliefs are also "attached" to all of the decisions you make based on those beliefs. Those decisions alter the "position" of your life, with that position being highly dimensional (the city you live in, the kind of house you have, how you transport yourself, how many kids you have, what kind of diploma you have, how you raise your children, etc). And changing position often comes at a great cost.

Assume you have a belief that fluctuates frequently (perhaps due to changing scientific or expert consensus). Also assume that this belief is attached to the position of your life in a highly costly manner. It is entirely reasonable to insulate your beliefs, and thus the cost of altering the position of your life to match the external belief, from the frequent external fluctuation of this belief. I'd imagine this is the fundamental (non-politicized) driving force behind conservatism.

I see this strategy playing out as illustrated in the graph below. The straight line is your internal belief. The fluctuating line is external belief. Effectively, you are running a filter on external belief, making you slow, but eventually able to react to the changing external belief. However, since your sentiment fluctuates less than external sentiment, your costs are kept low.

  s
  e|
  n|    /\      /\      /\
  t|  _/__\____/__\____/__\__
  i|  /    \  /    \  /    \
  m| /      \/      \/      \
  e|___________________________
  n       time
  t
Everyone practices this strategy to some degree. Some are more mindless about it than others. Some practice it with a greater emphasis than others. Let's call this strategy an "insulated belief system". Conservative viewpoints are traditionally associated with an insulated belief system since insulating oneself implies confidence in oneself as an isolated individual. Liberal viewpoints do not associate with an insulated belief system since liberals tend to rely less on oneself and more on the community. Despite this, it isn't impossible to be a liberal who owns an insulated belief system. It would just be rare.
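The "running a filter on external belief" idea above can be made concrete. A minimal sketch, assuming an exponential moving average as the smoothing filter (the function name, the inertia parameter, and the signal are all illustrative, not from the comment):

```python
def insulated_belief(external_signal, inertia=0.9):
    """Smooth a fluctuating external signal with an exponential
    moving average; higher inertia means a more insulated belief
    that reacts slowly to swings in external sentiment."""
    internal = external_signal[0]
    history = []
    for x in external_signal:
        internal = inertia * internal + (1 - inertia) * x
        history.append(internal)
    return history

# External "consensus" oscillating between 0.1 and 0.9 over time:
external = [0.5 + 0.4 * (-1) ** i for i in range(20)]
internal = insulated_belief(external)
# internal drifts toward the long-run average and swings over a
# much narrower range than the external signal does.
```

As in the graph, the internal line tracks the external one only slowly, so the cost of repositioning your life stays low while the long-run trend still gets through.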


Yes, but there's always narrative driving the change of thought.

Once we go back and forth a few times, it strains the credibility of those driving the narrative.

So Vietnam War, 'Low fat' diets of the 1980's (which led to high carb/sugar) diets etc..


When dealing with humans, the likely answer is that the relationship is far more complex and based on variables that are beyond our ability to adequately grasp and measure. So what appears to be the same study done multiple times can get different results. As time progresses, social change can lead to what was once a consistent result of outcome A becoming inconsistent results, and eventually becoming consistent results of outcome B.

Also, many studies are done primarily on college students, so trends that may well be true for them may not apply to the general population. So as studies become better at getting a representative cross section of the population results can appear to change despite early studies being correct.

A large part of this is that results are extremely overstated in the social sciences. If not by the scientists, then by the media and popular culture.


Hopefully at the end of the day you believe less conclusively in things that are not scientifically 100% certain.


Well, both get clicks and impressions... "Water still wet" does not.



