Why Most Published Research Findings Are False (2005) (plos.org)
206 points by Michelangelo11 18 hours ago | 239 comments





There is at least one thing wrong here. This is an essay about a paper built on simulation-based scenarios in medical research. It then tries to generalize to "research" while glossing over how narrow the support for that claim is. I think the point is true, and it should make us more cautious when deciding based on single studies. But things are different in other fields.

Also, this is called research for a reason. You don't know the answer beforehand. You have limitations in the technology and tools you use. You might miss something, or lack access to information that could change the outcome. That is why research is a process. Unfortunately, popular science books talk only about discoveries and results that are considered fact, and rarely about the history of how we got there. I'd suggest a great book called "How Experiments End"[1], which goes into detail on how scientific consensus is built across many experiments in different fields (mostly physics).

[1] https://press.uchicago.edu/ucp/books/book/chicago/H/bo596942...


I think the best way to view this paper is as a sort of meta-analysis of a wider literature around null hypothesis testing and p-values. That literature goes back at least to the 1970s with the work of people like Paul Meehl and Gene Glass. But you can push it further back, like the 1957 Lindley Paradox that Ioannidis cites.

Part of the reason this paper was impactful is that it was short and punchy, took aim at all of medicine rather than a smaller subfield, and didn't require as much mathematical understanding as other papers.

I knew some of the big names that were hit by the replication crisis. And before that I spent some time trying to talk to psychology researchers at a top school about the problems with statistical testing. But they had limited knowledge of stats and didn't want to go out on a limb when everyone else in the field seemed okay with the status quo. A paper like this can be read by everyone and makes a forceful argument.

> It then tries to generalize to "research" while glossing over how narrow the support for that claim is

This is a good point. The methods in medicine and the social sciences are especially weak and prone to these sorts of criticisms. In the physical sciences, often you can run enough iterations of the experiment to overwhelm any prior.

> You have limitations in the technology and tools you use. You might miss something, or lack access to information that could change the outcome. That is why research is a process.

I totally agree. Science is basically a control system, or a root-finding algorithm, or gradient descent. At any time t there is a gap between the best-known science and the truth. But the point is that science converges to the truth over time, whereas no alternative does.
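A toy sketch of that convergence intuition (my own illustration; the numbers are made up, not anyone's actual model): treat each study as a noisy measurement of a fixed truth, and a running mean converges on it:

    import random

    TRUTH = 3.14                    # the unknown quantity being estimated
    estimate, n = 0.0, 0
    for _ in range(10_000):
        study = TRUTH + random.gauss(0, 1.0)  # each study: truth plus noise
        n += 1
        estimate += (study - estimate) / n    # running-mean update
    print(f"after {n} studies: {estimate:.3f}")  # tends toward 3.14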


The scientific method converges on the truth. But science need not.

If the foundational assumptions are wrong or impossible to challenge, time t can extend indefinitely. Additionally, it is surely possible that whole sections of science are waylaid and diverge from the truth on account of funding, legislation, big personalities, etc.


> If the foundational assumptions are wrong or impossible to challenge, time t can extend indefinitely.

This is an important point, but fortunately nature has provided us with a solution. Namely that teenagers are defiant and look for ways to distance themselves from their elders.

This is Planck's principle that science advances one funeral at a time [0]

> A new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die and a new generation grows up that is familiar with it ...

> An important scientific innovation rarely makes its way by gradually winning over and converting its opponents: it rarely happens that Saul becomes Paul. What does happen is that its opponents gradually die out, and that the growing generation is familiarized with the ideas from the beginning: another instance of the fact that the future lies with the youth. — Max Planck, Scientific autobiography, 1950, p. 33, 97

The biggest impediments have historically been multi-generational organizations whose power requires limiting access to science: famously the centralized church in Galileo's time, more recently the cigarette and oil industries. Things like big personalities and legislative priorities tend to operate on much shorter time scales and usually allow science to ratchet forward generationally. Big personalities die, and legislators turn over every few years in the best case, or in a few generations in the worst case.

[0] https://en.wikipedia.org/wiki/Planck%27s_principle


It's not only teenagers. Lots of people are defiant through their entire lives, and those people do disproportionately like to work in science.

Intra-generational power structures are a much larger impediment. Funerals take way too long to happen.


>The scientific method converges on the truth.

There's actually quite a big assumption in here -- namely, that the truth is constant over time and throughout space. (This can possibly be weakened slightly, but that's the gist.)

I personally think the laws of physics probably are unchanging, and I certainly hope they are (because that means the scientific method converges). But whether they actually are is not only unknown, but a question that cannot be answered by empirical science.


This is an interesting point and one that physicists are aware of.

But from a technical perspective, the math is perfectly capable of describing theoretical universes where the laws of physics change in time or space - for example, universes where physical constants smoothly evolve, or manifolds that don't look the same in all directions.

The math can tell us what observations we'd expect in those situations, and so far we haven't observed anything to indicate we should relax those assumptions.

If we did observe that the laws of physics were changing, that observation would be considered science in the traditional sense. So it's not that "truth" is shifting, it's that "truth" is a family of equations indexed by some parameter rather than a single equation. That's a similar flavor to the jump from Newtonian physics to relativity.

If you're interested in this topic, a relevant key word is "cosmological principle": https://en.wikipedia.org/wiki/Cosmological_principle


>There's actually quite a big assumption in here -- namely, that the truth is constant over time and throughout space.

The current models include these assumptions for the same reason they include any other assumptions: models without them don't do a better job of explaining experimental results. If new experimental data shows these assumptions fail in some cases, then the models can be updated to handle that, the same as any other failure has been handled in the past. (Granted, each step becomes computationally more complex, so eventually we might hit a limit where a model is too complex for a human to understand well enough to operate on.)


Or stops at a local minimum :D

Yes this is a problem and it can (and has) lasted for centuries.

I think this is the best way to talk about the difference between science as a process vs science as what people in white lab coats do.

People have a tendency to conclude that we should abandon science if something isn't quite right. It's better to think in terms of what will get us un-stuck from a local minimum/maximum.


>time t can extend indefinitely

Isn't that generally what convergence means? At least in the math sense, we are talking about t at infinity, meaning that in any real-world, time-limited application there is always a gap.


But what is “truth”?

Useful models that allow us to make predictions.

The models are just an approximation to the truth. The truth is objective reality itself.

Objective reality doesn't allow us to make predictions by itself; it simply is. Unfortunately we don't have a time-reversing machine to run entropy in reverse and see the thermodynamic truth. All we have is models, your consciousness being one of them.

To say of that which is, that it is, or of that which is not, that it is not.

Indeed. There is no independent arbiter. One can imagine that the collective, consensus answer is best, or not. If it is possible to steer that answer towards something that is beneficial to someone somewhere - why wouldn't vested interests do that?

I think it's clear that this paper has stood the test of time over the last 20 years. Our estimates of how much published work fails to replicate or is outright fraudulent have only increased since then.

[Please consider the following with an open mind]

Just because a study doesn't replicate, doesn't make it false. This is especially true in medicine, where the potential global subject population is very diverse. You can do a small study that suggests further research based on a small sample size, or even a case study. The next study might have a conflicting finding, but that doesn't make the first one false - rather, it's a step in the overall process of gaining new information.


I think it's much, much more powerful to think of "failure to replicate" as "failure to generalize."

Absent actual fraud or a methodological mistake that wasn't represented/caught in peer review, it's still extremely difficult to control for all possible sources of variation. That's especially true as you go further "up the stack" from math -> physics -> chem -> bio -> psych -> social. It is absolutely possible to honestly conduct a very high quality experiment with a real finding, but fail to account for something like "on the way here, 80% of participants encountered a frustrating traffic jam."

Their finding could be true for people who just encountered a traffic jam, and lack of replication would be due to an unsuccessful generalization from what they found.


Dislike being a pedant, but the stack was missing math up front.

Math isn't a science; it is a tool we can use to construct coherent arguments. We can construct such arguments about our world, which is what science aims to do, but also about many other possible worlds. We can consider correct deductions within a mathematical system as facts, but they do not represent facts in the "real" world.

Elegant explanation of why I felt it didn’t belong! Thanks for writing :)

> but they do not represent facts in the "real" world.

In what sense do they not? On the assumption that there can be other "worlds" for which math, but not physics, holds?


One I always go back to: You can represent a perfect impulse (think electrical signals, a switch from 0 (low voltage) to 1 (high voltage)) mathematically, but it's impossible to physically create.

In our universe there are some number of physical constants that cannot be determined from maths alone, but only measured. If you go about changing these physical constants then we don't have physics as we know it (change 1/137 to 1/140 and electromagnetics no longer works). You get some totally different physics of which there may be nothing more complicated than hydrogen, or maybe hydrogen doesn't even exist at all.

https://en.wikipedia.org/wiki/Fine-structure_constant


Yes, you can make perfectly valid mathematical systems that have zero anchoring to any physical reality we experience (such as, trivially, n-dimensional geometries).

A geology that isn't anchored to our physical reality seems intrinsically invalid.


Haha, math strikes me as a bit different from the others... but I'll add it just for you ;)

Dislike being a pedant but the stack is missing philosophy up front.

> Just because a study doesn't replicate, doesn't make it false.

But it also doesn’t make it not false. It makes the null hypothesis more likely to be true.


That is certainly one possible interpretation.

The other is the introduction or loss of critical cofactors or confounders that radically change environment and context.

Think of experiments of certain types before and after COVID-19.


"Just because a study doesn't replicate, doesn't make it false."

This is a subtle point, but truth or falsity isn't really the issue. The problem with a non-replicable study is that the rest of science can't build on it. You can't build your PhD on top of a handful of studies that turn out to be non-replicable, and so on. It is true you can't build science on outright false statements, either, but true statements that aren't adequately reproducible are also not a solid enough foundation. That may seem counterintuitive, but it comes down to this truth not being binary; even if a study comes to a nominally true conclusion it still matters if it didn't do it via the correct method, or is somehow otherwise deficient in the path it took to get there. Studies are more than just the headline result in the abstract.

But the whole process of science right now is based on building up over time. How could it not be? It has to be, of course. But non-replicable studies mean that the things you're trying to build on them are non-replicable too. It doesn't take all that much before you're just operating in a realm of flights of fancy where you may "feel" like you're on solid ground because of all the Science you're sitting on top of, but it's all just so much air.

However, it is also true that non-replicability is a signal of falsity, simply because the vast, vast, exponential majority of all possible hypotheses are false. Another subtle point: a scientist engaging in science properly should probably not come to that conclusion, and may not want to change their priors about something very much because of a single non-reproducible study. But externally, from the generalized perspective of "what is true and what is not true" - where science is merely one particularly useful tool and not the final arbiter - I may be justified in taking non-replicable studies and updating my priors to increase the odds of the hypothesis being false. After all, at the very least, a non-replicable study tends to put an upper bound on how strong the hypothesized effect can be (e.g., if someone studies whether or not substance X kills bacteria Y, and it turns out not to reproduce very well, the lack of reproducibility does fairly strongly establish it can't be that lethal).
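To put rough numbers on that prior update, here is a minimal Bayes sketch; the parameter values are illustrative assumptions of mine, not measurements from any study:

    prior_true = 0.30   # assumed prior probability the effect is real
    power = 0.80        # assumed chance a real effect replicates
    alpha = 0.05        # false-positive rate under the null

    p_fail_true = 1 - power    # P(failed replication | effect real)
    p_fail_false = 1 - alpha   # P(failed replication | no effect)
    posterior = (p_fail_true * prior_true) / (
        p_fail_true * prior_true + p_fail_false * (1 - prior_true))
    print(f"P(effect real | failed replication) = {posterior:.2f}")  # ~0.08

Under those assumptions, one failed replication drops the probability the effect is real from 30% to about 8%.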


Outright research fraud is probably very rare; the cases we've heard about stick out, but people outside of academia usually don't have a good intuition for just how vast the annual output of the sciences is. Remember the famous PhD comic showing how your thesis is going to be an infinitesimal fraction of the work of your field.

Research fraud is likely very rare, but this isn't just about a few stories of unreplicable studies that stick out. There was a study a few years ago that tried to replicate a bunch of top-cited psychology papers, and the majority of the experiments did not replicate. People then did the same for other disciplines, and while the results weren't as bad as psychology's, there were plenty of papers they couldn't replicate.

Every time this topic comes up I'm reminded of what Stefan Savage, a hero of mine, said about academic papers ("studies", in the sense we're discussing here): they are the beginnings of conversations, not the end. It shouldn't shock people that results in papers might not replicate; papers aren't infallible, which makes sense when you consider the process that produces them.

That is a generous interpretation. But in many cases we try our best to dress up studies and tell good stories, preferably stories with compelling positive statistics and slick figures. The storytelling often obscures the key data.

Yes, papers start conversations, not end them. Replication issues are part of the academic process.

> Outright research fraud is probably very rare

Not sure what rare means in this context. The more important the research, the more likely fraud is involved. So in terms of size of impact, it's probably very common.

And then if you combine this with poorly done, non-repeatable, or inconclusive research being parroted as discovery... you end up with quite a bit of BS research.


Is incompetence fraud? Or just incompetence? I'm asking because a fair number of the molecular biologists who get caught by Elizabeth Bik for copy/pasting images of gels insist they just made honest mistakes (with some commentary about the atrocious nature of record-keeping in modern biology).

I alter Ioannidis's conclusion to be instead: "Roughly 50% of papers in quantitative biological sciences contain at least one error serious enough to invalidate the conclusion" and "Roughly 75% of really interesting papers are missing at least one load-bearing method detail that reproducers must figure out on their own" (my own observations of the literature are consistent with these rates; I was always flabbergasted by people who just took Figure 3 as correct).


> Is incompetence fraud? Or just incompetence?

Fraud requires intent; it's a word that describes what happened, but also the motivations of the people involved. Incompetence doesn't assume any intent at all; it's merely a description of the (lack of) ability of the people involved.

Incompetent people can certainly commit fraud (perhaps to try to cover up their incompetence), but that's by no means required.

> ...insist they just made honest mistakes

If they're lying about that, it's fraud; they're either covering up their unrealized incompetence with fraud, or trying to cover up their intended fraud with protestations of mere incompetence. If they really did make honest mistakes, then it's just garden-variety incompetence. (Or just... mistakes. To me, incompetence is when someone consistently makes mistakes often. One-time or few-time mistakes are just things that happen to people, no matter how good they are at what they do.)


The legal phrase I like is "knew or should have known". If there is a situation where you should have known something was wrong, it's as bad as if you really knew it was wrong. To hold otherwise incentivizes willful blindness and plausible deniability.

People often use incompetence as an excuse for what were actually intentional bad decisions. Never attribute to malice that which is adequately explained by stupidity.

Maybe someone was incompetent but also knew they were cutting corners. Should they get a pass because they claim they didn't mean to do it? We should hold people accountable regardless of intent.


People should be held accountable for the impact of their decisions

There is no one hovering over scientists all the time, ready to stick a hot poker in them when they make a mistake or get careless. I was in academia, and my impression is there is a reluctance to double- and triple-check results to make sure they are right, as long as they match your instincts - whether from time pressure, laziness, bias, or just being human.

At least in my own mental model of publishing a paper (I've published only a few), I'd want my coauthors to stick hot pokers in me if I made a mistake or got careless. But then, my entire thesis was driven by a reproducible Makefile that downloaded the latest results from a supercomputer, re-ran the whole analysis, and wrote the LaTeX necessary (at least partly to avoid making trivial mistakes). It was clear everything I was doing was just getting in the way of publishing high-prestige papers.

All too easy to understand your situation. NIH is finally, if slowly, waking up and is imposing more "onerous" (aka: essential and correct) data management and sharing (DMS) requirements. Every grant applicant now submits a DMS plan following these guidelines:

https://grants.nih.gov/grants/guide/notice-files/NOT-OD-24-1...

Unfortunately, not all NIH institutes understand how to evaluate and moderate this key new policy. Oddly enough the peer reviewers do NOT have access to DMS plans as of this year.


Is this a process whereby the researcher is forced to submit the hypothesis (null, etc.) of the research ahead of the study and its findings?

> I'm asking because a fair number of the molecular biologists who get caught by Elizabeth Bik for copy/pasting images of gels insist they just made honest mistakes

You're talking about (almost certainly) fraudsters denying they committed fraud. The vast majority of non-replicable results have nothing to do with these types of errors, purposeful or not.


I remember criticism back from when this paper first came out: it went something like 'all this shows that using maths it is possible to construct a world where most published research findings are false.'

> Unfortunately, popular science books talk only about discoveries and results that are considered fact, and rarely about the history of how we got there.

The journey to discovery is often just as fascinating as the results themselves. The process (the false starts, debates, even the dead ends) can be incredibly instructive and inspiring.


It is also key to building the idea that we don't have truth; we have the best model we have so far found no reason to reject (and sometimes one that has been rejected as truth but is kept around because it is useful for some problems). I see too many people who, having taken some science classes, feel that they know what the 'truth' of the universe is, which locks down their ability to question and break current models in search of newer, even better ones.

Could one filter for the true research, by searching for "outliers" which are not outliars, aka data that does not fit the ruling narrative, but at the same time has no narrative of its own?

> But things are different in other fields.

Everyone claims it's different in their field.


Strangely, we don't in my field!

This paper, almost 20 years old, has attracted plenty of follow-up work showing that its claims aren't true.

One simple angle: Ioannidis simply made up parameter values to show things could be bad. Later empirical work measuring those parameters found him off by orders of magnitude.

One example https://arxiv.org/abs/1301.3718

There are ample other published papers showing other holes in the claims.

Google Scholar papers citing this one: https://scholar.google.com/scholar?cites=1568101778041879927...
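For reference, the dispute is over the inputs, not the arithmetic: the paper's positive predictive value formula is a one-liner, PPV = (1 - beta)R / ((1 - beta)R + alpha), where R is the pre-study odds that a probed relationship is true. A quick sketch (the example values of R, alpha, and beta here are mine, chosen for illustration):

    def ppv(R, alpha=0.05, beta=0.20):
        """Post-study probability that a 'significant' finding is true."""
        return (1 - beta) * R / ((1 - beta) * R + alpha)

    for R in (1.0, 0.25, 0.01):
        print(f"R = {R:<4}  PPV = {ppv(R):.2f}")
    # R = 1.0 -> 0.94; R = 0.25 -> 0.80; R = 0.01 -> 0.14

Whether most findings are false hinges almost entirely on what R really is in a given field, which is what the later empirical work tried to measure.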


re: the arxiv link

Why is it that microarray true positive p-values follow a beta distribution? Following the citations led to a lot of empirical confirmation but I couldn't find any discussion of why.

More to the point of this rebuttal, though: why would we expect the amalgamation of 70k micro-array experiments' abstract-reported p-values to follow a single beta distribution? And what about modeling the bias-induced bump of barely-significant results?

If there's some theoretical reason why the meta-study can use the beta-uniform model, then I could see this being only a mild underestimate of the proportion of false positives (14%), but otherwise I'm confused about how we can interpret this.
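For anyone who wants to poke at it, the beta-uniform mixture is easy to simulate. A toy sketch under made-up parameters (nulls give Uniform(0,1) p-values; true positives give Beta(a, 1) with a < 1, which piles mass near zero); none of this is taken from the rebuttal itself:

    import numpy as np

    rng = np.random.default_rng(0)
    null_frac, a, n = 0.4, 0.2, 100_000   # assumed null fraction, beta shape

    is_null = rng.random(n) < null_frac
    p = np.where(is_null, rng.random(n), rng.beta(a, 1.0, size=n))

    # The mixture density at p = 1 is null_frac + (1 - null_frac) * a,
    # an upper bound on the null fraction; estimate it from the right tail.
    bound = (p > 0.95).mean() / 0.05
    print(f"true null fraction {null_frac}, tail-based bound {bound:.2f}")  # ~0.52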


> In this framework, a research finding is less likely to be true [...] where there is greater flexibility in designs, definitions, outcomes, and analytical modes

It's worth noting, though, that in many research fields, teasing out the correct hypotheses and all the affecting factors is difficult. Sometimes it takes quite a few studies before the right definitions are even found - definitions that are a prerequisite for a useful hypothesis. Thus, one cannot ignore the usefulness of approximation in scientific experiments, not only to the truth, but to the right questions to ask.

Not saying that all biases are inherent in the study of sciences, but the paper cited seems to take it for granted that a lot of science is still groping around in the dark, and to expect well-defined studies every time is simply unreasonable.


This is only meaningful if "the replication crisis" is systematically addressed.

Related. Others?

Why most published research findings are false (2005) - https://news.ycombinator.com/item?id=37520930 - Sept 2023 (2 comments)

Why most published research findings are false (2005) - https://news.ycombinator.com/item?id=33265439 - Oct 2022 (80 comments)

Why Most Published Research Findings Are False (2005) - https://news.ycombinator.com/item?id=18106679 - Sept 2018 (40 comments)

Why Most Published Research Findings Are False - https://news.ycombinator.com/item?id=8340405 - Sept 2014 (2 comments)

Why Most Published Research Findings Are False - https://news.ycombinator.com/item?id=1825007 - Oct 2010 (40 comments)

Why Most Published Research Findings Are False (2005) - https://news.ycombinator.com/item?id=833879 - Sept 2009 (2 comments)


As clarification, the article linked in the subject is dated 2022 BUT it is actually just "a correction" of the very famous 2005 article. The correction is eentsy-weentsy - just a missing pair of parentheses if you click through:

    There is an error in Table 2. A set of parentheses is missing in the equation for Research Finding = Yes and True Relationship = No. Please see the correct Table 2 here.

As I've transitioned to more exploratory and researchy roles in my career, I have started to understand science fraudsters like Jan Hendrik Schön.

When you've spent an entire week working on a test or experiment that you know should work, at least if you give it enough time, but it isn't working for whatever reason, it can be extremely tempting to invent the numbers you think it should produce, especially if your employer is pressuring you for a result. Now, obviously, the reason we run these tests is precisely because we don't actually know what the results will be, but that's sometimes more obvious in hindsight.

Obviously it’s wrong, and I haven’t done it, but I would be lying if I said that the thought hadn’t crossed my mind.


> When you've spent an entire week working on a test or experiment that you know should work

I thought the whole point of doing experiments was to challenge what we "know" so we can refine our understanding?


Sure in la-la-land where science isn't conducted by humans.

In reality, scientists are highly motivated (i.e. biased) individuals like anyone else. Therefore science cannot be done effectively by individuals.

The system that derives truth from experiments - the actual scientific system - is the competitive dynamic between scientists who are trying to tarnish each others' legacies and bolster their own. The scientific method etc. primarily makes scientific claims scrutinizable in detail, but without scrutiny they are still highly liable to produce false information.


A bit of a nitpick, but...

> The system that derives truth from experiments - the actual scientific system...

Yes!

> ... is the competitive dynamic between scientists who are trying to tarnish each others' legacies and bolster their own.

Hm. To some degree, sure, that is one dynamic, but (a) this leads to/presupposes a truckload of perverse incentives, and (b) it is not inherent in the system if we rearrange incentives.


Do you have an idea for a better one? It is pretty darn close to natural selection, which, while ugly, does produce surprisingly good results in many domains.

Of course the implementation is far from perfect. For example, the interaction between impact factor and grant funding produces pressure toward ideological conformity and excessive analytical “creativity”. But the underlying principle of competitive scrutiny is probably a desirable one.


> Do you have an idea for a better one? It is pretty darn close to natural selection, which while ugly, does produce surprisingly good results in many domains.

Cooperation is also an extremely fit behavior in natural selection.


Not by itself it’s not. The “selection” part of natural selection is inherently competitive, even if some things cooperate as a competitive strategy. Obviously scientists can and do cooperate within the broader framework of competition.

How do you eliminate the personal incentive to have found a meaningful result? I don’t think that can be changed without redesigning the human psyche.

I think the desire to do something meaningful can easily exist outside of a "competitive dynamic", which was the thing that felt off for me.

> Sure in la-la-land where science isn't conducted by humans.

If someone has a large bag of money lying around, the plan is this:

There are lots of companies that will run material A through machine B for you. There are a lot of science machines. The idea is to put a lot of them into a large building and make a web page where one can order the processing of substances, in a kind of design-your-own Rube Goldberg machine.

It can start with all purchasable liquids and gases - mixing, drying, heating, freezing, distilling, etc. - and measure color, weight, volume, viscosity, nuclear resonance, microscope video, and so on. Have as much automation as possible; collect all the machines. A robot cocktail bar, basically.

Work your way up to assembling special contraptions, all ordered through the GUI.

Jim can have x samples of his special cement mixture mixed and strength tested. Jack can have his cold fusion cells assembled. Stanley can have his water powered combustion engine. Howard can have his motor powered by magnets. Veljko can have his gravity powered engine. Thomas can have his electrogravitics. Wilhelm can have his orgone energy.

or not... hah....

If any people are involved they should not know what they are working on.

It won't be cheap, but then you get a URL with your nice little test report, and opinions be damned.


I think that might end up with "Oops! All Smallpox."

I'll be the last one to say the idea doesn't come with some serious challenges. Someone, some day, will think it funny to try to blow up the place.

But if you want to go without human error/bias, there is nothing close to removing all the humans.

Things that are controversial, unbelievable, or unlikely may have big implications, and risking your career on them is usually not a good idea - for you.

Through automation one might drive the prices down enough to make the brute-force approach viable, but with somewhat intelligent machines one could also make educated guesses in volume.

You could auto suggest similar experiments while the researcher types their queries complete with prices.

The original question was: How can we do more research without increasing the number of scientists.


And yet, it is still the best we've got for producing highly reliable and correct information.

Personally, I think the “highly” in your statement is quite exaggerated. Humans can be convinced to produce bad science, for sure, and there are even journals set up by religious orgs that exist specifically to do just that.

But at the same time, science landed humans on the moon.


> But at the same time, science landed humans on the moon.

That was engineering. Closely linked to science, but not the same process of inquiry.


This is what troubles me about medical science. I've heard tons about fraud and unreproducible results, but new wonder drugs (that actually work!) are deployed every year.

Clinical trials in general are extremely, extremely above board. The level of scrutiny is extreme, and the stakes are unbelievably high for pharma companies and the individuals involved. There are better ways for an unscrupulous pharma co to gain an edge.

That said, wonder drugs are few and far between. The GLP-1s are at least a once-in-a-decade breakthrough, so that's probably most of the noise you're hearing (there are a lot of brand names already).


What about Vioxx?

> in general

No one is under the illusion it’s perfect or ungameable. A drug slipping by every few years is bad and often tragic, but IMO nowhere close to indicative of a systematic problem. It is a system that is worthy of a high degree of trust.


I'm unfamiliar with Vioxx and whether its approval really was a result of mistakes.

Shouldn't we expect some small percentage of failures in these processes given that they are driven by statistics and confidence intervals? Is that even a failure of the process, or is it a known limitation given how much resources and time we are willing to allocate to the discovery process?


> Shouldn't we expect some small percentage of failures

Yes, and this is really solved at a more local level. Doctors aren't prescribing new drugs like candy. They, too, are skeptical of their success and will reserve those prescriptions for the most desperate cases. Over years, we (and the doctors) learn how effective these drugs are and what potential side effects they have.


Yes we should expect some small number of failures and so far I agree, I don’t see evidence of a problem that needs fixing.

As patio11 says, the correct amount of fraud in a financial system is not zero, and the correct amount of false positives in drug approvals is not zero.


Vioxx was an unknown-unknown problem.

Cardiovascular safety was tested in the original trial. It passed. Nothing in the data during development suggested it was an issue. But trials can’t detect everything.

It wasn't until it got to market that a safety signal popped up. Then retrospective analyses of large data sets proved it.


You’re hearing loads about fraud because the anti-intellectual bots are here to make sure you hear about them all the time.

Republicans and Russian bots WANT you to hate science and academia and they have frequent pushes across social media platforms to make sure you do.


> Personally, I think the “highly” in your statement is quite exaggerated.

Except that the entire point of the article here is that it's not exaggerated.

> But at the same time, science landed humans on the moon.

Cherry-picking a highly successful, well-known example doesn't prove a point.


> Cherry-picking a highly successful, well-known example doesn't prove a point.

There must be hundreds, if not thousands, of successful scientific discoveries that went into something as complicated as the moon landing. And if you still don't think that's convincing, just look at the world around you - it looks radically different from the world of, say, a couple hundred years ago.


> Cherry-picking

As if our lifespans and quality of life haven't been drastically improved by modern medicine.

I mean, we can cut people open and replace entire parts of them and they're fine. They don't even get sick anymore - thanks germ theory and aseptic technique! Do you not understand how much of a marvel that is?

Before that, people used to get cuts and scratches and just... die. We can now fully rummage inside an arbitrary person's internal organs.

And don't even get me started on long-term illnesses. High blood pressure and cholesterol have been killing humans since forever, and we have medicine that just fixes that. And now we're getting medicine that rewires our brains to prevent addiction in the first place (semaglutide).


No, we have better systems now

Yeah. Like “just do your own research” man.

Tell me of a better method to get to the truth. Go on.


In theory, but it is extremely easy to get into the mindset that your hypothesis is absolutely true, and as such your goal is to prove that hypothesis.

I’ve never fabricated numbers for anything I’ve done, but there certainly have been times where I thought about it, usually after the fourth or fifth broken multi-hour test, especially if the test breakage doesn’t directly contradict the hypothesis.


Maybe it's different in other fields, but from my background in physics it seems like if your hypothesis is wrong that is usually way more interesting than it being right. As long as it isn't just because of some contamination in the data.

Although, contrary to what I was taught in elementary school, most of the experiments in the physics department of my university didn't even really have a hypothesis. They were usually either of the form "we are going to do this thing, and see what happens", or "we're going to measure this thing more accurately than anyone before".


Not just the mindset. Our social setting can deem that some hypothesis must be true and that any disagreement is blasphemy of the highest order. The 'softer' a science, the more beliefs like this exist. Sometimes you can even see scientists deeply studying something adjacent to one of these beliefs start to question it, and watch how delicately they have to dance around the issue until enough other scientists also question it that they have the safety in numbers to begin challenging the belief directly. A recurring example is the research around the labeling of certain behaviors as abnormal psychology, which eventually led to an update in the DSM.

Thanks for stating your point so clearly. I'm a bystander to this discussion, but I agree with you about the reality of this.

Setting up real experiments in a lab is super hard: is all the equipment properly calibrated, is the way I am measuring actually right, are my reference measurements correct, are my samples "clean"? So many things can go wrong that it is sometimes challenging even to replicate experiments that are 100% known to work. So it takes some discipline not to cheat in the sense of, e.g., cleaning up the data a bit too much.

There are externally motivated scientists who are in it for the prestige or awards. Some fields are more like this than others, but they show up in all fields.

Plus these days there's a lot of pressure to run universities more like businesses. To eat, academics have to hit certain numbers, so you see behaviors common in business like faking the KPIs.


That's a valid way to look at it, but Fisher (who all but invented hypothesis testing) took a different perspective. To him, most things we know, we know from informal experience. Only when trying to find small effects, or when we have insufficient experience, do we conduct experiments, which are effectively experience meticulously planned in advance.

A significant result in an experiment, according to Fisher, is just one more experience to add to the mental pros-and-cons list. It is not definitive proof of anything.


Because of that, backing up a claim with research adds weight to the claim.

If the claim is false, though, you can still sometimes get research to support it. If you or the researcher stands to profit from the false claim, then there is a conflict of interest.


I think that’s what the parent is acknowledging in the end of the second paragraph.

Well, that depends. What are you paying the guy to do?

Only a week? The stakes are higher, my friend. It's usually months at a minimum.

Rapid outcomes should not be a priority

I've been in a meeting with government research officials where a director of the primary global institution in that field described how, when she does research and writes papers, she first draws the graph she needs to support her research or the point she is trying to make, and then goes looking for data to create that graph.

Maybe I'm missing something, but I do not believe that is the way it is supposed to go. Btw, she has a PhD and has failed upward to a global scale.

I’ve been meaning to find out if there are any open tools to evaluate someone’s dissertation.

It was equal parts stunning and seemingly a bit traumatizing to me, considering I still remember it as if it had happened earlier today. I think what surprised me, too, was her open admission of it, even with external parties present.


So she establishes a hypothesis (draws a graph or picks a point to make) and then tests it through experimentation (looks for data to support the hypothesis)? Isn't that just the scientific method worded another way?

Wait until the GP hears about how scientists generate Monte Carlo (MC) simulation data to see what a positive result looks like, and then do meta-analysis on both the real data and the MC.

You should also understand that there are external forces here, like state sponsorships that monetarily reward scientists simply for filing enough research findings.

The startling rise in the publication of sham science papers has its roots in China, where young doctors and scientists seeking promotion were required to have published scientific papers. Shadow organisations – known as “paper mills” – began to supply fabricated work for publication in journals there. https://www.theguardian.com/science/2024/feb/03/the-situatio...

The number of retractions issued for research articles in 2023 has passed 10,000 — smashing annual records — as publishers struggle to clean up a slew of sham papers and peer-review fraud. Among large research-producing nations, Saudi Arabia, Pakistan, Russia and China have the highest retraction rates over the past two decades, a Nature analysis has found. https://www.nature.com/articles/d41586-023-03974-8

That's why a recent article https://news.ycombinator.com/item?id=41607430, where the claim that China leads the world in 57 of 64 critical technologies was based on journal citation counts, was laughable.


Talking with some Chinese colleagues in the past, they described having a 'base' salary which was not enough to raise a family on. For every published paper they'd get a one-time payment. So you'd have to get a bunch of papers out every year just to survive; no wonder people start to invent papers.

Of course the same thing is happening in the 'Western' world too, with a publication ratchet going on. New hire has 50 papers out? OK! The next pool of potential hires has 50, 55, and 52 papers out, so obviously you take the 55-paper person. You want outstanding people! Then the next hire needs 60 papers. And so on.


...an effect known as "wonkflation".

I think there are maybe two separate issues here.

Paper mills are bad but mostly from the perspective of academic institutions trying to verify people's credentials/resumes. Paper mills aren't really that much of a concern in the sense of published research results being false in the way the article is talking about because people aren't really reading the papers they publish. In that sense it doesn't really matter if there are places where non-scientists need to get one paper published to check some box to get a promotion, because nobody is really considering those papers part of established scientific knowledge.

On the other hand, scientists publishing bogus results, intentionally (by actually falsifying data) or unintentionally (as a result of statistical effects of what is researched and what is published), in legitimate journals that aren't paper mills causes real harm, because people believe the bogus results. And unfortunately the pressures that cause that (publishing papers quickly, getting publishable results, etc.) exist everywhere, definitely not just in China, nor did they originate there.


[flagged]


Since when is this kind of blatant racism acceptable on this site? “Gutter oil”? Wtf is wrong with you?

still happening in China in 2024

Foreigner caught a Chinese couple scooping up gutter oil https://www.reddit.com/r/interestingasfuck/comments/1eo2wmy/...


Restaurants in China are legally required to use oil traps like that, and the oil must be removed. It is usually reprocessed for industrial purposes. The fact that those people were possibly illegally collecting it to sell to a company that reprocesses it does not at all mean it's going to be used as "gutter oil" in restaurants, any more than someone collecting empty cans from a trashcan means they're going to reuse those cans in a restaurant.

Gutter oil used to be a major issue in China but the Chinese government cracked down on it a lot a few years ago.

I recommend watching this video about it: https://www.youtube.com/watch?v=G43wJ7YyWzM


There are 1.4 billion people in China. You’re showing me a couple of people doing who knows what in a clip of unknowable provenance. This is not the hill to die on, my man

Obviously there are way more occurrences than this one video. Also, the lady in the video acted like nothing was wrong and showed no shame, which suggests there is a culture/common practice of using gutter oil.

Wikipedia says that today this carries the penalty of decades in prison and a suspended death sentence. I very much doubt it’s as prevalent a practice as you suggest. To suggest that this crime is a “normal part” of Chinese culture is simply wrong.

There was no penalty/death sentence in the recent public incident of the oil tank truck found transporting both toxic industrial oil and cooking oil without cleaning in between - which apparently was a widespread practice, as confirmed by netizens. Instead the officials just hand-waved, said it was an isolated incident, and that they were looking into it. And no news of it since.

They said "cultural", you decided to insert "race", presumably to stoke more outrage.

This is what happens when Silicon Valley execs, trying to make their employees more replaceable, call for more STEM education; suddenly, tons of funding and institutional resources go into STEM research with no real reason or motivation or material for this research. It's like a gerbil wheel: once you get on the ride, once you get tricked into becoming a "scientist" just because a few billionaires wanted to cut slightly thicker margins, there's no stopping. Bullshit your way through undergraduate education, bullshit your way through a PhD; finally, if you're good enough at making up statistics, you get a job training a whole host of other bullshitters to ride the gravy train.

> tons of funding and institutional resources go into STEM research with no real reason or motivation or material for this research.

I do believe that there exists an insane number of (STEM) questions for which there are very good reasons to do research - many, many more than are currently pursued.

---

And by the way:

> This is what happens when Silicon Valley execs, trying to make their employees more replaceable, call for more STEM education

More STEM education does not make the employees more replaceable. The reason why the Silicon Valley execs call for more STEM education is rather that

- they want to save money training the employees,

- they want to save money doing research (let rather the taxpayer pay for the research).


repeating what user u/randomdata said already,

> - they want to save money training the employees,

> - they want to save money doing research (let rather the taxpayer pay for the research).

means they want to offload costs to the public in order to increase profits, which is what I said above.


Offloading costs is a different thing than making employees more replaceable.

Employees are more expensive because they are less replaceable. A company must invest a certain amount of money into labor to make a profit; however, if that company learns it can invest less money into endeavours to make the same profit, then it can decrease the amount invested into labor. The only way to do so is to create some sort of technology, or social relation, that makes the price of individual workers cheaper. Thus, any reduction of cost of labor that increases profit is something that makes employees more replaceable.

> - they want to save money training the employees,

So what you're saying is that they push for STEM education to make their employees more replaceable...?


> So what you're saying is that they push for STEM education to make their employees more replaceable...?

A general rule of thumb is rather that better education and/or specialized knowledge makes employees more productive, but also less replaceable.


>less replaceable

Only when they are the only ones that have that knowledge, not when teaching it becomes rote.


A decent rule if considered in a vacuum, but perhaps you missed some necessary context related to this particular discussion?

> - they want to save money training the employees,


Something that continues to puzzle me: how do molecular biologists manage to come up with such mindbogglingly complex diagrams of metabolic pathways in the midst of a replication crisis? Is our understanding of biology just a giant house of cards or is there something about the topic that allows for more robust investigation?

This kind of report always raises the question for me of what the existing system's goals are. I think people assume that "new, reliable knowledge" is among the goals, but I don't see that the incentives align toward that goal, so I don't know that that's actually among them.

Does the world really want/need such a system? (The answer seems obvious to me, but not above question.) If so, how could it be designed? What incentives would it need? What conflicting interests would need to be disincentivized?

I think it's been pretty evident for a long time that the "peer-reviewed publications system" doesn't produce the results people think it should. I just don't hear anybody really thinking through the systems involved to try to invent one that would.


My favorite contemporary physicist is doing a mundane job at NASA and does all the interesting theoretical research as a side project. I think this should be the default.

This published research is false.

All published research will turn out to be false.

The problem is ill-posed: can we establish once and for all that something is true? For almost all of history we've had this ambition, yet every day we find that something we believed to be true wasn't. The data isn't encouraging.


Maybe the real truth, was the friends we made along the way.

One study tried to replicate 100 psychology studies and only 36% attained significance.

https://osf.io/ezcuj/wiki/home/


Please note the PubPeer comments discussing how follow-up research appears to show that about 15% is wrong, not the 5% anticipated.

https://pubpeer.com/publications/14B6D332F814462D2673B6E9EF9...


I've implemented several things from computer science papers in my career now, mostly related to database stuff. They are mostly terribly wrong, or show the exact OPPOSITE of what they claim in the paper. It's so frustrating. Occasionally they even offer the code used to write the paper, and it is missing entire features they claim are integral for it to function properly, to the point that I wonder how they even came up with the results they published.

My favorite example was a huge paper that was almost entirely mathematics-based. It wasn't until you implemented everything that you would realize it just didn't even make any sense. Then, when you read between the lines, you even saw their acknowledgement of that fact in the conclusion. Clever dude.

Anyway, I have very little faith in academic papers, at least when it comes to computer science. Of all the things out there, it is just code. It isn't hard to write and verify what you purport (it usually takes less than a week to write the code), so I have no idea what the peer reviewers actually do. As a peer in the industry, I would have rejected so many papers by this point.

And don't even get me started on emailing the (now professor) authors questions, to see if I just implemented it wrong or whatever, and never getting a fucking reply.


This is a common failure mode when people outside academic CS read CS papers. They take the papers too literally.

Computer science studies computation as an abstract concept. The work may be motivated by what happens in the industry, but it's not supposed to produce anything immediately applicable. Papers may include fake justifications and fake applications, because populist politicians decided long ago that all publicly funded research must have practical real-world applications. But you should not take them at face value.

Academic CS values abstract results over concrete results, because real-world systems change too rapidly. Real-world results tend to become obsolete too quickly to be relevant on the time scales academia is supposed to operate on.

If you are not in academic CS, you should be careful when reading the papers that you understand the context. Most of the time, you are not in the target audience. Even when there is something relevant in the paper, it's probably not the main result, but an idea related to it. And if you start investigating where that idea came from, it probably builds on many earlier results that seemed obscure and practically irrelevant on their own.

Peer reviewers usually spend a few hours on a single review (though there is a lot of variation between fields). A week would be so expensive that most established academics would have to stop teaching and doing research and become full-time reviewers.


> Academic CS values abstract results over concrete results, because real-world systems change too rapidly. Real-world results tend to become obsolete too quickly to be relevant in the time scales the academia is supposed to operate.

This isn't true. When I'm implementing a paper, I usually go for JUST implementing what they describe, usually by hand. Like if it is a new SQL syntax, I will write a custom recursive descent parser, and hand-roll the query planner, for just the new stuff and hard code some other parts, just as a demonstration. I'm not interested in the industry application part, I'm interested in replicating their work. Once I can replicate it, assuming it is correct, then I will factor the work into a production system.

It's this first part that I am frustrated with, not the full implementation in production software.


I'm sorry, what? The best computer scientists I know of - Dijkstra, Tony Hoare, Turing, Knuth, and more - all developed practical algorithms and concepts that are still used today! Like the parent commenter said, it's really not difficult to provide code that works and supports your conclusion. I've also read computer science papers that bury the lede. It's apparent when you try to replicate them that the author was disappointed the work didn't support their hypothesis, so they made it look good. It's sad, because a true scientific pursuit finds the knowledge valuable whether it works or not! They just need to state that fact instead of hiding it.

Your whole comment reads as some sort of weird gate-keeping where people without the "proper" education could never fully understand a "true" paper. We can, and we do. There's a reason we learn in college from truly great computer scientists like those I listed above, and not from the people posting unreproducible work.


For papers with code, I have seen a tendency to consider the code, not the paper, to be the ground truth. If the code works, then it doesn't matter what the paper says; the information is there.

If the code doesn't work, it seems like a red flag.

It's not an advantage that can be applied to biology or physics, but at least computer science catches a break here.


> For papers with code, I have seen a tendency to consider the code

My favorite trick for those is to search the code for "todo" and look up that part of the paper. Usually these are the most complicated parts of the paper. Not always, though; sometimes they are just trivial things.


It is also frustrating when a paper's summary says one thing, and you pay for the full text only to find it's the complete opposite of the claims. Waste of time and money, bleh!

Sci-Hub. Use Sci-Hub. What use is it to pay the publishing company for the paper? The researchers who did the work won't see any of that money.

If it is as bad as you claim, it would be interesting if you could back this up with a falsification report for the papers in question.

I am working on implementing a paper right now (surprise, surprise, it has issues). Maybe I will do blog posts with a peer review of the paper, and do that for all future papers as well.

Wow, sounds awful. Help the rest of us out - what was the huge paper that didn't work or was actively misleading?

I'd rather not, for obvious reasons. The less obvious reason is that I don't remember the title/author of the paper. It was back in 2016/17, though, when I was working on a temporal database project at work and searching the literature for temporal query syntax.

If most Published Research Findings are false, doesn't this mean this article is likely to be false as well? :)

i wonder if science could benefit from publishing using pseudonyms the way software has. if it's any good, people will use it, the reputations will be made by the quality of contributions alone, it makes fraud expensive and mostly not worth it, etc.

People have uses for conclusions that sometimes don't have anything to do with their validity.

So while "if it's any good, people will use it" is true and quality contributions will be useful, the converse is not true: the use or reach of published work may be only tenuously connected to whether it's good.

Reputation signals like credentials and authority have their limits/noise, but bring some extra signal to the situation.


what's missing from this paper is a probability using its own model that it too is false. counter to the headline, it implies that by its own probable falsehood, most published research is in fact true.

I admit to missing the joke in the first reading.

pseudonyms may prevent the abuse of invalid papers by removing the ability of the authors to front institutional reputations for partisan claims.

the movement for science and data to drive policy outside of their domains sounds nice until you find that the science and data are irreproducible, and the institutions have become laundering vehicles for debased opinions that wash the hands of policymakers. as though the potential for abuse has become the value.

maybe it's a rarefied kind of funny, but the kernel of truth it reveals is that it could be time to start using pseudonyms in some disciplines to make the axis of policymakers and academics more honest.


How broad a range is this result supposed to cover? It seems to be mostly applicable to areas where data is too close to the noise threshold. Some phenomena are like that, and some are not.

"If your experiment needs statistics, you ought to have done a better experiment" - Rutherford


Arguing why this paper is false is ironic, in that it agrees with the paper's point.

It has been said that "Publish or Perish" would make a good tombstone epitaph for a lot of modern science.

I speak to a lot of people in various science fields, and generally they are some of the heaviest drinkers I know, simply because of the system they have been forced into. They want to do good but are railroaded into this nonsense for fear of losing their livelihood.

Like those who are trying to progress our treatment of mental health but have ended up almost exclusively in the biochemical space, because that is where the money is, even though it is not the only path. It is a real shame.

Other heavy drinkers are the ecologists and climatologists, for good reason. They can see the road ahead and it is bleak. They hope they are wrong.


(2005). I wonder what's changed?

Over on PubPeer there has been some discussion of studies on the topic.

https://pubpeer.com/publications/14B6D332F814462D2673B6E9EF9...


From my experience, my main criticism of research in the field of computer vision is that most of it is 'meh'. In a university that focused on security research, I saw mountains of research into detection/recognition, yet most of it offered no more than slightly different ways of doing the same old thing.

I also saw: a head of a design school insisting that they and their spouse be credited on all student and staff movies; the same person insisting that massive amounts of school cash be spent promoting their solo exhibition, which no one other than students attended; a chair of research who insisted they be given an authorship role on all published output in the school; labs being instituted and teaching hires brought in to support a senior admin's research interests (despite them not having any published output in this area); research ideas stolen from undergrad students and given to PhD students... I could go on all day.

If anyone is interested in how things got like this, you might start with Margaret Thatcher. She was the first to insist that funding of universities be tied to research. Given the state of British research in those days it was a reasonable decision, but it produced a climate where quantity is valued over quality and true 'impact'.


I only read the abstract: “Simulations show that for most study designs and settings, it is more likely for a research claim to be false than true.”

True vs false seems like a very crude metric, no?

Perhaps this paper’s research claim is also false.
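To be fair to the paper, "false" there is shorthand for a simple Bayesian model. If R is the pre-study odds that a tested relationship is real, alpha the significance threshold, and 1 - beta the power, the paper's positive predictive value for a claimed finding (ignoring bias) is PPV = (1 - beta)R / (R - beta*R + alpha). A quick sketch, with illustrative numbers of my own choosing rather than the paper's tables:

    def ppv(R, alpha=0.05, beta=0.20):
        """Probability a claimed positive finding is true (no-bias case)."""
        return (1 - beta) * R / (R - beta * R + alpha)

    # PPV > 0.5 requires R > alpha / (1 - beta) = 0.0625 at these settings,
    # so fields testing long-shot hypotheses produce mostly false positives.
    for R in (1.0, 0.25, 0.0625, 0.01):
        print(f"R={R:<6} -> PPV={ppv(R):.2f}")
    # R=1.0    -> PPV=0.94
    # R=0.25   -> PPV=0.80
    # R=0.0625 -> PPV=0.50
    # R=0.01   -> PPV=0.14

So the headline is less "any given study is wrong" and more "when priors are low, most positive claims don't survive".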


So whenever someone gives me a detailed argument with cited sources, I can show them this and render truth an unattainable objective.

Have LLMs cross check papers and point out experiments to be repeated.

LLMs would not be very useful in this instance, since truly novel (and correct) findings would not have formed part of their training datasets.

... including the junk pushed by Ioannidis. He completely trashed his credibility during COVID.

By being less wrong than almost everyone else. Since everyone else was wrong together they shunned him (as science dictates), and now agree to not talk about how wrong they were.

He used his reputation and statistical expertise to mislead the world as to the true prevalence (COVID infection rate) and supported the fantasies of Bhattacharya, Kulldorff and Gupta. It is hard to estimate what effect his misinformation had on COVID control measures, and there was no shortage of attention seeking clowns, but he stepped up to the plate and he can take credit for some of the millions of deaths. It was scientific misconduct but his position shields him from consequences.

It’s a matter of incentives. Everyone who wants a PhD has to publish and before that they need to produce findings that align with the values of their professors. These bad incentives combined with rampant statistical errors lead to bad findings. We need to stop putting “studies” on a pedestal.

I think unpopular to mention here but John Ioannidis did a really weird turn in his career and published some atrociously non-rigorous Covid research that falls squarely in the cross-hairs of "why...research findings are false".

This only applies to the life sciences and social sciences, right? Or are most papers in computer science or mechanical engineering also false?

It's very bad in CS as well. See e.g.: https://arxiv.org/abs/1807.03341

IIRC there was also a paper analyzing how often results in some NLP conference held up when a different random seed or hyperparameters were used. It was quite depressing.


It's mostly in medicine and psychology.

In topics where there is less reliance on relatively small numbers of cases (as is typical for medicine), there is also less reliance on marginal, but statistically "significant", findings.

So areas such as biochemistry, chemistry, even some animal studies, are less susceptible to over-interpretation or massaging of data.


Oh the irony

2022

Yeah, when you try new things, you often get them wrong.

Why do we expect most published results to be true?


Because people use published results to justify all sorts of government policy, business activity, social programs, and such.

If we cannot trust that results of research are true, then how can we justify using them to make any kind of decisions in society?

"Believe the science", "Trust the experts" etc sort of falls flat if this stuff is all based on shaky research


> If we cannot trust that results of research are true, then how can we justify using them to make any kind of decisions in society?

Well, don't.

Make your decisions based on replicated results. Stop hyping single studies.


I agree with this, but it is very harsh on a person to doubt too much; you gotta believe something. So there doesn't seem to be a real solution for this kind of thing.

> Stop hyping single studies.

This right here, really. The reason people go "oh well, science changes every week" is that the media writes this headline: "<Thing> shown to do <effect> in brand new study!" and then includes a bunch of text implying it works great... and one or two sentences, out of context, from the lead researcher behind it saying "yes, I think this is a very interesting result".

They omit all the actually important details, like sample sizes, demographics, the history of the field, and where the result sits within it.


After decades upon decades of teaching Western society to “Trust The Science”—where “Science” means “published academic research papers”—you can't unteach society from thinking this way with a simple four-word appeal to logic.

The damage has already long since been done. It's great that people are starting to realize the mistake, but it's going to take a lot more work than just saying “stop hyping single studies” in this comments thread to radically alter the status quo.

I once knew a guy who ended his friendship of many years with me over an argument about “safe drug use sites”, or whatever they're called—those places where drug addicts can go to “safely” do drugs with medical staff nearby in case they inadvertently overdose. Dude was of the belief that these initiatives were unequivocally good, and that any common-sense thinking along the lines of, “hey, isn't that only going to encourage further self-destructive behavior in vulnerable members of the populace?” could be countered by pointing to a handful of studies that supposedly showed that these “safe shoot-up sites” had been Proven To Be Unequivocally Good, Actually.

I took a look at one of these published academic research “studies”—said research was conducted by finding local drug dealers and asking them, before and after a “safe shoot-up site” was constructed, how their business was doing. The answer they got was, “more or less the same”—so the paper concluded (by means of a rather remarkable extrapolation, if I do say so myself) that these “safe shoot-up sites” were Provably Objectively Good For Society.

After pointing this out to my friend of many years, he informed me that I had apparently become some flavor of far-right Nazi or whatever, and blocked me on all social media platforms, never speaking to me again.

You're not going to get people like him to see reason by just saying “stop hyping single studies” and calling it a day. Our entire culture revolves around placing a rather unreasonable amount of completely blind faith in the veracity of published academic research findings.


I was intrigued and took a quick look at the top studies on this subject, and the metrics used are things like relative overdose deaths in an area, crime statistics, and usage of treatment programs. They say that, on a number of epidemiological metrics, safe consumption sites appear to be associated with harm reduction in terms of overdoses while not increasing crime stats. I don’t see outsized claims of objective truth being made; more of the standard “here’s how we got the numbers, here’s the numbers, they appear to point in this direction.”

I’m not doubting your claim, but I’m wondering how the very weird paper you’re citing bubbles up to the top when there are some very middle-of-the-road meta-analyses that don’t make outsized claims of access to objective truth.


It's not that the paper itself made the claim of having access to objective Truth, it's that papers like these make conclusions, and these conclusions get taken in aggregate to advance various agendas, and the whole premise is treated (in aggregate) as being functionally identical to building a rocket based on conclusions reached by mathematics and physics research papers—because both situations involve making decisions based upon “scientific research”, so in both situations you can justify your actions by pointing to “Science”.

Your response to "stop hyping single studies" is... a single anecdote.

So what do you suggest?

Philosophy has all sorts of different ways to study this complex, multifaceted problem. Too bad it got kicked to the curb by science and is now mostly laughed at.

As ye sow, so shall ye reap, IRL maybe.


No idea—all I know how to do is recognize patterns and program computers.

But admitting to the existence of a problem is the first step toward fixing it, and, judging by the downvotes on various comments on this story here, we still have a ways to go before the existence of the problem is commonly-accepted.


You are treading into one of those areas that seem to replicate very well.

The difficulty or risk of using drugs does not appear to be a bottleneck on the amount people use. This probably does not hold all over the world, but I'm not aware of anybody actually finding an exception.


> people use published results to justify all sorts of government policy, business activity, social programs, and such.

That would be a reason to expect those results to be false, not a reason to expect them to be true.


If government used science to back up policy, we would most definitely not be having a huge portion of the problems we currently have.

because people believe that peer review improves things, but in fact it doesn't really. it's more of a stamping process

Yes, it's a common misconception: many people think that peer review involves some sort of verification or replication, which is not true.

I would blame mainstream media in part for this and how they report on research without emphasizing this nature. Mainstream media is also not interested in reporting on progress, but likes catchy headlines/findings.


So is this paper false too? .. infinite recursion...

Most probably.

Most? Really?

Imagine if tech billionaires, instead of building dickships and buying single-family homes, decided to truly invest in humanity by realigning incentives in science.

Damn people are getting pretty good at manifesting these days

Check out ResearchHub[1]; it's a company founded by a tech billionaire that's trying to realign incentives in science.

[1] - https://www.researchhub.com/


Heh, thanks.

On a livestream the other day, Stephen Wolfram said he stopped publishing through academic journals in the 1980s because he found it far more efficient to just put stuff online. (And his blog is incredible: https://writings.stephenwolfram.com/all-by-date/)

A genius who figured out that academic publishing had gone to shit decades ahead of everyone else.

P.S. We built the future of academic publishing, and it's an order of magnitude better than anything else out there.


He created his own peer reviewed academic journal and founded a corporation to publish it: https://en.wikipedia.org/wiki/Complex_Systems_(journal) That's a little different than just putting stuff online.

Oh wow, that's amazing. I missed that.

This is incredible: https://www.complex-systems.com/archives/

"Submissions for Complex Systems journal may be made by webform or email. There are no publication charges. Papers submitted to Complex Systems should present results in a manner accessible to a wide readership."

So well done. Bravo.


But it's not a reputable journal at all. An impact factor of 1.2 makes it close to useless.

Genius? The one who came up with A New Kind of Science?

Do you think judging someone by your least favorite work of theirs is a good strategy?

Do you also say, "Newton a genius? The one who tried to turn lead into gold?"


He still parades that around as his magnum opus in his latest blog post:

https://writings.stephenwolfram.com/2024/08/five-most-produc...

Yeah, I think it's fair to judge him by it.


Fair enough.

Like all of his work, I thought it was an incredible book, if you just randomly sample 10% of it. I never understood why he doesn't cut more, as he has genius ideas that get really watered down with lots of less relevant details.

I would love if he started doing 1 page tldr's for all of his works.


If we were all Stephen Wolfram, perhaps that would be possible. But very few academics have either the notoriety or the funds to self-publish and ensure their work isn't stolen in their highly competitive industry.

There is a lot of academic work that is very obscure and only becomes important later, sometimes decades later, maybe even centuries, to someone else doing equally obscure work. But it always goes somewhere, and the goal is not to "move fast and break things" but to create bodies of scholarship that last far beyond any specific capitalist industry or company.


> ensure their work isn't stolen in their highly competitive industry.

If you published your work online backed by git with hashes with a free public service like GitHub, how could someone steal it?

If you are an academic and don't know git, why can't you pick up "Version Control with Git" from your library, or buy a used copy for $5, and spend a couple of days learning it?
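For what it's worth, a commit hash is just a cryptographic fingerprint over your content plus metadata, so a pushed commit pins both what you wrote and roughly when you wrote it. A sketch of the underlying idea in Python ("draft.pdf" is a hypothetical filename):

    import hashlib

    def fingerprint(path):
        """SHA-256 of a file; any later tampering changes the digest."""
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(8192), b""):
                h.update(chunk)
        return h.hexdigest()

    # Publish this digest anywhere public (a git commit does it for you)
    # and you have evidence of priority over the exact bytes.
    print(fingerprint("draft.pdf"))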

> the goal is not to "move fast and break things,"

Who said that was the goal?

Why would you want to remain wrong longer?

If you want to move slower, why not take slower walks in the woods versus adding unnecessary bureaucracy?


>Why would you want to remain wrong longer?

It's not about "remaining wrong longer"; academics don't care that much about being right. Opinions on works change through the years, and it's hard to keep track of who did what if nobody can make proper attributions.

>If you published your work online backed by git with hashes with a free public service like GitHub, how could someone steal it?

I'll tell you something, because you have very much outed yourself as a dweeb with this comment: there are physical libraries in the world that are over a thousand years old. GitHub is 16 years old. I would much rather have my work stored in a physical library.


[flagged]


In 2022 there were ~57k PhDs awarded in the USA [0].

In the same year, there were ~500k immigrant visas [1].

If every single PhD were for immigration, it would still only be ~10% of the total. And even if the thesis was "forced", just getting to PhD level means you're very much in the top echelons of education, and even then it takes multiple years. As someone who immigrated with "only" a BSc, I suspect they'd be able to use one of the many other paths instead with less effort.

I'm not sure the logic holds.

[0] https://www.forbes.com/sites/michaeltnietzel/2024/02/05/numb...

[1] https://www.bal.com/perspectives/bal-news/united-states-us-v...


That’s a lot of mental hoops you just leaped through. Are you saying you think immigration policy is the main contributing cause of poor research quality? Because that is a wild claim without evidence. Also doesn’t really make sense, immigration policy varies from state to state and there isn’t just one single country producing research.

Some huge percentage of STEM graduate school is just for immigration

These people are not seeking "truth"


Aren't they required to have previous schooling in their home country, pass an aptitude test against others, and maintain good grades to stay in the program in order to stay in the country?

I don't see how you can avoid actually doing the education part here. And I'm sure lots see immigration + better education as a win/win, which I don't see a problem with.

Where are you getting the huge percentage from? Do you have sources for your claim? Even news articles?


The numbers I find say about 20% of grad students in the United States are international students. I am not sure that is a huge percentage?

What does that statistic look like for other nations?

>These people are not seeking "truth"

Careful with those absolutes, there. And the whole "these people" thing, too, probably.



Evidence?

Especially given the context.


What kind of evidence would sway your belief on this matter?

Perhaps a published academic research paper on the topic?


This must be a satire piece.

It talks about things like power, reproducibility, etc., which is fine. There are a minority of papers with mathematical errors. What it fails to examine is what "false" means. Their results may be valid for what they studied. Future studies may have new and different findings. You may have studies that seem to conflict with each other due to differences in definitions (e.g., what constitutes a "child": 12yo or 24yo?) or the nuance in perspective applied to the policies they are investigating (e.g., aggregate vs. adjusted gender wage gap).

It's about how you use them - "Research suggests..." or "We recommend further studies of larger size", etc. It's a tautology that if you misapply them they will be false a majority of the time.


It's not satire. Ioannidis has a long history of pointing out flaws in scientific processes.

(Edit: spelling.)


What part about the genuine statistical arguments made in the article would make you believe it is satire?

I've found the reaction to this article can be pretty intense. We read this in a journal club many years ago and one of the mathematicians who was kind of new to the idea that research papers (in other fields) didn't more or less represent 'truth' said this article was _dangerous_.


It's ironic that the paper has a correction. The title and abstract sound highly editorial, especially given that "false" is never defined. They talk about pre-study odds being an enhancement, but I didn't see them include those in their own paper. The fact that a study is small or the impact is minor doesn't make the results false, especially when these limitations are called out and further research is requested. You could even have a case study with n=1 be valid if the conclusion is properly defined. The main problem is people generalizing from things that don't have that level of support.

This is a classic and important paper in the field of metascience. There are other great papers predating this one, but this one is widely known.

Unfortunately the author John Ioannidis turned out to be a Covid conspiracy theorist, which has significantly affected his reputation as an impartial seeker of truth in publication.


Ha, how meta is this comment? Because the obvious inference one makes from the title is "Why Most Published Research Findings on Covid Are False", and that goes against the science politics. If only he had avoided the topic of Covid entirely, he would be well regarded.

It is pretty meta I guess.

> Why Most Published Research Findings on Covid Are False

Well, that's why there was so much focus on replication, multiple data sources and meta-analyses. The focus was there because the assumption is each study is flawed and those tools help extract better signal from the noise of individual studies.

> and that goes against the science politics

I don't think I follow you here. Are you referring to the anti-science populism? That's really the only science politics I'm aware of now that creationism and climate skepticism have been firmly put to rest.

> If only he had avoided the topic of Covid entirely, then he would be well regarded.

I think it's more that his predictions were bad and poorly reasoned, and that he chose to defend them in right-wing media outlets instead of making his case among scientists.

He's not the first well-regarded scientist to go off on a politically-fueled side quest later in his career. Kary Mullis is a famous example.


Ha ha, I suppose as long as he goes on left-wing media outlets and makes bad and poorly reasoned predictions that follow left-wing politics, he would be well regarded.

And please, don't pretend there is no left-wing-aligned science politics with as much basis in science as flat-earthism. I assume you haven't been hibernating during the covid times. All the doctors who did exactly that are doing fine with regard to their reputations.


I think you're fighting a culture war I'm out of the loop on.

Calling Ioanidis a "covid conspiracy theorist" is carrying the flag at the head of the culture war. Playing dumb doesn't make you look above the discussion, it makes you look dishonest.

I am a science dude. I read mostly science and talk to other science people. That's how I got my covid info. I wasn't on social media until recently. I have no idea what fringe political groups were into during the covid era. I also have no idea what flat earthers have anything to do with it.

Your comment reminded me to listen to 2112 from Rush. Thanks.

Can you point to his statements that were conspiracy theory?

I know about Barrington and many of his other claims, but I don't recall him actually saying anything that I would classify as conspiracy theory. Certainly in my world, a credentialed epidemiologist questioning the accuracy of government statistics during a world health crisis, and suggesting that perhaps our strategy could be different, is not conspiracy theory.


He published an estimate of SARS-CoV-2 antibody seroprevalence in Santa Clara county, claiming a signal from a positivity rate that was within the 95% CI for the false-positive rate for the test. Recruitment was also highly non-random.

https://statmodeling.stat.columbia.edu/2020/04/19/fatal-flaw...

Such careless use of statistics is hardly uncommon; but it's funny to see that he succumbed too, perhaps blinded by the same factors he identifies in this paper.

Beyond that, he sometimes advocated for a less restrictive response on the basis of predictions (of deaths, infections, etc.) that turned out to be incorrect. I don't think that's a conspiracy theory, though. Are the scientists who advocated for school closures now "conspiracy theorists" too, because they failed to predict the learning loss and social harm we now observe in those children? Any pandemic response comes with immense harms, which are near-impossible to predict or even articulate fully, let alone trade off in an unquestionably optimal way.
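To make the seroprevalence objection concrete, with round illustrative numbers rather than the study's exact figures: when raw positivity is about the size of the test's false-positive rate, the standard Rogan-Gladen correction for test error swings from a meaningful estimate to exactly zero across a plausible range of specificities.

    def corrected_prevalence(raw_rate, sensitivity, specificity):
        """Rogan-Gladen estimator: back out true prevalence from raw positivity."""
        est = (raw_rate - (1 - specificity)) / (sensitivity + specificity - 1)
        return max(est, 0.0)  # clamp: negative estimates mean no detectable signal

    raw = 0.015  # ~1.5% of samples test positive (illustrative)
    for spec in (0.998, 0.995, 0.990, 0.985):
        p = corrected_prevalence(raw, sensitivity=0.80, specificity=spec)
        print(f"specificity={spec:.3f} -> implied prevalence={p:.2%}")
    # 1.63%, 1.26%, 0.63%, 0.00% -- the answer hinges on the test, not the sample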


During covid, people got so hyped up about trusting authorities, they threw science out the window. Well I guess they never understood science in the first place but wanted to shame anyone who disagreed with or even questioned whatever arbitrary ideas their government proposed. It was disgusting, and those people are still walking around among us ready to damage society next time some emergency happens.

> Certainly in my world, a credentialled epidemiologist questioning the accuracy of government statistics during a world health crisis, and suggestion that perhaps our strategy could be different, is not conspiracy theory.

I fully agree. (Well, with some caveats. I think credentials matters less than facts. And I think epidemiology is still in its infancy, so I personally don't put much faith in any single epidemiologist.)

Maybe conspiracy theorist is the wrong term. What he did was show a very political concern with public policy (especially IIUC his opposition to lockdowns) and very little concern about the quality of his research or the people it affected.

This article seems pretty decent at containing details: https://www.buzzfeednews.com/article/stephaniemlee/ioannidis...

You mention Barrington, from the Wikipedia article https://en.wikipedia.org/wiki/Great_Barrington_Declaration

> The World Health Organization (WHO) and numerous academic and public-health bodies stated that the strategy would be dangerous and lacked a sound scientific basis.

So I guess maybe less "conspiracy theory" and more "recklessly dangerous" or "abandonment of the Hippocratic oath".


I don't think anything he did or said was recklessly dangerous. In fact, I think he believes he was acting in the US's best interest. I would be curious what the outcome would have been if we had followed his approaches (which evolved during the course of the epidemic). I think he would have been much more successful if he had worked the back channels and never been so public on Twitter.

I saw a lot of "epidemiological immune system" activity during COVID: if you didn't toe a specific line, the larger community would attack you, right or wrong. My guess is that this mainly comes from historical experience with vaccines and large-scale disease outbreaks, where having a simple, consistent message that did not freak out the population is considered more important than being absolutely technically correct.


We essentially did follow his policy and we have a sense of the impact.

The Lancet estimated that about 40% of US covid deaths could have been avoided if the administration had better policies. That's a bit over 400,000 deaths. That's about the same number of Americans lost during WWII.

Not all of that can be directly attributed to John Ioannidis's advocacy against lockdowns, but it at least gives us a sense of how big a blunder it was.


>Unfortunately the author John Ioannidis turned out to be a Covid conspiracy theorist, which has significantly affected his reputation as an impartial seeker of truth in publication.

Ad hominem attacks against ideas can safely be ignored.


Ioannidis published a journal article with what many considered an ad hominem attack against a graduate student. He later withdrew that portion of the paper, (somewhat) in his defense.

(1) I don't think you can have an ad hominem against an idea?

(2) I'm not opposed to any ideas in this paper. I think the paper stands on its own merits.


>(1) I don't think you can have an ad hominem against an idea?

You've tried your best.

>(2) I'm not opposed to any ideas in this paper. I think the paper stands on its own merits.

You're just preemptively setting a limit to how much thinking we can do. After all you made a post in this very thread:

>>So I guess maybe less "conspiracy theory" and more "recklessly dangerous" or "abandonment of the Hippocratic oath".

Which is odd, since medical research has no Hippocratic oath or recklessly dangerous caveats. After all, they were doing gain-of-function research on coronaviruses in the very city where covid-19 started. Unless geography is now a reckless pseudoscience which we must censor for the good of all.


I honestly can't follow what you're trying to say.

But John Ioannidis is a physician and has served in a number of medical organizations. Even non-medical researchers are bound by IRBs for research on humans, and more generally are bound by all sorts of codified ethical standards.


Judging by the downvotes on your post, ad hominem attacks against ideas are A Good Thing, Actually—I'm sure there's a published academic research study somewhere that quite conclusively proves this to be the case.

Is he a conspiracy theorist? I googled it, and here's the interview where he explains his views on COVID: https://www.medscape.com/viewarticle/933977?form=fpf Nothing in there looks like a conspiracy theory.

Maybe conspiracy theorist is the wrong term.

It's more accurate to say that his ideas were dangerous and fringe and unsupported by the science. And while he was derelict in the science, he was very active in promoting his opposition to lockdowns to the White House and to conservative media.

It would be more accurate to say that he heavily fueled the conspiracy theorists rather than he was one himself.


> It's more accurate to say that his ideas were dangerous and fringe and unsupported by the science. And while he was derelict in the science, he was very active in promoting his opposition to lockdowns to the White House and to conservative media.

I may have misunderstood your tone, but it sounds like you think it's a good reason to have a bad opinion of him as a person or a scientist, or even prevent his ideas from being heard? I wouldn't want to live in a society like that.

One of the main things that fuelled conspiracy theories was the draconian measures against dissenting opinions perpetrated by social media platforms. Silencing wrong ideas by force damages trust in science much more than engaging with them does, and gives those ideas much more credibility.


I'm generally radically non-judgmental toward people and a little harder than most on problems.

I don't have a bad opinion of him as a person, although I think he acted dangerously and in a politically motivated way. I think lots of folks were freaking out at the time and their reactions are understandable. I don't expect anyone to be super human. But lots of people were also advocating policies that would (and in some cases did) result in mass death. And I do expect professionals to check themselves and try to prove themselves wrong before they embark on a political mission like he did.

I don't have a particularly bad opinion of him as a scientist either. I've known several big-name scientist types, and usually they're very bright but only really reliable in their established field. You sometimes get to be a big scientist by taking a large contrarian bet, and I would guess he has a natural contrarian streak that served him well in some of his research. The problem with being a contrarian is that you're reactive. Your gradient isn't toward truth; it's away from what you perceive as the current central tendency. So you more often end up more wrong than everyone else.

While I don't have a bad opinion of him as a scientist, I do think this episode will make me read his papers much more carefully for conclusions he's reached by contrarian intuition rather than careful reasoning. And it does to me call into question whether his motivation was to find truth or rather to offer a Marx-style criticism of everything to show how much better he is. I don't think the criticize-everything approach has proven productive in the long run.

> One of the main things that fuelled conspiracy theories the most were draconian measures against dissenting opinions which were perpetrated by social media platforms.

I wasn't on social media, so I can't say. It does suck to have your opinion dunked on. On the other hand, social media was full of inorganic influence campaigns. I don't have all the solutions, but I think it's reasonable to have some counterpressure to misinformation.

> Silencing wrong ideas by force damages trust in science much more than engaging with them, and gives these ideas much more credibility.

I'm not sure about this. The campaigns to damage trust in science were quite pervasive and organized. I'd think they wouldn't have spent all that money if the public policies did it just as well without spending on the influence campaigns.

The ideal would be if everyone were educated enough to consume the science directly. But for various reasons mass education is considered political so there's a political divide over who has the foundations to understand it.



