
An alternative headline would be "Most Published Studies Are Wrong"

Am I wrong for considering this not-quite-a-crisis? It has long been the case that initial studies on a topic fail to get reproduced. That's usually because if someone publishes an interesting result, other people do follow up studies with better controls. Sometimes those are to reproduce, sometimes the point is to test something that would follow from the original finding. But either way, people find out.

I mean, I guess the real problem is that lots of sloppy studies get published, and a lot of scientists are incentivized to write sloppy studies. But if you're actually a working scientist, you should understand that already and not take everything you see in a journal as actual truth, but as something that might be true.




Yes! This is the point that most people miss. No scientist treats published studies as gospel. Our focus shouldn't be on exact replication, but rather on how generalizable such results are. If the results don't hold in slightly altered systems, they fall into the wastebin of ideas.


Maybe not, but these studies get swept up into meta-studies and books; think tanks and special interest groups write policy papers based on that secondary layer, which then become position papers that inform policy makers about Big Decisions. There's a lot at stake when only scientists are aware of how unreliable the underlying studies are.


The reason why meta studies get done is precisely because individual studies aren't reliable.


> Our focus shouldn't be on exact replication, but rather on how generalizable such results are.

This is how I normally look at things. If I can't easily replicate an experiment, then it's very likely to be wrong.

Sadly, it's pretty rare (and exciting) when you can easily do something based on the methods in someone's paper.


Well, that shouldn't necessarily be the standard. There are lots of people I know who can't do a miniprep or run a gel. It doesn't mean that those things don't work or aren't replicable.


> No scientist treats published studies as gospel.

Unfortunately that is a problem. Imagine yourself trying to create a simulation of a biological system, which has to rely on experiments. You may come up with a plan, but every little line on the plan will be either very doubtful or outright false. The problem is that many of these doubts could be dispelled if the experiment were a lot more stringent (much larger sample size, much more controlled conditions, etc.). That would cost a hell of a lot more, but it would give you one answer you can rely on.


I don't think making experiments more stringent is the answer here (btw, I've spent a lot of time trying to build simulations of biological systems). Usually, we aren't doing the right experiment in the first place; this is hard to figure out ahead of time. Again, read the ASCB piece that I linked to below... it's probably the most nuanced and interesting discussion on the subject.


Yes, thanks, that was a good read. It seems we need a "Map" of biological sciences, in which every study could be a 'pin' in a particular location, signifying that 'this study studies this very particular problem here'. Maybe that would help figure out where the biggest gaps are. Unfortunately, most studies broaden the impact of their results too much, to the point that reading the paper abstract can be misleading. Maybe people should just publish their results, but not be allowed to make any claims about them; let others and the community make those claims.


Also, you should read the Peter Walter piece that I think is the best discussion of the topic: https://www.ascb.org/newsletter/2016-marchapril-newsletter/o...


> No scientist treats published studies as gospel.

Why then do so many scientists in so many different fields insist "the science is settled"?


Which scientists? About what issues?


>That's usually because if someone publishes an interesting result, other people do follow up studies with better controls.

No, no, a thousand times no!

Most studies do not have follow up studies that confirm/refute the original. Often such a followup study is hard to publish. If you manage to reproduce it, you cannot publish unless it presents a new finding. If you fail to reproduce it, it often doesn't get published either. And no one writes grant applications that are for replication studies. The grant will likely go to someone else.

When I was in grad school, few advisors (engineering/physics) would have allowed their students to perform a replication study.

>But either way, people find out.

I wish I could find the meta-study, but someone once published a study of retracted papers in medicine. They found that a number of them were still being cited - despite the retraction (and they were being cited as support for their own papers...). So no, people don't find out.

>But if you're actually a working scientist, you should understand that already and not take everything you see in a journal as actual truth, but as something that might be true.

I agree. But then you end up writing a paper that cites another paper in support of your work. Or a paper that builds on another paper. This isn't the exception - this is the norm. Very few people will actually worry about whether the paper they are citing is true.

When I was doing my PhD, people in my specialized discipline were all working on coming up with a theoretical model of an effect seen by an experimentalist. I was once at a conference and asked some of the PIs who were doing similar work to mine: Do you believe the experimentalist's paper? Everyone said "No". Yet all of us published papers citing the experimentalists' paper as background for our work (he was a giant in the field).

Another problem not touched upon here: Many papers (at least in my discipline) simply do not provide enough details to reproduce! They'll make broad statements (e.g. made measurement X), but no details on how they made those measurements. Once you're dealing at the quantum scale, you generally cannot buy off the shelf meters. Experimentalists have the skill of building their own measuring instruments. But those details are rarely mentioned in the paper. If I tried reproducing the study and failed, I would not know if the problem is in the paper or in some detail of how I designed my equipment.

When I wrote my paper, the journal had a 3 page limit. As such, I had to omit details of my calculations. I just wrote the process (e.g. used well-known-method X) and then the final result. However, I had spent most of my time actually doing method X - it was highly nontrivial and required several mathematical tricks. I would not expect any random peer to figure it all out. But hey, I unambiguously wrote how I did it, so I've satisfied the requirements.

When I highlighted this to people in the field, they were quite open with another explanation: It helps them because they do not want their peers to know all the details. That allows them to have an edge over their peers and they do not need to race with them to publish further studies.

I can assure you: None of these people I dealt with were interested in furthering science. They were interested in furthering their careers, and getting away with as little science as is needed to achieve that objective.


>>That's usually because if someone publishes an interesting result, other people do follow up studies with better controls.

>No, no, a thousand times no! Most studies do not have follow up studies that confirm/refute the original. Often such a followup study is hard to publish. If you manage to reproduce it, you cannot publish unless it presents a new finding. If you fail to reproduce it, it often doesn't get published either. And no one writes grant applications that are for replication studies. The grant will likely go to someone else.

Sorry, let me be clear: If an interesting result is published, people will go to the trouble. Most results are of limited interest and mediocre.


> Sorry, let me be clear: If an interesting result is published, people will go to the trouble.

That's only true for a definition of "interesting" that is more like the sense most people assign to "astounding" or "groundbreaking", and even then it's not guaranteed, just somewhat probable. If it's both groundbreaking and controversial (in the sense of "immediately implausible to lots of people in the domain, but still managing to draw enough attention that it can't be casually ignored as crackpot"), like, say, cold fusion, sure, there will be people rushing to either duplicate or refute the results. But that's a rather far out extreme circumstance.


If a result opens up an entirely new paradigm, then you can bet there will be people trying to replicate the experiments.


>Sorry, let me be clear: If an interesting result is published, people will go to the trouble. Most results are of limited interest and mediocre.

If it's in a journal, it is interesting. Journal editors will require "interesting" as a prerequisite to publishing a paper. Papers do get rejected for "valid work but not interesting".

If journals are publishing papers that are of limited interest, then there is a serious problem with the state of science.

I'm not trying to split hairs. One way or the other, there is a real problem - either journals are not being the appropriate gatekeepers (by allowing uninteresting studies), or most interesting studies are not being replicated.


"Interesting" is vague and subjective. Some work is boring as hell but the results provide the foundation for things that are truly "interesting".


> When I highlighted this to people in the field, they were quite open with another explanation: It helps them because they do not want their peers to know all the details. That allows them to have an edge over their peers and they do not need to race with them to publish further studies.

I have always suspected that, but I've never heard anyone be that open about it.

At a previous job I had to implement various algorithms described in research papers, and in every case except one, the authors left out a key part of the algorithm by glossing over it in the laziest way possible. My favorite one cited an entire linear algebra textbook as "modern optimization techniques."


>I have always suspected that, but I've never heard anyone be that open about it.

Yes, they'll just say they expect their peers to be competent enough to reproduce, and a paper shouldn't be filled with such trivialities.

To get the real answer, talk to their grad students. Especially those who are aspiring for academic careers. They tend to be quite frank on why they will act like their advisors.

Oh, and citing a whole book for optimization - that kind of thing is quite common: "We numerically solved this complex PDE using the method of X", followed by a bare reference to a textbook. But the algorithm you actually implement is usually sensitive to tiny details (e.g. techniques to ensure tiny errors don't grow into big ones).
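To make that concrete, here's a minimal sketch (hypothetical, not from any paper discussed here) of the kind of detail that gets omitted: a textbook forward-Euler solver for the 1-D heat equation either works or blows up depending entirely on the step-size ratio dt/dx^2, a constraint the reference states somewhere in a chapter but the paper almost never repeats.

    # Hypothetical illustration: 1-D heat equation u_t = u_xx with forward
    # Euler in time and central differences in space. Stability requires
    # r = dt/dx^2 <= 0.5; cross that line slightly and roundoff noise in the
    # highest-frequency mode grows without bound.
    import numpy as np

    def heat_forward_euler(dt, n=101, steps=2000):
        x = np.linspace(0.0, 1.0, n)
        dx = x[1] - x[0]
        r = dt / dx**2
        u = np.sin(np.pi * x)  # initial condition; endpoints stay fixed near 0
        for _ in range(steps):
            u[1:-1] += r * (u[2:] - 2.0 * u[1:-1] + u[:-2])
        return float(np.max(np.abs(u)))

    print(heat_forward_euler(dt=4.9e-5))  # r ~ 0.49: solution decays smoothly
    print(heat_forward_euler(dt=5.1e-5))  # r ~ 0.51: unstable; roundoff noise amplified to huge values

Same "well-known method", two wildly different outcomes, and nothing in a three-page paper would tell you which side of the line the authors were on.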


There's a really good Google talk on this subject full of many concerning statistics and anecdotes:

John Ioannidis: "Reproducible Research: True or False?" -- https://youtu.be/GPYzY9I78CI


Yes, this is well known among working scientists. Even in my theoretical field it was well known that at least 50% of articles contain some mistake.


I think the hope with these systems is that eventually there will be a preponderance of evidence that supports the finding in general terms, not necessarily that everything in a given study is 100% correct. Later scientists will follow along and prove/disprove these findings. If the later studies split 50-50, then we have no idea. If it's 90-10, the findings indicate some kernel of truth. An astute grad student may also be encouraged to look at the 10% that disagreed to see why, or whether the 90% are too susceptible to selection bias (meaning they decided their conclusion from the beginning and are massaging the data to fit).
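As a toy illustration of that 50-50 vs 90-10 intuition (the replication rates and prior below are assumptions made up for the sketch, not figures from the thread), a simple Bayesian tally shows how the split shifts belief that the effect is real:

    # Hypothetical sketch: assume a real effect replicates 80% of the time,
    # a spurious one "replicates" 20% of the time (bias, chance), and start
    # from a 50/50 prior. Update on k successful replications out of n.
    from math import comb

    def posterior_real(k, n, prior=0.5, p_real=0.8, p_spurious=0.2):
        like_real = comb(n, k) * p_real**k * (1 - p_real)**(n - k)
        like_spur = comb(n, k) * p_spurious**k * (1 - p_spurious)**(n - k)
        return prior * like_real / (prior * like_real + (1 - prior) * like_spur)

    print(posterior_real(5, 10))  # 50-50 split: posterior stays at 0.5, no idea
    print(posterior_real(9, 10))  # 90-10 split: posterior > 0.99, some kernel of truth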

The wrong incentives for studies are a bigger problem. I think the only way to solve that is to require a higher threshold of peer review before one of these "findings" is put out to the public.


> But if you're actually a working scientist, you should understand that already and not take everything you see in a journal as actual truth, but as something that might be true.

I believe that's called "skepticism", which makes you a heretic and "anti-science" in certain fields.



