We have an epidemic of deeply flawed meta-analyses, says John Ioannidis (retractionwatch.com)
106 points by danso on Sept 20, 2016 | 49 comments



Ahhh yes, the Ouroboros of Scientific Evidence [0] strikes again. Personal opinion trumps meta-analysis. Science! You were the chosen one! Seriously, read this essay on the topic by Scott Alexander. It might change your life.

[0] http://slatestarcodex.com/2014/04/28/the-control-group-is-ou...


I can't quite work out what your comment is saying.

Are you criticising the man in the article (John Ioannidis) for egotism because he is somehow disparaging of further (meta) analysis of his results?

Or are you saying that Ioannidis has a point and that studies of studies are poor?


If you read the article that chongli linked (and I second their recommendation) then it becomes self-evident that they mean the latter.


I've read that link, but it's a bad idea to have comments that can't be parsed except in context of a 4,000 word essay.


You're probably right. I was literally about to fall asleep for the night and I wanted to make a short comment that might encourage people to read the essay. I hope the cheekiness didn't annoy too many people!


But it is a fantastic 4,000 word essay. I came here to post it and was happy to see it was already the top comment. I can't recommend it enough.


Every time I read Scott Alexander, I think there's either a YouTube video out there that explains the same thing with the same accuracy and twice the entertainment, or an essay out there that's just as long but goes twice as deep.


I agree. Well, mostly: edutainment YouTube videos are pretty much the opposite of entertaining for me (irritating, even). But I agree that Scott Alexander's essay length is an issue.

I read SSC, and like it, but an editor would improve him. I think his superfans are very entertained by his lame jokes, and like his particular voice, so extra length is more of what they want, even if it doesn't contain much new information. For me it's OK, but if the essays could be 20% shorter with about the same content, it would bump him up in my list.


I read "The Control Group is Out of Control" last night right before bed and it seemed like time stood still, it was so engrossing. So I thought I disagreed with your "essay length is an issue." But it turns out the essay is close to 5k words!

I haven't read much SSC, but something tells me I'd like his lame jokes.


Another trend in life science, which is on the path to reaching epidemic proportions, is the publication of conceptual "work": papers in peer-reviewed scientific journals about some conceptual idea or framework the authors may have had. Technically, conceptual papers are appropriate when obtaining real observations is irrelevant. The problem is that this is never the case in life science... So, basically, either they feel they can't be bothered to obtain factual information, or they don't want to risk being scooped, which means they assume their "novel" concept is somewhat obvious.

Common features of these conceptual papers include:

- coming from a high-profile lab at an Ivy League school (i.e. a high schmoozing factor);
- said lab typically being the best equipped to actually tackle that concept;
- not being supported by any new data, but piggybacking on previous data from others;
- relying on some form of deductive reasoning;
- reading more like a novel;
- claiming openness and future collaboration (but remember the second feature, about being the lab best equipped to tackle the concept).

At best these papers can be used to study informal vs. formal logic. In reality, I find these papers being used as a form of first-to-publish claim or as lobbying material to influence NIH funding. I think they work well as PR, and this is not surprising, since the appearance of doing something has more value nowadays than actually moving the needle. So my conclusion would be the same as Prof. Ioannidis's: these serve primarily as self-promotion and marketing tools. They lack the rigor and quality that we should expect from scientific journals, and should not be labeled as such.


Don't forget funding agency demands. Just got a new five-year project? Of course you need a deliverable after 12-18 months. Never mind that you haven't had any chance of producing good science in that timeframe (unless you take the PhD Comics version of the grant cycle as literal advice).

http://www.phdcomics.com/comics/archive.php?comicid=1431


As an aside, I have seen that grant cycle presented to new grad students by a senior researcher as the way things are actually done. And I can't say I actually disagree.


As they say, "it's funny because it's true".


The problem is that you can't really prove anything with counts over evidence, a.k.a. statistics. You can only do so to the extent that you define your own measure of proof, a.k.a. the p-value. And as Prof. Ioannidis has shown, it's perfectly possible to overrule your own good judgement and set the standard too low.
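
To make that concrete, here's a toy simulation (Python; all numbers invented for illustration) of what "defining your own measure of proof" buys you: the threshold you pick is, by construction, the rate at which you will "prove" effects that don't exist.

  # Two groups drawn from the SAME distribution, so the true effect is zero.
  import numpy as np
  from scipy import stats

  rng = np.random.default_rng(0)
  n_experiments = 10_000
  false_positives = 0
  for _ in range(n_experiments):
      a = rng.normal(0, 1, 30)
      b = rng.normal(0, 1, 30)
      _, p = stats.ttest_ind(a, b)
      if p < 0.05:          # "my measure of proof"
          false_positives += 1
  print(false_positives / n_experiments)  # ~0.05, i.e. the chosen threshold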

And if this is the case, then what's wrong with "some form of deductive reasoning", as you put it? It would be great if more people applied the formal rules of deduction to their conceptual papers. The work produced this way would not be less objective, less rigorous, or of lesser quality than work that gathers some data, crunches the numbers over it, and calls it a day. Quite the contrary.

If that is not the done thing then the problem is not with the conceptual papers per se, but rather with the way they are produced and one should advocate for better methodology, not different methods.


I remember that time when I said on HN that most meta-analyses are complete garbage because of statistical/mathematical effects that weaken the result and that the conclusions are necessarily as flawed as the underlying data. I got downvoted into oblivion.
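
The garbage-in-garbage-out point, as a sketch (Python; made-up numbers): inverse-variance pooling cannot remove a bias shared by the underlying studies, it only shrinks the error bars around it.

  # Five studies of a true effect of 0, all sharing the same +0.3 bias.
  import numpy as np

  rng = np.random.default_rng(1)
  bias = 0.3
  se = np.array([0.15, 0.20, 0.10, 0.12, 0.18])
  effects = 0.0 + bias + rng.normal(0, se)

  w = 1 / se**2                             # fixed-effect weights
  pooled = np.sum(w * effects) / np.sum(w)
  pooled_se = np.sqrt(1 / np.sum(w))
  print(f"pooled: {pooled:.2f} +/- {1.96 * pooled_se:.2f}")
  # A confident interval around ~0.3, nowhere near the true effect of 0.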


>> I got downvoted into oblivion.

Sometimes it's not what you say but how you say it.

If I say that "I believe there is an invisible world all around us", that's sort of the same as saying that most of the mass of the universe doesn't reflect light and so cannot be seen, as in dark matter. Except that a) I'm not a physicist and b) I haven't got a clue.

Or, say, you can get to the right figure with the wrong calculations. Say, 5 + 5 = 8 / 2 = 4 * 3 = 10. Whut?

So maybe you had the right idea but couldn't explain why it was the right idea, and people downvoted you? Because if you can't explain why something that sounds far-fetched isn't, people will tend to think you're talking nonsense. And rightly so: they should, and so should you.


Perhaps TFA is found more persuasive than your comment?


It's just interesting to see the opinions shift over time.


The further Science, Inc. gets from the scientific method and independent verification, the more of a joke it becomes.

Vox Day (no endorsement) makes the distinction between scientistry and scientody, which I think is an increasingly useful one.


One should note that meta-analysis is a form of rigorous independent verification.

There's also some flaws in his analysis: https://www.ncbi.nlm.nih.gov/pubmed/27620683#cm27620683_2695...


Meta-analysis is not a form of rigorous independent verification in the vast majority of cases. It is usually a statistical veneer placed, lazily, on top of a smear of vastly different experimental results, giving the appearance of rigor.

Replication is a rigorous form of independent verification.


How should the average researcher go about replicating something like 3-5 clinical trials that a drug company may have run for a human drug?

Should we replicate all that work? Or perhaps just obtain the raw, patient-level data from said company and analyze the data ourselves?

I am just curious which you view as more efficacious?


If you want to claim to be a scientist, you replicate the work.

If you want to be a statistician of increasingly obvious limited social utility, you rerun the statistics.


Question: why should we take your declarations of who is and isn't a scientist more seriously than the scientists themselves?

Let's take a typical cancer drug. It goes through safety, dosing, and efficacy clinical trials. In lucky cases, it will show efficacy in a multi-year clinical trial across many geographic locations with hundreds of patients.

If the trial is successful, the FDA will eventually approve the drug, and it can then be prescribed. The FDA mandates that follow-up study is done continuously to learn more about the drug: better indications for use, contraindications for when it won't work, etc.

Where does "replication" come into any of this? Why in the world would somebody replicate a dosing study and generate the same data? That would be unethical, dangerous, and counterproductive. At best, one would throw out bad data that was improperly collected. At worst, one would just abandon the drug and move on to a different candidate drug.

When it comes to human studies that come at real cost to human life, not using all the best available information is unscientific, unethical, should result in civil penalties, and should probably result in criminal penalties as well.

This is the situation the parent was asking about; making short blanket statements about what a scientist is and is not, without considering the real issues at hand, makes it seem like you're not engaging with the issue.

"Scientists replicate data" is a simple thing to say if you're looking at stars or running a particle collider or working on a new synthetic compound; taking that simple minded attitude is not appropriate for much of the most expensive research out there.

The question of what to replicate and when is a difficult one; it's the tradeoff between new discovery and making sure you're on the right path. If you can make a new discovery that simultaneously proves or disproves that you're on the right path, that's a smarter move, but it's not "replication."


THANK YOU. Nobody seems to pick up on the fact that those 3-5 trials cost, on average, in the hundreds of millions of dollars ($30-50M each).

Not to mention the countless Institutional, Human Subject, and ethics review boards that must be satisfied before we can even begin to think about laying hands on a human to conduct a study of any sort - let alone one with an investigational new drug.



Indeed - the Cochrane Collaboration is of quite limited utility: http://www.cochrane.org/evidence.

They are only the gold-standard for systematic reviews of therapeutic interventions.



If you want to be a scientist who's good at their job, you re-run the statistics to put your replication in context, inform the priors you are using, etc.
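
A minimal sketch of "putting your replication in context" (Python; invented numbers; a normal-normal conjugate update with the original study as the prior):

  import numpy as np

  prior_mu, prior_se = 0.40, 0.10  # original study's effect estimate
  rep_mu, rep_se = 0.05, 0.12      # your replication's estimate

  # Precision-weighted combination = conjugate normal posterior.
  w_prior, w_rep = 1 / prior_se**2, 1 / rep_se**2
  post_mu = (w_prior * prior_mu + w_rep * rep_mu) / (w_prior + w_rep)
  post_se = np.sqrt(1 / (w_prior + w_rep))
  print(f"posterior: {post_mu:.2f} +/- {1.96 * post_se:.2f}")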


> Or perhaps just obtain the raw, patient-level data from said company and analyze the data ourselves?

Data can be faked or manipulated unless you can trace it back to its origin and ensure it was recorded properly. There are multiple cases (even recent ones) where investigators modified the results of their trials to make them look better than they were.

Only replication can alleviate that concern.


Though critical reading of raw data can reveal fake entries, which is equally interesting.

I agree that replication will always be better and should ideally be mandatory. But meta-review is still way better than nothing.

And sometimes you just can't replicate for various reasons.


I don't think it's obvious that meta-review is better than nothing. It can lend credence to incorrect/fraudulent results, making actual replication appear less important.


Better to replicate it. Science ain't easy.


Then you have to summarize those replications and decide whether or not the replication studies replicated findings... which means meta-analysis.


No, you don't need to do meta-analysis. Without combining the results of multiple trials together, you can see if an observed effect in one trial is found in other trials through independent analysis.


But explaining whether "We didn't get the same answer" is the result of variation in the effect estimate or of a genuine failure to replicate is one aspect of meta-analysis.

The purpose of meta-analysis goes beyond pooling. Indeed, one of the first steps in meta-analysis is "Is pooling even a good idea?"
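
That first step is a heterogeneity check; here's a rough sketch (Python, illustrative numbers) using Cochran's Q and the I^2 statistic:

  import numpy as np
  from scipy import stats

  effects = np.array([0.1, 0.5, -0.2, 0.8, 0.3])
  se = np.array([0.15, 0.20, 0.10, 0.25, 0.12])

  w = 1 / se**2
  pooled = np.sum(w * effects) / np.sum(w)
  Q = np.sum(w * (effects - pooled)**2)     # Cochran's Q
  df = len(effects) - 1
  i2 = max(0.0, (Q - df) / Q) * 100         # I^2: % variation beyond chance
  print(f"Q = {Q:.1f} (p = {stats.chi2.sf(Q, df):.4f}), I^2 = {i2:.0f}%")
  # A large I^2 / small p says the studies disagree beyond sampling error,
  # so a single pooled number would be misleading.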


It can be rigorous and useful. Let's not throw the baby out.


One should note that meta-analysis is a form of rigorous independent verification.

It most definitely is not. Meta-analysis performs aggregation of existing research data in an attempt to validate specific research results for a larger group than initially tested. As such, it is primarily an inductive proof. That it can also validate or invalidate prior research is more of an artifact than a goal.

A rigorous independent verification can be performed deductively. No inference required.


There are steps in properly performed meta-analysis that are very much about verification of a field of research as a whole, rather than of a single study: for example, methods to detect likely publication bias, or heterogeneity statistics that indicate whether or not a given literature is in a state to be pooled at all.

Both are methods to reveal "Something is wrong here..."
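
For instance, a rough Egger-style regression for funnel-plot asymmetry, one common publication-bias check (Python; the data here are fabricated to show the small-study pattern):

  import numpy as np

  # Smaller (noisier) studies reporting larger effects: the funnel
  # asymmetry that publication bias tends to produce.
  effects = np.array([0.90, 0.60, 0.50, 0.35, 0.30])
  se = np.array([0.40, 0.30, 0.22, 0.15, 0.10])

  precision = 1 / se
  z = effects / se
  slope, intercept = np.polyfit(precision, z, 1)
  print(f"Egger intercept: {intercept:.2f}")
  # An intercept well away from zero flags asymmetry; a formal test
  # would compare it against its standard error.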


Vox Day is literally a creationist (if an old-earth one), so might not be a great name to invoke for any aspect of thinking about this.


Vox Day is many, many things, few of which I'd want to be associated with.


Soooo a meta-analysis of meta-analyses?


> The increase (in the number of meta-analyses) is a consequence of the higher prestige that systematic reviews and meta-analyses have acquired over the years, since they are (justifiably) considered to represent the highest level of evidence

No. The increase is because it is cheaper to do a meta-analysis than it is to design and conduct experiments. Meta-analyses also carry less reputational risk.


There are other, non-nefarious reasons.

1. They make excellent student projects. Part of this is cheapness, sure, but part of it is that a meta-analysis can be done relatively quickly. Some observational studies will take years to complete; in the meantime, your Master's student needs something to do.

2. They are often "Step 1" of a number of study designs. For example, if one is eliciting priors for a Bayesian analysis, or in my case trying to parameterize a theoretical model, "Is there a meta-analysis on this, and if not, can we do one?" is one of the first questions asked.

3. It allows participation in a field. For example, I have thoughts about some aspects of clinical medicine. I am unlikely ever to run a clinical trial, what with not having a position in a medical school. I can, however, perform a meta-analysis of trial data as well as (or possibly better than) the people performing the studies. Running a study and conducting a meta-analysis are not necessarily the same skill set.


> There are other, non-nefarious reasons.

Cost is a nefarious reason?


There's an undertone in this thread that meta-analysis is just what you do if you can't run a study. I wanted to note that there are scientific reasons to perform one in addition to logistic ones.


Review articles do tend to get a higher number of citations than standard papers. That could be another motive.


Until the incentives change in academia...


In many ways, the title is an example of a deeply flawed meta-analysis.



