
Maliciousness, incompetence, and accidents all look EXACTLY THE SAME from a replication perspective. We can't tell the researchers' intent.

Until the "industry" (defined vaguely as scientists, their institutions, universities, funding entities, etc, etc) cleans house and punishes those researchers, we're quickly approaching a time where we'll have to take EVERY study skeptically until it can be replicated.

* Punishment could range from "no, we won't publish your stuff without data+methodology" to ratcheting back funding to "we publicly document your lying/incompetence" (hardest and riskiest) to a variety of other things.



> we're quickly approaching a time where we'll have to take EVERY study skeptically until it can be replicated

I've always felt like this should be the norm. Why would you trust something before it can be replicated? Even if it's unintentional, people make mistakes.


> I've always felt like this should be the norm. Why would you trust something before it can be replicated? Even if it's unintentional, people make mistakes.

If you are close enough with a scientist, generally they will admit they don’t trust any single study on its own.

Some fields also have guardrails, such as LIGO having two separate detectors with two independent teams.


Individually, yes, I've been there a LONG time.

Unfortunately, we have a media and political structure that uses the most recent study/model/whatever to advocate for, design, and enact policy before that replication ever happens.


> Maliciousness, incompetence, and accidents all look EXACTLY THE SAME from a replication perspective.

Why do you think that? If something fails to replicate, you can investigate the original paper, and there may be very clear evidence of fraud.


There isn’t really a way to “clean house” on a large scale; everyone has to somehow be more virtuous and not lie to themselves when p-hacking or publishing data they know someone wouldn’t be able to reproduce.


I do think that you can tell apart these cases. Outright fabricated data is very different from p-hacking, which is very different from a meaningless garbage paper.

On the other hand, convenient laboratory errors might be hard to tell apart from fabrication (though not always; sometimes people get caught using Photoshop), and statistical incompetence might be hard to tell apart from p-hacking. So I agree that this is why it's hard to punish fraud.
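The p-hacking being discussed here is easy to demonstrate with a quick simulation (a sketch in Python with made-up parameters, not any specific study's setup): a "researcher" measures 20 unrelated null outcomes per study and reports any one that clears p < 0.05. Even though every effect is zero, the nominal 5% false-positive rate balloons toward 1 − 0.95^20 ≈ 64%.

```python
import math
import random

def p_value_two_sided(successes, n, p0=0.5):
    """Two-sided p-value for a binomial proportion, normal approximation."""
    se = math.sqrt(p0 * (1 - p0) / n)
    z = (successes / n - p0) / se
    # Standard normal tail probability via math.erf
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

def run_study(rng, n_flips=100, n_outcomes=20, alpha=0.05):
    """One 'study': test 20 independent null outcomes (fair-coin flips),
    and report a positive result if ANY of them reaches significance."""
    for _ in range(n_outcomes):
        heads = sum(rng.random() < 0.5 for _ in range(n_flips))
        if p_value_two_sided(heads, n_flips) < alpha:
            return True  # a "significant" finding to write up
    return False

rng = random.Random(42)
false_positive_rate = sum(run_study(rng) for _ in range(500)) / 500
print(f"nominal alpha: 0.05, realized false-positive rate: {false_positive_rate:.2f}")
```

This is also why fabrication and p-hacking leave different traces: fabricated data has to be invented, while p-hacked results are real measurements filtered through many undisclosed comparisons.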


No, they don't. Sometimes incompetence can stumble on the correct answer: you could repeat the experiment, get very different results, be convinced of the charlatanism and incompetence of the original reporter, and then sigh a huge sigh of disgust because, even though the data may have all been wrong, it points to the same overall, big-picture conclusion.



