
Unfortunately, the reasonable middle ground is, or probably should be, closer to the latter end than the former.

Put simply: science is advertised as self-correcting, but in reality it isn't. A representative experience is documented here: http://crystalprisonzone.blogspot.com/2021/01/i-tried-to-rep...

So the reason people learn a generalized distrust of science is that, too often, the self-correction never happens. Bad science is published, applauded, cited, breathlessly covered in the media, and may even be replicated, yet the first time outsiders to the field actually read the paper, they realize it's nonsensical. Then they realize that nobody cares, because careers were made on this stuff, so why would anyone inside the field want to unmake them?

The erosion of trust doesn't come from bad results per se, but from the frequent lack of any followup, combined with the absence of any institutional mechanism to detect these problems in the first place beyond peer review, which is presented as a gold standard but is in no way adequate as one.

For example, consider how programmers use peer review. We use it, but we use lots of other tools too, because peer review alone is hardly enough to ensure quality. Now imagine you stumbled into a software company with a cast-iron policy that, because patches get reviewed by coworkers, you simply don't need a test suite, manual testing, a bug tracker, code comments, security processes, or strong typing. Its promotion process is simply to rank developers by commit count, promote the top 10% every quarter, and fire the bottom 10%. Moreover, everyone there thinks you're nuts for suggesting there's any problem with this. You'd want to get out of there pretty fast, but that's pretty much how (academic) science operates. So of course this degrades trust.
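
To make the analogy concrete, here's a minimal sketch of the kind of automated check that catches what review alone misses (Python/pytest assumed; the function and the bug are hypothetical):

    # test_allocation.py -- run with pytest. Hypothetical example.
    def allocate_budget(total_cents, n_teams):
        """Split a budget evenly. A reviewer approved this; it drops cents."""
        return [total_cents // n_teams] * n_teams

    def test_allocation_conserves_money():
        shares = allocate_budget(100, 3)
        # Fails: 3 * 33 == 99, so one cent vanished. The test catches it
        # mechanically on every change; a skimming reviewer often won't.
        assert sum(shares) == 100

The point isn't this particular bug; it's that the check runs automatically, independent of anyone's attention span or career incentives.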




Maybe I'm missing something, but 'self-correcting' doesn't necessarily mean 'immediately self-correcting'. I think it's safe to assume that incorrect studies don't cement our world view or entirely stop us from questioning the studied topics again.

The way I see the self-correcting nature of science: the accuracy of our views on any specific set of topics increases over time (to some approximation).
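
As a toy illustration (all numbers invented, and assuming each study is an unbiased, independent noisy estimate, which is the generous case):

    # Toy model only: studies as independent noisy measurements of a
    # fixed truth. Pooling them converges, however bad any one study is.
    import random

    random.seed(0)
    TRUTH = 42.0
    estimates = []
    for n in (1, 10, 100):
        while len(estimates) < n:
            estimates.append(random.gauss(TRUTH, 10.0))  # one noisy study
        pooled = sum(estimates) / len(estimates)
        print(f"after {n:3d} studies: pooled estimate = {pooled:.2f}")

The convergence assumes the errors are noise rather than shared bias; correlated errors don't average out.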


Self-correcting doesn't mean immediately self-correcting, but it does imply self-correcting within a somewhat reasonable time period, and ideally not needing to self-correct too often.

What's reasonable? Well, probably not years or decades. Average people can't make major errors that destroy the value of their job output and then blow it off with "well, the company self-corrected eventually, so please don't fire me". When they judge science, they will judge it by the standards they themselves are held to in normal jobs.

And what's too often? Well, papers that don't replicate should probably be a clear minority, rather than (as in some fields) the majority. And recall that failure to replicate is only one of many things that can go wrong with a study; even if the replication rate were 100%, many fields would still be filled with unusable papers.
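
A back-of-the-envelope version of that last point (every rate here is invented for illustration): if replication is just one of several roughly independent hurdles, the usable fraction is their product:

    # Illustration only; all rates are made up. A paper is usable only
    # if it clears every hurdle, so the fractions multiply.
    hurdles = {
        "replicates": 0.60,
        "sound statistics": 0.80,
        "valid design and measures": 0.70,
        "answers a question worth asking": 0.50,
    }
    usable = 1.0
    for name, rate in hurdles.items():
        usable *= rate
        print(f"after '{name}': {usable:.0%} of papers survive")
    # With these numbers ~17% survive; even forcing replication to
    # 100% only lifts that to 0.8 * 0.7 * 0.5 = 28%.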



