Not everything gets retracted either. There's a surprisingly deep rot in many parts of science. There are strong incentives to publish, and a lot of the methods you can use to inflate statistical significance (i.e., p-hacking) are hard to distinguish from publication bias and other innocent explanations for results falling outside of statistical expectations.
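To make that concrete, here's a toy simulation (Python with numpy/scipy; all numbers invented for illustration). Under a true null, a literature shaped by pure publication bias (honest single analyses, only significant results submitted) and one shaped by mild p-hacking (trying analysis variants until one clears p < 0.05) leave essentially the same trace: p-values spread uniformly just under 0.05.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)

    def one_null_pvalue():
        # two groups of 30, no real difference between them
        a, b = rng.normal(size=30), rng.normal(size=30)
        return stats.ttest_ind(a, b).pvalue

    # Publication bias: one honest test per study, publish only if significant.
    bias = [p for p in (one_null_pvalue() for _ in range(20_000)) if p < 0.05]

    # p-hacking: try up to 5 analysis variants, publish the first hit.
    hack = []
    for _ in range(20_000):
        for _ in range(5):
            p = one_null_pvalue()
            if p < 0.05:
                hack.append(p)
                break

    # Both published records look alike: p-values uniform on (0, 0.05).
    print(np.mean(bias), np.mean(hack))  # both around 0.025

From the published record alone you can't tell which process produced it, which is exactly why p-hacking is so hard to police.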
Preregistration might help, but it doesn't really address the misaligned incentives that are at the heart of academic fraud.
Even articles that publish legit findings tend to embellish data. I do this for a living: I often try to reproduce prominent results, and I regularly see things that are too good to be true. This is bad because it pushes everyone to do the same, as reviewers are now used to seeing perfect, pristine data.
I have been asked to manipulate data a few times in my career. I have always refused, but it came at the cost of internal fights, exclusion from other projects for being "too idealistic", and missed promotions. The incentives are just perverse: fraud and dishonesty are rewarded. Pretty depressing.
I think academic research is becoming very inefficient, and traditional academia might eventually stagnate. If you don't play the game I described above, it is really hard to stay afloat. I guess industrial labs, where incentives are better aligned, might become more attractive. I have seen lots of prominent scientists move into industrial labs recently, something that would have been hard to imagine even a few years ago.
Yep. Worse, p-hacking can be done by accident. I mean, the term implies intent, but a dogshit null hypothesis is problematic regardless of whether it is dogshit on purpose or merely due to lack of skill on the part of the researcher. Either way, it pumps publication numbers and dumps publication quality. If 100% of researchers were 100% honest, we would still see this effect boost low-quality research.
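Here's a quick toy sim of the accidental version (Python with numpy/scipy; parameters made up for illustration): a perfectly honest researcher measures 20 unrelated outcomes per study, no true effect exists anywhere, and any outcome clearing p < 0.05 gets written up.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n_studies, n_outcomes, n = 10_000, 20, 30

    false_positives = 0
    for _ in range(n_studies):
        # two groups per outcome, no real difference on any of them
        a = rng.normal(size=(n_outcomes, n))
        b = rng.normal(size=(n_outcomes, n))
        p = stats.ttest_ind(a, b, axis=1).pvalue
        if (p < 0.05).any():
            false_positives += 1

    print(false_positives / n_studies)  # ~0.64, vs the nominal 0.05

That's 1 - 0.95^20 ≈ 64% of null studies yielding at least one "significant" finding, with zero dishonest intent anywhere in the pipeline.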
That number alone doesn't say much. Yes, it sounds like a lot in absolute terms, but consider that about four million papers were published in 2023 alone, and a bit under 70 million since 1996 [1].
This isn't a big deal, since no one reads most of those papers; it's mostly an invisible sacrifice to the metrics gods. Very prominent papers like the one discussed, on the other hand, have much bigger consequences.
Yes, very serious consequences indeed. (Note: I did my best to back up my statements with high-quality, unbiased, factual references. Please read them before disagreeing with my description of the result.)
I’m 100% in agreement that there is a massive reproducibility crisis in science and that the publish-or-perish model is broken.
But, for completeness: paper retractions can happen for many reasons, not all of them nefarious. For example, if the terms of use for a certain data set change, you could be required to retract your paper and remove that data from the analysis. That said, it could still be that most retractions come from authors trying to game the system and getting called out.
Maybe retraction should be done in a competitive fashion, with negative points awarded to researchers, universities, and journals.