> The reproducibility difficulties are not about fraud, according to Dame Ottoline Leyser, director of the Sainsbury Laboratory at the University of Cambridge. That would be relatively easy to stamp out. Instead, she says: "It's about a culture that promotes impact over substance, flashy findings over the dull, confirmatory work that most of science is about."
Well, maybe I'm too much of a layman, but that doesn't quite add up for me. Is declining to call it fraud just about protecting people's egos and saving face?
Or is it like an accountant who completely screwed up his work and got the numbers wrong because he was a buffoon, not a fraudster? I guess that would need a different word than fraud.
A lot of it is people overhyping their results and cherry-picking their data to fit a narrative. Can you blame them? You can literally build a career off a paper or two published in Science or Nature.
Meanwhile, no one controlling funding or faculty appointments cares that you did amazing, rigorous work if it leads to less interesting conclusions. This is especially true if you generate null results, even though that work may have advanced your field. It's a dangerous incentive system.
Another thing not mentioned is that the level of detail in many papers' methods sections is insufficient to adequately reproduce the work. This can be due to word-limit constraints, or because people forget to include, or aren't even aware of, key steps that affect their results. I've been on projects where seemingly irrelevant steps in our assay prep significantly changed the experimental outcomes.
Do your motivations matter if you do incorrect work? In the example I gave, an accountant can definitely face _heavy_ pressure from his employer to "make the numbers work".
But if the numbers superficially "work" without adding up, who cares what the motivation was? That is buffoonery or fraud.
> Do your motivations matter if you do incorrect work?
YES! If I'm careless, my results won't match the data and someone can catch my mistake. If I'm trying to defraud you, I'll fix that problem by making the data fit, and it's much harder for anyone to find the mistake.
If I'm a careless accountant we can audit my spreadsheets and find the errors.
If I'm a crooked accountant, I'll have deliberately hidden the "error" in shell companies or offshore accounts, and it will survive anything short of intense scrutiny.
It's more like all the competitors in a market lowering their safety standards to cut costs. If buyers can't accurately assess value, then it turns into a bad situation for everyone.
Anyone trying to do the right thing goes out of business, and someone cutting corners gets their business.
So it's a tragedy-of-the-commons-style "collective action" problem.
It's more like a guy who went to Vegas and came back with thousands of dollars, and decided to tell his friends he's a master poker player rather than that he got lucky. Interesting findings happen in scientific experiments, and it's often hard to say why, but we should stop acting as if they're always real (and stop funding only the quests for ever more interesting findings).
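To make the luck point concrete, here's a minimal simulation sketch (Python; all parameters are made up for illustration). Even when the true effect is exactly zero in every experiment, about 5% of runs still clear the conventional p < 0.05 bar by chance alone, and those lucky winners are precisely the ones that look like flashy findings:

```python
# Sketch: many experiments where the true effect is exactly zero.
# Roughly 5% still clear p < 0.05 by luck alone -- and those "lucky"
# runs are the ones that get told as master-poker-player stories.
# (Illustrative only; sample sizes and counts are invented.)
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
n_experiments = 1_000
n_per_group = 30  # hypothetical sample size per arm

false_positives = 0
for _ in range(n_experiments):
    control = rng.normal(0.0, 1.0, n_per_group)  # no real effect:
    treated = rng.normal(0.0, 1.0, n_per_group)  # same distribution
    _, p = ttest_ind(control, treated)
    if p < 0.05:
        false_positives += 1

print(f"{false_positives} of {n_experiments} null experiments "
      f"({false_positives / n_experiments:.1%}) looked 'significant'")
```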
Context: you are a young scientist, under constant evaluation that may not only get you fired but also invalidate much of your previous work. You have to produce flashy results to advance your career. You _really_ do not want to get stuck where you are now.
Now, let's decide on a new experiment. Do you choose:
a - A boring but important experiment that you can't hype but that will surely advance your field;
b - A flashy, risky experiment that probably won't lead anywhere, but will change your life if you get lucky.
Now, let's say you go with "b" (that's a no-brainer). Four years into the experiment, your results aren't going anywhere. Do you:
a - Accept that it's a failed experiment, accept the failure that will set your career back four years, and start again;
b - Insist on getting more data. Insist on getting more data. Insist on ... oh, never mind, that last batch of data is impressive¹; publish it and move on.
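That "insist on getting more data until it looks good" move is what statisticians call optional stopping, and unlike the previous sketch (luck selected across many experiments), it inflates false positives within a single experiment. A minimal sketch, again in Python with invented parameters, of peeking at a true-null experiment after every batch and "publishing" the first time p dips below 0.05:

```python
# Sketch of optional stopping: keep collecting data from a true-null
# experiment and stop the moment p < 0.05. Repeated peeking pushes the
# false-positive rate far above the nominal 5%.
# (Illustrative only; batch sizes and peek counts are invented.)
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(1)
n_runs = 1_000
batch = 10        # hypothetical samples added per "one more batch"
max_batches = 20  # give up after this many peeks

lucky_publications = 0
for _ in range(n_runs):
    control, treated = [], []
    for _ in range(max_batches):
        control.extend(rng.normal(0.0, 1.0, batch))
        treated.extend(rng.normal(0.0, 1.0, batch))  # same distribution
        _, p = ttest_ind(control, treated)
        if p < 0.05:  # peek: stop as soon as the data look impressive
            lucky_publications += 1
            break

print(f"'significant' after peeking: {lucky_publications / n_runs:.1%} "
      f"vs the nominal 5%")
```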
This quandary is a great argument for never becoming an academic researcher.
It would seem to me that science needs to eliminate "career bias" or "mortgage bias" or "ramen bias" from its results. Negative results need to be just as publishable as significant results.
If I were tyrant of the Ivory Tower, I would decree that results be blinded until after acceptance for publication. I would further decree that prestige be allocated such that the first and second replications have equal prestige to an original publication, and successive replications are worth less prestige, but are still worth attempting.
The only failed experiment is one that does not advance the body of human knowledge. Negative results still let us know that one thing was tried, and it didn't work--it crosses off one line in a process of elimination. And those that fudge data and methods to produce the appearance of a result, and those that are so flawed as to be non-reproducible, are failures in that sense, even if they still allow for the career advancement of a few people.
I don't know exactly how the prestige points work, because I'm not a tenured full professor on a committee deciding which candidate for hiring (or tenure, promotion, or whatever else) has the best CV.
I would guess that original research showing significant results, published in a major journal, with a lot of citations by peers, earns the most points. You have to ascend higher on the prestige point leaderboard to get the career advancement achievements and character perks.