> A few years ago, John Ioannidis published an article on conflicts of interest in nutrition research. The conflicts you get in this field, he said, are different from those in, say, Big Pharma-funded trials of new drugs. Not only are there those usual kinds of financial conflicts—some research is paid for by the food industry; nutrition scientists have diet books to sell—but there are “nonfinancial” conflicts, too. If you’re a strong adherent to the particular kind of diet you’re researching (vegan, Atkins, gluten-free, etc.), Ioannidis argued, you should disclose this at the end of the paper, so readers can be fully informed about how the research was produced.
It's common to see forum comments like "we know this researcher is a big advocate of X, so this study is biased". This critique is usually very shallow and sets an impossible standard.
Where are you going to find researchers pouring years of their lives into researching a subject they have no interest in or opinions about?
If you were studying vegan or Atkins diets, for example, whether to switch to or from that diet is an ongoing personal choice in your day-to-day life, and either choice could be used to imply bias in your future studies. And after a few years of research, how could the evidence you collect not affect your personal choice? And if it didn't, what would that say about your belief in your own research?
Researchers are trained to be aware of their biases and to design experiments that reduce them, and peer review helps identify problems here too. Another point where the media jumps the gun: beyond peer review, other researchers should be able to replicate a study to check its results. If you become known for producing highly biased research that can't be replicated, this is surely going to impact your career (e.g. fewer citations, less funding, distrust from peers), so there's a very strong incentive to keep this in check?
I think the real problem here is the media's dysfunctional, click-bait approach to scientific reporting (exaggerated, sensationalist, premature, lacking any nuance, caveats, or details), combined with the public's very broken understanding of the scientific method: the notion that scientists are always changing their minds, little sense of how to judge the strength of evidence before accepting it as true, and a willingness to dismiss everything about a study because someone can spin a story about one researcher being biased.
> If you become known for producing highly biased research that can't be replicated, this is surely going to impact your career, so there's a strong incentive to keep this in check?
I would strongly say no.
If the vast majority of studies in the “soft” sciences are already not reproducible, the risk to a person’s career is minimal.
And that's before accounting for the common pattern of prioritizing short-term gains over mitigating long-term risk, especially when that risk is considered low.
> If the vast majority of studies in the “soft” sciences are already not reproducible, the risk to a person’s career is minimal.
Let’s not be so arrogant as to say this is isolated to the soft sciences. Soft science is only the most obvious case, because it’s much harder to run controlled experiments in the real world. Let’s not forget that a computer science PhD student committed suicide over the degree of academic fakery he was being forced to participate in at an elite institution. We still have nonsense like unreproducible ML papers, and physics has multiple examples of older models that were wrong and held up only by the political power of the academics who built their careers on them.