So I can't be the only one who has noticed the correlation between this and the field in question. Soft sciences like soc, psych, and medicine seem to have the most problems with it. I'm not saying hard sciences like physics don't, but it's less common there.
The math for the soft sciences isn't as concrete and doesn't provide as good a foundation. I think there are also major problems with the use of p-values. They are too easy to manipulate, and there is a lot of incentive to do so. Teach a science class (even in the hard sciences) and you'll see how quickly students try to fudge their data to match the expected result. I've seen even professionals do this. I once talked to a NASA biologist while trying to get his chi-square value, and it took a bit of pressing because he was embarrassed that it didn't confirm his thesis (it didn't disprove it either; the error was just large enough to allow for the other prevailing theory). As scientists we have to be okay with a negative result. It is still useful. That's how we figure things out. A bunch of negatives narrows the problem, and a reduced search space is extremely important in science.
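To put a number on "too easy to manipulate": here's a toy simulation (my own sketch, not from anyone's actual study, assuming numpy and scipy are available) showing why testing enough slices of pure noise all but guarantees a "significant" p < 0.05 somewhere:

```python
# Hypothetical illustration of p-hacking: run many tests on pure noise.
# With a 0.05 threshold, roughly 1 in 20 null comparisons "succeeds" by chance.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_tests = 20   # e.g., 20 different ways of slicing the same dataset
alpha = 0.05

false_positives = 0
for _ in range(n_tests):
    # Both groups are drawn from the SAME distribution: no real effect exists.
    a = rng.normal(loc=0.0, scale=1.0, size=30)
    b = rng.normal(loc=0.0, scale=1.0, size=30)
    _, p = stats.ttest_ind(a, b)
    if p < alpha:
        false_positives += 1

print(f"{false_positives} of {n_tests} null comparisons hit p < {alpha}")
# Expect about 1. Report only that comparison and it looks like a discovery.
```

Report only the test that "worked" and you've manufactured a publishable result out of nothing, which is exactly why negative results need to be reportable too.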
The other problem is incentives in funding. There is little funding to reproduce experiments. It isn't as glorious, but it is just as important.
>So I can't be the only one who has noticed the correlation between this and the field in question. Soft sciences like soc, psych, and medicine seem to have the most problems with it. I'm not saying hard sciences like physics don't, but it's less common there.
It is a problem in physics too, although a "different" one. See my comment:
Yes, but there is a huge difference in the degree of the problem. That's what I'm getting at. It exists, but in the soft sciences it is much more rampant. Compound that with the weaker analysis, and the problem becomes that you have to be skeptical of any result from the field.
Another important distinction between physics and, say, psychology is that the latter's studies often aren't testing a theory; they're testing an observation. A particular observation sometimes leads to the widespread assumption that a particular effect exists, without anyone trying to shape a theory about its cause; the claim is only that the effect exists. In physics, by contrast, it's all about fitting an observation into existing theory.
Soft sciences actually don't have a greater problem with replication. It is just more publicized, in part because those researchers are actually addressing the problem and making large-scale replication attempts.
Well, that depends. What kind of reproducibility are you talking about? If we're talking about something like the Higgs, then it was definitely reproduced. Same with gravitational waves.
But the other problem is that the soft sciences have a compounding problem: the one I mentioned about the foundation not being as strong. As a comparison, psychology needs p ≤ 0.05 to publish, while particle physicists need roughly 0.003 (3σ) for "evidence" and 0.0000003 (5σ) for "discovery". The big difference is that the latter are working off of mathematical models that predict behaviours, and results are compared against those predictions. You are operating in completely different search spaces. The math of the hard sciences lets you reduce that search space substantially, while the soft sciences don't have that advantage yet. Give them time and they'll get there; the math is difficult. Which really makes the fields hard to compare. They have different ways of doing things.
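If you want to sanity-check those thresholds yourself, they fall out of the standard sigma-to-p conversion for a normal distribution. A quick sketch (Python with scipy; the "discovery" figure quoted above matches the one-sided convention):

```python
# Converting particle-physics sigma thresholds to p-values.
# "Evidence" is conventionally 3 sigma; "discovery" is 5 sigma.
from scipy.stats import norm

for sigma, label in [(3, "evidence"), (5, "discovery")]:
    one_sided = norm.sf(sigma)       # P(Z > sigma)
    two_sided = 2 * norm.sf(sigma)   # P(|Z| > sigma)
    print(f"{sigma} sigma ({label}): one-sided p = {one_sided:.2e}, "
          f"two-sided p = {two_sided:.2e}")

# 3 sigma: one-sided ~1.3e-3, two-sided ~2.7e-3  (the ~0.003 above)
# 5 sigma: one-sided ~2.9e-7                     (the ~0.0000003 above)
```

Either way, the gap between 0.05 and a 5σ threshold is several orders of magnitude, which is the point.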
The huge, huge, huge majority of published papers aren't CERN-style monumental efforts with bazillions of repeated experiments you can use to get insanely good stats.
From my own experience in my PhD, I've seen outrageous replication problems in CS, microbiology, neurology, mechanical engineering, and even physics on the "hard sciences" side of things. I've also seen replication problems in psychology, sociology, and political science on the "soft sciences" side.
People who come from a "hard science" background seem to have this belief that it is way more rigorous than other fields. I disagree. If anything, the soft sciences are actually making a real push to address the problem, even if that means more articles being published saying that 40% of psych papers are not reproducible or whatever.