Why Is So Much Reported Science Wrong, and What Can Fix That? (alumni.berkeley.edu)
109 points by tokenadult on Dec 20, 2015 | 51 comments



It seems to me that the problems with science as it is done today are open secrets, and can essentially be summarized as:

1: Publication pressure. There is too much emphasis on creating publications. The end result is that people rush out unfinished work, or end up diluting their findings by unnecessarily spreading them across several papers. People are also discouraged from undertaking projects that may take a long time to "pay off" with a publication.

2: Scientific networking. In scientific fields, people get to know each other, and those relationships still heavily influence the process. Fame and reputation matter when getting published, not to mention friendships with publishers or reviewers.

3: Bias towards excitement/positive results. This, I believe, is not entirely the fault of the scientific community. If left to their own devices, I think there would be plenty of scientists willing to double check old findings, or publish negative results. However, it's hard to convince people to pay you or employ you if your work is perceived as "boring," "negative," or as some sort of failure.

There are a handful of smaller issues (e.g. excessive emphasis on "impact factor") but I suspect that they are mostly symptoms of the above issues.

I've always believed that the whole "publication model of science" is due for review. It seems to be a product of the 18th century rather than something we'd decide was a good idea today. If we do it right, I feel like the positive results bias would immediately go away.

I don't think we can "fix" issue #2 in any really meaningful way. Even if you were to take people's names off of publications, they would still chat with each other at meetings or through collaborations. I think the best thing to do is not try to prevent people from influencing each other, but simply to make the whole process more transparent.


The problem is not the publication model; it is the incentive structure that encourages scientists to rush out publications without checking their data in enough detail, or that puts them under so much pressure to generate “exciting” results that they falsify them.

I will share an anecdote about a paper I was involved in. The work was done in collaboration with another lab within my school. One of the results suggested that one of the instruments was out of alignment and possibly giving false results. I raised this with the head of the other lab, and after speaking with his post doc he said that everything was fine. I asked to see the calibration data (there was none), but I was assured that everything was fine and they were going to submit it as it was. At that point I went thermonuclear and said that if this paper was submitted without the dubious values being checked, I would write to the journal asking for it to be withdrawn. This caused a huge ruckus within the school, including visits from the head of school and the dean telling me I was being “unreasonable”. I stood my ground, and the post doc eventually ran the calibration experiments, which showed the instrument was out of alignment and the results were wrong.

Of course this burned a lot of bridges for me and it would have been better for me to just shut up and let the paper go out wrong.


I'm glad you stood your ground. But what I don't get is the mindset of everyone who wanted to publish the potentially-faulty data. The moment anyone tries to build on the result, its faultiness will be known. Is that not more embarrassing?


The reason is pressure to get the papers out as fast as possible or else you won’t get any more grants. It should be embarrassing to publish garbage, but for lots of people who have succeeded in the current system it appears not to be.

Everyone involved was rather sheepish afterwards except the post doc. He had felt much the same as me, but didn’t feel he could raise the issue with his boss. Privately he was very grateful I had stood up to his boss, as he didn’t feel he could do the same and was under enormous pressure to pump out the data. All round not good.


Sometimes I wonder if I'm in a bubble. My PI thanked me when I told him I had discovered that our data was systematically biased and it would take weeks to correct the issue.


Yes, most PIs are happy to avoid the embarrassment and only want to publish good data.

The problem is not that most scientists aren't trying to do the right thing, but that they are under enormous pressure to pump out results. It only takes a few succumbing to that pressure to destroy the public’s faith in science, and the consequences of that are catastrophic.

We must solve this problem or we will not have science.


I know very little about academia. But in the context of today's world of information and collaboration, it seems to me there has never been a better time to (A) come up with something better than the publication and (B) put a lot more effort into reproducing results independently. I mean, what is the whole publication, review, reproduction system if not an early form of crowdsourcing?

To take a simplistic example: a publication norm that includes instructions for reproduction, ideally requiring as few resources as possible.


Instructions for reproduction are mandatory already. That said, you probably can't pick up any paper and reproduce the experiments exactly without asking the authors a question or two, at least in biology.

Systems are getting increasingly complex as well. In my lab, it literally takes several weeks for people to learn how to do our assay, and that's with constant feedback from an experienced user. Every step is published but that still doesn't help when things don't go according to plan. Also, not every lab has the same equipment. Even if we provided free training, a lab would still need to drop $300k to get all the machines required, which are not common.

Not every result is worth reproducing either. If someone publishes a paper that shows several lines of evidence for the same thing and they've done all reasonable controls, and it doesn't disagree with any existing models, why would you reproduce that? Outside of deliberate fraud, it's a pretty solid bet that it's true, and you can save enormous amounts of time and money by proceeding to build off of it instead of reproducing it first. And that's really what it comes down to: no one's paying you to do this kind of work. Governments would have to fund it, because you're fundamentally asking for twice as much work to be done, which will cost twice as much money (and time).

But take heart: when someone DOES make an outrageous claim, it's very common for labs to try to reproduce it (or really, disprove it). See [1] and [2] for recent examples.

[1] "STAP stem cell controversy ends in suicide for Japanese scientist" http://www.latimes.com/science/sciencenow/la-sci-sn-stap-ste... [2] "Water bears’ genetic borrowing questioned" https://www.sciencenews.org/article/water-bears%E2%80%99-gen...


If I were a fraud, this is exactly what I would do. Take an existing paper and just make up results that confirm the work, with a slight twist to make it publishable. Rinse and repeat, and nobody will ever catch you.


It's rare for bad results to get called out. The usual fate of bad research is a fade into obscurity.

Publishing questionable results quickly has many benefits (if you happen to be right) and few consequences. Just don't draw too much attention to yourself by over-hyping.


Yep if I had not put my foot down it probably would have slipped by and never been noticed until some poor unfortunate PhD student tried to build their research off it.

Another anecdote: a certain famous professor (since dead) at my alma mater managed to destroy at least three PhD students' careers via fraud. He had faked the initial data and then put them to work on projects based on it. The whole thing only came to light (internally only, as it was all hushed up) when he died and his students moved to a new supervisor.


Yes and no. What I envision is a more collaborative process where the faulty data (and your calibration concerns) could be available for scrutiny before your work was officially finished.

I feel like if we did it right, it would be possible to define "units of scientific work" to be something other than "finished publications." An incentive structure designed to maximize the new units would place more value on verification, collaboration, and negative results.


A system that enables (good) risk-taking and tolerates failure, while still clearly recognising failure, is relatively rare and powerful. The first part is reasonably well understood in business, but I don't think the second is.

In most emergent human systems there are mechanisms that discourage risk/failure and obscure failure. Face saving, diffusion of responsibility and the like. These exist in commercial, public and political organisations and are often really baked in to the core.

Allowing failure and burying it are very different in terms of dynamic effects. Ultimately I think success requires reward, even if it is intrinsic. That means burying failure doesn't help. There's a reality that science is risky. Not everyone's talent, luck, or instincts are equal.


4: Grant bureaucracy and politics. I have heard of professors and PhDs who spend 10% to 20% of their time writing grant applications and fulfilling related bureaucratic requirements like progress reports. Furthermore, they tend to focus on topics that increase the likelihood of getting the grant, instead of pursuing what they honestly believe would be best. In the US, this leads many to claim their research is security-relevant, whereas in the EU, researchers try to find a sustainability angle in their research in order to get their grants.


4. The publishing model is optimized for human peer review, not machine peer review. Publishing machine-friendly experimental evidence is intrinsically the same thing as improving reproducibility.


There is also a political correctness bias. Some research cannot be easily questioned, because the questioner's intent can be misrepresented by those who want the research to stand or who are too sensitive about the subject. The example given in the article is a perfect demonstration. Things that tend to reinforce what we believe, or what we want others to believe we believe, are easy to support.

Still, as for this door-to-door study, the Wilder effect is a good example of the bias you get when asking people sensitive questions.


"Any observed statistical regularity will tend to collapse once pressure is placed upon it for control purposes." - Goodhart's Law

Rewarding scientists for publishing work -- whether or not it has passed rigorous scrutiny -- is the mistake.

The monetary system for awarding grants is completely broken. Nobody wants to admit they have their hand in the cookie jar for fear of having the lid slammed down on it. Universities take a cut of the grant money to subsidize their staff payroll. Professors gain notoriety and a significant increase in income when their published work leads to being awarded a grant. Industry and NGOs have their own financial incentives to 'influence' scientific reporting in ways that favor their own biases. Nobody calls attention to the corruption because... Woohoo! Free money!

I find it very difficult to trust any of the scientific reporting related to current affairs and/or politics. The incentives for 'bad actors' who commit intellectual dishonesty are too high.


As with everything to do with smart people, it is all about incentives. If you create a system where accuracy of results doesn't matter and where you must publish or die, then don't be surprised if what you get is lots of crap publications with dubious or false results. Hoping for a different outcome without changing the incentives is fantasy thinking.

The positive is that there is a lot we could do to change the incentives, but there are some powerful forces benefiting from the current structure.

Edit: I should actually say what we could do. We need to focus on the incentives rather than the problem. Fix the incentives and the problem goes away on its own; focus on the problem and you just end up shifting it somewhere else - the "squeezing the balloon" effect.

The best solution in my opinion is to move from our crude "most publications = funding" model to a hurdle + lottery model. With this you have to publish enough to prove that you are capable of doing good research (the hurdle) and once you have done this you go into a lottery from which we pull out grant winners until we have used up all the funding available.

The reason this idea is not popular is it would not work in favour of the current grant winners.


Are you aware of any implementations of that lottery model in the wild?


No, I am not. It gets brought up every so often as the most rational approach, but those who have climbed the greasy pole of the current system, and hence control it, are none too keen to change it.

The usual argument against a lottery is that it does not reward the best scientists and will give grants to third-rate hacks. This is true, but it can be solved by having different lotteries with different thresholds. Publish one paper in the Journal of Useless & Pointless Results and you go into Pool A, where you might have a 0.1% chance of getting a grant. Publish 20 articles in Nature and Science and you go into Pool H, where you have a 75% chance. We would just need to be careful that we are not recreating the same perverse incentives that the current system encourages, but this is not insurmountable.
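Just to make the mechanics concrete, here is a minimal sketch of what such a hurdle-plus-tiered-lottery allocation could look like in code. Everything in it (pool names, thresholds, win probabilities, function names) is a hypothetical illustration, not a concrete proposal:

    import random

    # Hypothetical hurdle + tiered-lottery grant scheme; numbers are illustrative only.
    POOLS = {
        "A": {"min_papers": 1,  "win_prob": 0.001},   # just cleared the basic hurdle
        "H": {"min_papers": 20, "win_prob": 0.75},    # strong publication record
    }

    def assign_pool(papers):
        """Place an applicant in the highest pool whose hurdle they clear, or None."""
        eligible = [name for name, p in POOLS.items() if papers >= p["min_papers"]]
        if not eligible:
            return None                                # hurdle not cleared: no lottery entry
        return max(eligible, key=lambda name: POOLS[name]["min_papers"])

    def run_lottery(applicants, budget, rng=random.Random(0)):
        """Draw winners at random, weighted by pool, until the funding runs out."""
        entrants = [(name, assign_pool(papers)) for name, papers in applicants.items()]
        entrants = [(name, pool) for name, pool in entrants if pool is not None]
        rng.shuffle(entrants)
        winners = []
        for name, pool in entrants:
            if len(winners) >= budget:
                break
            if rng.random() < POOLS[pool]["win_prob"]:
                winners.append(name)
        return winners

    print(run_lottery({"alice": 25, "bob": 2, "carol": 0}, budget=2))

The key property is that once you clear a hurdle, which applicant gets funded is a matter of chance rather than further fine-grained ranking - that randomness is the whole point of the lottery.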


I am more cynical after seeing people I agree with cite tenuous studies as facts. All of social science is sketchy, especially studies with results that make you feel good. For example, my friend and I read an Economist article that said 10% of American doctors are Muslim. We could not verify the source, but people are going to believe it's true because it was in the Economist.


> All of social science is sketchy... We could not verify the source, but people are going to believe it's true because it was in the Economist.

It seems like your real beef here is with journalism (even the higher-than-average quality journalism that normally appears in the Economist). The fact that some news magazine doesn't include citations for claims in an article[1] has nothing to do with the sketchiness (or non-sketchiness) of "social science."

It is too bad though that the Economist doesn't provide citations for claims like this. Did they pull "10%" out of thin air? Are 10% of new medical doctorates currently earned by Muslims? Did they include all kinds of doctorates (PhD, etc) in the calculation? A simple link to the source for this number would clear that all up...

[1] http://www.economist.com/news/united-states/21679823-despite...


Both.

Paper: "We put three women and three men in a room for 10 minutes and they brainstormed 1.3 extra ideas than a room of 6 men."

Article: "Study proves gender balance leads to better meetings!"


Amusing, but I don't quite agree with this characterization of "all of social science."

But here's something interesting: I've tracked down a source for the 10% claim, and it's absolutely bonkers:

* An editorial[1] in the Detroit Free Press makes the following claim:

| "an analysis of statistics provided by the American Medical Association indicates that 10% of all American physicians are Muslims"

That certainly makes it sound like the AMA thinks that 10% of American physicians are Muslims.

* The source provided for this claim, "Muslim Doctors Abundant, But Muslim Hospitals Non-Existent"[2] is a 2008 post on a site called The Muslim Link.

Here's how that post arrived at the 10% claim:

* 113,585 Physicians in the US (2006)

* 7,000 current and retired physicians are members of the Association of Physicians of Pakistani Descent of North America

* 5.9% of African Americans are Muslim; 3.5% of physicians are African American; therefore there are 2000 African American Muslim physicians

* 7000 + 2000 = 9000; 9000 is just about 10% of 113,585

I shit you not, that's the most straightforward reading of the analysis at [2]. Now it's in the Economist. I seriously hope they got that number somewhere else.

[1] http://www.freep.com/article/20140128/OPINION05/301280121/da...

[2] http://www.muslimlinkpaper.com/myjumla/index.php?option=com_...


http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1490160/

Apparently ~2.7%, as measured by this study in 2005. I'm surprised I could not find Department of Labor statistics on the question. I would say it is very possible that the proportion has increased drastically since 2005: many of the people who have fled Iraq and Syria since then are skilled professionals, including doctors.


There are about 1 million doctors in the US and about 20k new doctors every year. Going from 2.7% to 10% in 10 years would mean that ~40% of all new doctors from 2005 to 2015 were Muslim.

As much as I hate Trump, I don't think this is possible.


I find it hard to believe that there are only 20K new doctors a year if there are 1 million doctors. This would imply that the turnover time for doctors is at least 50 years, and in all likelihood greater given the growth in total doctor numbers over the last 50 years. This is just not plausible.


This says 767,000 and ~29,000:

https://www.aamc.org/download/426242/data/ihsreportdownload....

So that takes the coarse estimate down to 25 years.


This is at least a plausible number. While we still have not established whether the 10% figure has any basis in fact, you would only need 19.3% of new doctors to be Muslim to shift the numbers that far. This still seems high to me, but it is possible.
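For what it's worth, here is the back-of-the-envelope arithmetic behind that 19.3%, using the AAMC numbers above (a sketch only; it ignores growth in total headcount and assumes retirements don't change the religious mix):

    total_doctors = 767_000          # AAMC estimate cited above
    new_per_year  = 29_000
    years         = 10

    muslim_now    = 0.027 * total_doctors    # ~20,700 doctors at 2.7%
    muslim_target = 0.10  * total_doctors    # ~76,700 doctors at 10%
    new_doctors   = new_per_year * years     # 290,000 new doctors over the decade

    share_needed  = (muslim_target - muslim_now) / new_doctors
    print(f"{share_needed:.1%}")             # -> 19.3%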


You can easily google for stats. I am happy to see different numbers.


I am not doubting that what you are telling us is what you have found, just that the numbers have to be wrong, as it is not plausible that there is only 2% turnover in doctors each year.


> All of social science is sketchy

Not any more than any other science that relies on sampling humans (I'm looking at you, medicine). Both start yielding stronger results when they explore the effect of mechanisms that are already known, and both have a bad record of discovering "surprising" effects.

The antidote is simple: never rely on a single study, or even two.


Scientists are seen by lots of people as secular priests or shamans. In this respect science is a victim of its own success. It now commands the attention of a lot of people who expect it to perform some of the social functions that religion used to play. "Science" is more likely to be hijacked by charlatans because of this totemic value.


That's very interesting. I think people need answers from the "answer authority", and once that authority is established, it is treated reverently and pressed to provide answers. As the "priests" are humans, too, there's no reason to believe they themselves are immune to this effect, even when they know how the sausage is made.


> Science and journalism seem to be uniquely incompatible: Where journalism favors neat story arcs, science progresses jerkily, with false starts and misdirections in a long, uneven path to the truth—or at least to scientific consensus. The types of stories that reporters choose to pursue can also be a problem, says Peter Aldhous, a teacher of investigative reporting at UC Santa Cruz’s Science Communication Program, lecturer at Berkeley’s Graduate School of Journalism, and science reporter for BuzzFeed.

Seeing that someone from BuzzFeed has this kind of sober, non-delusional perspective on journalism restores my faith in the idea that the internet might not have ruined reporting.

I used to avoid BuzzFeed like a disease, a cesspool of listicles and ADD-style clickbait, but it seems they have a real goal of producing quality content.

Respect.


One intelligent mind in the combine that is BuzzFeed doesn't account for the horde of mindless drones who write the vast majority of its clickbait material.

I don't see where you see support for the idea that their goal is quality articles.


Very subtle.


Speaking to scientists I know, quite a lot of the problem is university press offices. The bullshit articles start as bullshit press releases, bearing little resemblance to any actual facts in the original paper (or arXiv preprint). Not sure of a way around this one.


There is research backing this up: http://www.bmj.com/content/349/bmj.g7015

They conclude that "Exaggeration in news is strongly associated with exaggeration in press releases. Improving the accuracy of academic press releases could represent a key opportunity for reducing misleading health related news."


The solution is to require the scientist to approve the press release before it goes out, and to give them the power to force errors to be corrected.


Well yes, but this ignores all the power relations and incentives involved. The press offices aren't sending out BS press releases for the sake of it; they're sending them out to get the university positive press, and the university has all the power over the scientists.


This is accurate. The corporatization of academics is a significant part of the problem.


Most journalists post email addresses. I drop them a note when I clearly see a need for improvement, but most of the time they don't apply the suggestion. So in any publication I sometimes prejudge an article by the history of the author.


Bias against negative results is a problem. Frequentist p-values are another, especially when samples are small and biased ("20 undergraduate students") and 0.05 is considered "significant" (a quick simulation below illustrates this).

Now, the system is still designed to correct itself for errors over time, and reinforcing this design is the most important meta-thing we can do. So it's not all doom and gloom. It's just that "over time" is a longer and messier horizon than we would like!
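To illustrate the small-sample point with a quick simulation of my own (not from the article): with n = 20 per group and a modest true effect, most studies miss significance, and the ones that do cross p < 0.05 overstate the effect.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    true_effect, n, trials = 0.3, 20, 10_000          # assumed true effect of 0.3 SD

    significant = []
    for _ in range(trials):
        control   = rng.normal(0.0,         1.0, n)
        treatment = rng.normal(true_effect, 1.0, n)
        _, p = stats.ttest_ind(treatment, control)    # two-sample t-test
        if p < 0.05:
            significant.append(treatment.mean() - control.mean())

    print(f"power: {len(significant) / trials:.0%}")                 # roughly 15%
    print(f"mean significant effect: {np.mean(significant):.2f}")    # well above the true 0.3

The exaggeration among the studies that reach significance is sometimes called the winner's curse, and those are exactly the results that get reported.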


This is easy to fix for a lot of comp sci papers. Require that, to be accepted into a conference, a paper must have a working, open source implementation that confirms its results, preferably in the form of Docker containers or VM images. Publish the reproducible results alongside the papers.


Beautiful irony: the only part of the site that doesn't work well on my old android browser is the popup telling me my browser is out of date.


1. Too many scientists treating correlation as causality

2. Pressure to publish (a requirement of funded projects, or due to organisations' target objectives)

3. Too many scientists


"Why Is So Much Reported Science Incorrect,"

There's my contribution. This is from an illustrious institution like Berkeley?


[deleted]


You just need to use the 3-fold rule: if the study wasn't double-blinded and the effect is not at least 3-fold, then it most likely will not hold up over time.


If science is wrong then science moves on. Is that not the goal of science?


Because journalists and audiences are incompetent. Journalists only get a press release from Big Corporation, which is worded to give the best impression possible. The journalists then need to dumb it down for the masses. Then when Adam tells Bob, he only repeats what he remembers and understands. In a massive game of Chinese whispers, it is no wonder that so much reported science is wrong.



