Hacker News
The misleading stream of articles about scientific studies (washingtonpost.com)
72 points by timthorn on Aug 17, 2016 | 55 comments



In the UK, the NHS website includes an excellent, regularly updated 'Behind the Headlines' section that scrutinises health stories reported in the press.

In fact the 'Behind the Headlines' site is a model of clear, factual, unfussy reporting - precisely what you won't find in the national UK press.

Sadly, the 'Behind the Headlines' site is probably far less read than the misleading newspaper stories it tries to debunk.

http://www.nhs.uk/news/Pages/NewsIndex.aspx


The NIH in the US has that also (with the same "behind the headlines" phrase, even!)

http://www.ncbi.nlm.nih.gov/pubmedhealth/behindtheheadlines/


I think we could use a "Science News Snopes" service that evaluates the science stories that go viral while they're still being spread.

We just need a few know-it-all scientists who can write, an office and a bit of funding!



Looks like good work, but I'm thinking more of a rapid response system that will have something up within hours after "Gluten causes GMO in children" goes viral.

With the hope that debunking can stop a story's spread before it has grown to full size.


It's a good idea. I suspect you will want this to be selectively applied, though. Not that I know anything at all about your opinions or positions, but most people have sacred cows they do not want questioned. Also, there is A LOT of BS being published these days. Very nearly all biomed papers can be torn apart and dismissed after a day or two because they contain the same statistical and logical errors over and over.


BBC Radio 4 has a show called "More or Less" that I'd recommend to anyone interested in this topic. I remember an episode on the advice on drinking and how the answer has flip-flopped over the years.

http://www.bbc.co.uk/programmes/b006qshd


Wow. I've never heard of that. I read a couple and it's really great: accurate, uncomplicated, and to the point. Definitely gonna share this around!


This is brilliant, thanks for sharing!


My alma mater is doing groundbreaking research on sexting. Never thought I'd see the day.

In all seriousness, it is a twofold problem. Yes, the media is partly responsible, but there is also a huge internal problem with the scientific and academic community (which the article rightly brings up): positive results are what net you publication; no one wants to hear negative results. However, if we actually want to progress scientifically, experiments that yield negative results are as important as, if not more important than, any positive ones. Unfortunately, negative results and failed experiments just aren't sexy. They don't sell. They don't get people excited. As a result, publishers don't want them and leave us with an overabundance of positive results, some of which are even gamed. This skews the overall picture of scientific progress and feeds the sort of media behavior this article calls out.

Another problem, which again the article does a good job of raising, is the major barrier to entry preventing the average Joe from reading publications directly rather than reading the digestible media summary of their findings. I think it stems from an education problem. In an ideal world we'd be able to educate everyone to a certain level of scientific literacy, which would at least enable the everyman to decipher some of the more arcane jargon of the respective scientific fields, but I'm not entirely sure an educational program that could achieve this is even possible.

Luckily, I do know that, as of at least about a year ago, this very problem was at the forefront of work in the philosophy of science, and many philosophers and practicing scientists alike are (or perhaps were) doing what they can to devise ways to clean up the system a bit, improve peer review practices, etc. It's not an easy problem. The scientific community in all its aspects is quite a mangy beast.


It might be helpful to mandate that every submission to an academic journal be accompanied by a maximum 800-word "layman's summary" directed at the media and the public. Something more comprehensive than the abstract/preface, without any terms that wouldn't be known to the general public.


A lot of journals are starting to do this. For example, PNAS (a top-ranking journal) introduced a required "significance statement" in addition to the abstract, which is meant to be understood by a more general audience. So do the PLoS journals, and many others.

http://www.pnas.org/site/authors/procedures.xhtml#significan...

http://journals.plos.org/ploscompbiol/s/submission-guideline...

PLoS says "The goal is to make your findings accessible to a wide audience that includes both scientists and non-scientists", and PNAS says it is meant to "explain the relevance of the work in broad context to a broad readership."


This sounds like a good idea. But, it may not be possible in most of the hard sciences. Imagine writing an 800-word layman's summary for some results in string theory.


By 'most of the hard sciences', you seem to have chosen a sliver of one of the hard sciences, which plenty of people in that science think is just smoke and mirrors. If a 'general audience abstract' is a good idea, it shouldn't be balked at because of a single theoretical field which is quite controversial. Let other journals handle those papers :)


Unless string theory all of a sudden gained everyday applications, I think I would aim for fields like biology, chemistry, and similar first.


Yes, biology makes the most sense, as that's usually what gets "translated" into clickbait-style headlines like "Quinoa cures cancer".


This week I read a headline that could be interpreted as saying green tea would prevent Zika infections.

In reality it was a chemical of some sort found in green tea (and elsewhere) that seemed to counteract Zika. But it required concentrations that you would never find in nature.


I'm personally against such an idea:

- The journalist isn't supposed to be a copy-paste robot. Even if the press release contains big claims, journalists should be blamed if they don't do fact-checking. A recent (and non-scientific) example: if a conman claims to be the founder of Bitcoin, serious journalists aren't supposed to just repeat his claims.

- The "layman summary" as in "relevance" is impossible to write realistically for most basic research. Is research relevant if it can be applied in 10 years? In 20 years? In 100 years? How can you even estimate when it can be applied? A realistic estimate of the application of research is possible only if the research is targeted at short-term goals, which is then more engineering than science. But if you take the long-term view, then what could be "possible" is typically not what the public understands ("but you said we would be able to use x, and where is it now, you liars?").

- Scientists should actually also be motivated somehow not to exaggerate the applicability of their research, as a counterweight to their reasonable and real motivation to do so. We should be able to let researchers do research even if we don't expect immediate "use." Otherwise we aren't supporting science, just solving the shortest-term problems.


I can give you the layman summary for almost every study that involves any kind of statistics:

"Is this a single study or a systematic review / meta-analysis? If it's the first, then ignore it. It's more likely false than true."
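The "more likely false than true" claim has a simple Bayesian backbone: the chance that a lone significant finding reflects a real effect depends on how many tested hypotheses are true to begin with, on the significance threshold, and on statistical power. A minimal sketch in Python (the prior, alpha, and power values below are illustrative assumptions, not field data):

```python
# Positive predictive value of a single "significant" result:
# P(hypothesis true | result significant), via Bayes' rule.

def ppv(prior, alpha, power):
    """prior: fraction of tested hypotheses that are true;
    alpha: false-positive rate; power: true-positive rate."""
    true_pos = prior * power          # real effect, detected
    false_pos = (1 - prior) * alpha   # no effect, false alarm
    return true_pos / (true_pos + false_pos)

# Idealised single study: 10% of hypotheses true, alpha=0.05, 80% power.
print(ppv(0.10, 0.05, 0.80))   # ~0.64, already far from certainty

# With flexible analysis ("p-hacking") the effective alpha balloons;
# at alpha=0.20 a significant result is more likely false than true.
print(ppv(0.10, 0.20, 0.80))   # ~0.31
```

Under these toy numbers, routine analytic flexibility is enough to push a single significant result below the coin-flip mark, which is why a meta-analysis pooling many studies is the safer bet.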


There was some recent analysis indicating that, for psychology, the correlation between the first study of an effect and the eventual meta-analysis result was around zero, i.e., such that an initial result conveys zero new information about an effect. That is fucking mind-boggling.


>In an ideal world we'd be able to educate everyone to a certain level of scientific literacy which would at least enable the everyman to decipher some of the more arcane jargon of respective scientific field

Is it ever accurate to say "hey, many of these papers are written in an excessively jargon-filled way, because the intended audience is very narrow?"


Steven Pinker doesn't think so (see any of his talks or articles relating to his book The Sense of Style and the curse of knowledge). It would be much closer to the truth to say that many of these papers are nearly opaque to other specialists in the same field. There is, of course, the jargon of precision, which is at least in a sense unavoidable, but most of the problem actually stems from failing to understand that most people, even in your own narrow field, haven't been recently immersed in the same small body of highly-specific literature, etc., that you've been swimming in for the past several weeks, months or years.


Maybe there should be a major journal published with purely the best studies done that had negative results.


I know the inclination is to change link-baity headlines, but I think the original hed, "The media is ruining science", is far more to the point than the current hed, "Scientific community making an effort to end the stream of misleading articles". The problem is most definitely rooted in the media, which not only disseminates misinterpretations, but also provides harmful incentives for scientists and journals to cheapen their work.

Furthermore, unlike what the current hed implies, there's not much evidence of a concerted systemic effort by the scientific community to fix this, never mind "end" it. I'm not saying they don't want things to improve; I'm saying I don't see a concerted, organized effort to make significant improvement.


Ok, we changed the title to a phrase from the article that doesn't claim anything about who is to blame.


I had the opposite reaction, which is that the problem is most definitely not rooted in the media. The problem is with science itself, which is fundamentally broken as a community.

I agree that the media are maybe perpetuating certain problems, or are providing dysfunctional incentives, but the root problem is with the scientific and academic community, not the media.

Scientists run the journals that have gone amok, are in charge of the tenure processes that reward outdated practices, and run the grant systems that are broken. Universities' public relations departments bait media to hype their scientists' findings. Etc. etc. etc.

I was in an exchange just now where this played out with a project. I don't want to get into details, but the media were pushing hard for a misleading--and blatantly unethical--way of pushing something to the public. So you could blame the media. But anyone who knows anything about the situation knows that certain PIs groom things, arguably over years, to produce this very type of outcome, in part because they can sit back and reap the benefits while claiming to wash their hands of any responsibility when this happens. They manipulate the situation, and then push the blame onto others, including the media, when it suits their interests.

Obviously not everyone is like this, but it's naive to believe it's not a systemic problem.


Scientists don't (typically) run journals, publishing companies do. Scientists just write and review papers for free, and they have to pay for access. That's hardly "running" anything.

Scientists do tend to decide who gets grants, but most people I know don't consider those decisions to be too bad per se. The main problem is that the lack of funds causes grant award rates to be so low that very good candidates can be left out, plus when resources are so low the random factors inherent to assessment become too important (with a 50% acceptance rate, a very good project may score lower than it would deserve but still get in, while with a 10% acceptance rate if a single panel member dislikes your project you are likely out).

Not saying that the scientific community doesn't have its share of the blame for its problems, but an important part of the blame is political as well. And scientists are too busy trying to survive in an extremely competitive environment to try and change things by themselves. Honestly I think most of science's problems would be solved just by allocating (significantly) more money to science. Then maybe we would actually worry about doing good science and incentivizing others to do so, rather than about finishing our reviews ASAP so we have time to hastily write another journal article that we need to score points for grants/tenure and not bite the dust. A degree of competitiveness is good, but competitiveness in science is now absolutely over the top and creating all kinds of evil incentives.


Isn't blaming everything on the media a bit too one-sided and simplistic?

We live in an era where the diversity of media sources has exploded. Media companies compete fiercely with one another for clicks, readers, and ad revenue.

Perhaps the root problem is that the media behaves the way it does because its consumers and the structure of the market make it impossible for it not to.

TL;DR: click-baity and inaccurate news exists as a reflection of consumer preference


Feedback loops strike again.

The fault is in the interaction between media and the audience. People tend to react better to some content than to another, which leads to that content making more money, which leads to more of such content. The process is optimizing for exploiting cognitive biases and heuristics the audience uses.

Now the question is - what's easier to tackle? Issues with brain architecture that would require medical interventions we barely can imagine to correct, or business incentives of the media companies?


Aren't both brain architecture issues?

Corporate incentives are built around converting avarice into societal progress, and firms are incentivized to find the shortest path to profit.

It's probably easier to make laws for one of those scenarios though.


A bit, yes, because the actors are humans too. But the core of the issue is more game-theoretic - shareholders cannot all coordinate to avoid taking some "shortest paths" that end up with unwanted consequences. Tackling this, hard as it is, is still easier than rocket neurosurgery.


By this would you mean things like business incentives and tax breaks?


> Corporate incentives are built around converting avarice into societal progress

I don't see how you can turn that into a "brain architecture" issue.


Shareholders want to see profit increase. If other firms find ways of making more money with fewer morals, then the pressure on your firm to do the same increases, until you either succumb or get bought out.

Shareholders are driven by the same dopamine system that normal people are, and are unlikely to sit by and change their incentives for the sake of morals.

The people pushing journalists to drop long form, in favor of revenue and attention generating articles with lower costs are shareholders, correctly responding to clear market incentives.

There's an excellent article from Mother Jones which provides a possible solution to this problem (it's been shared on HN but there was no discussion when I found it): http://www.motherjones.com/media/2016/08/whats-missing-from-...


I don't think the media are solely to blame, big-picture wise, but for the scope of what this article covers, I think focusing on the media is appropriate. This is not to say that science can't revamp conventions and practices on its end. And there's no perfect balance between media freedom and media scrutiny of science.

That said, one could make the argument that it's not the media's fault that in any given month, a photogenic (and perhaps blonde) woman has been kidnapped or murdered. However, it is the media's fault when they saturate the airwaves and print with such incidents to the detriment of other news and information, even if readers gobble it up.


What you're describing is therefore a failure of journalistic ethics. An important part of any set of professional ethics is not giving the consumer what they want if what they want violates a standard of the profession; in this case, the standard that what is reported has a significant chance of being true.

On the other hand, it's unsurprising that professional behavior has a difficult time competing with unprofessional behavior, and that professional behavior is hard to maintain.


I think this is absolutely true, and that consumers have themselves to blame for all kinds of bad stuff they buy.

But it is also true that opinion polls show journalists are trusted only about as much as estate agents. And rightly so.

Media professionals sometimes see themselves as noble experts standing outside the internet echo chambers, able to judiciously filter the info-cacophony to the benefit of the public.

They aren't.


> Media professionals sometimes see themselves as noble experts standing outside the internet echo chambers, able to judiciously filter the info-cacophony to the benefit of the public.

OTOH, to the extent that journalists view themselves in this light, it helps them enforce and maintain some sort of standard for themselves.


This is not just limited to science articles, and it's very serious because it leads to people walking around with incorrect and often very dangerous ideas about fitness, diet, politics, and just about every aspect of the world.

I think the solution will be media outlets that not only insist on honest reporting, maintain high standards, and admit mistakes, but that dedicate a lot of their space to directly and savagely mocking unscrupulous reporters from other outlets for their shoddy reporting or purposeful disinformation.


So much this.

I've been looking at a slew of issues, starting from the question "what are the big problems?". Without getting into what I've concluded (and find considerable agreement, as well as some disagreement) on what the big problems are, there's the dynamics behind these.

The slogan I'm blogging behind is "Progress, Models, Institutions, Technology, Limits. Interactions thereof".

It's the interactions especially between Institutions, including the media, politics, commerce, religion, and education, and our Models -- the theories and frames we carry with us as to how the world works -- that strike me as most critical.

I'm not limiting this to scientific models, but every crazy idea people have as to how and why the Universe is the way it is and does what it does.

But yes: one consequence is that we end up with people who have very incorrect and very dangerous ideas. In positions of extreme power -- heads of companies, leading popular movements, on radio and television, as heads of state.

The history of the concept of "the marketplace of ideas" is an interesting one (as well as its own interplay with concepts of free markets, and of "free market" as "completely unregulated market"). It's a tangled, tangled, tangled web.

We also find many of the same people and organisations turning up. Adam Smith. John Stuart Mill. Oliver Wendell Holmes. The Mont Pelerin Society. And more.

Makes you wonder.


Reporters generally don't mock each other. It's a tough business, and you never know who can help you get a job. Heck, after Dan Rather got caught peddling obviously phony documents to the public his colleagues gave him an award.


>There’s also the legitimate concern about what happens if methodologies come out with null results — that is, if the only result a study can produce is to prove the hypothesis incorrect. Such a paper could end up being extraordinarily boring or not answering the essential question at issue. Imagine a headline like: “We don’t know which gene puts you at a greater risk of depression, but we’re pretty sure it’s not the gene we thought it was.”

Isn't that the whole point of correcting for publication bias? The reporter saying that boring results are a problem seems to perfectly capture the actual problem.


The science news cycle, illustrated by PHD Comics: http://www.phdcomics.com/comics/archive.php?comicid=1174


Other articles recommended to me at the end of this article

* Trump is entirely delusional about why he’s losing. His new campaign shakeup proves it.

* Our robot panic is overblown

It seems like fixing the editorial problem is more fruitful than having researchers obfuscate their findings.


I'm fully confident that we're going to look at the robot/AI panic the same way we do the Satanism panic of the 1980s. It's almost completely irrational, but touted by enough influential individuals to be believable.

It's weird to witness live mass hysteria like this.


Were there as many high-profile (respected?) people involved with the Satanism panic as with the 'AI panic'? Honest question.


Great point. You could actually probably say the same thing about any panic today.


A thought I've had is for journals to require two abstracts for a paper. The first is the traditional one, directed towards a technical audience. The second is akin to a press release, expressing the scientist's own summary in terms that a layperson can understand, including an assessment of the robustness of the result. For instance, "this result needs further work to establish its reproducibility."

Of course a journalist can still give their own take on it.


PNAS (probably the highest-impact science journal) has something like this. Every Proc. Natl. Acad. Sci. paper has an abstract, which can be quite technical and intended partly for insiders, but also a "Significance" statement, which is highlighted at the top of the article, is limited to 120 words, and is intended as a near-layman summary of the article. From the author guidance:

"Authors must submit a 120-word-maximum statement about the significance of their research paper written at a level understandable to an undergraduate-educated scientist outside their field of specialty. The primary goal of the Significance Statement is to explain the relevance of the work in broad context to a broad readership. The Significance Statement appears in the paper itself and is required for all research papers."

Example: http://www.pnas.org/content/early/2016/07/27/1603632113


Doesn't Science have a similar feature for articles published in it? A short, educated-layman-level summary of the meaning of the work?


Yes, Science partly does this too (I'm a subscriber). They have relatively simple abstracts that try to serve both purposes -- summary for laymen and experts -- but they are tilted more towards a non-specialist summary. The layout of PNAS really emphasizes the Significance statement.

The best thing about Science for outsiders is the "Perspective" article that sometimes accompanies high-profile research articles. The "Perspective" places the more in-depth research article in context. It can be more valuable for an interested non-specialist than the research article itself.


I think many times the researchers authoring the original articles merit as much scrutiny as the journalists who report on them. In the US at least there are many incentives to publish eye grabbing results, often at the cost of scientific rigor.

Some of the most successful research groups I know spend a significant amount of time essentially maintaining a brand and consistent message. The brand and reputation of the group is incredibly important for obtaining funding. You can bet if a study finds something off message or fails to reject the null hypothesis it will be set aside and not reported. So while journalists have an incentive to report the most novel and eye-catching results, that motivation has definitely trickled down to the level of guiding research inquiries and influencing data analyses.


The only thing that is going to change anything is if the consequence of 'misleading' articles being published about your work is that you don't get any more funding, or your current funding is clawed back (this would be more effective, since the administration would have a fit). Only money will stop this problem.


better headline: popular science journalists also racing to the bottom just like every other kind of journalist


The irony of the Washington Post.



