Scientific Regress (firstthings.com)
111 points by eastbayjake on April 20, 2016 | 55 comments



I am a post-doctoral research scientist who has wanted to be a scientist since early childhood (6-7 years old). The structure of how science is practiced seems to have changed substantially in that time frame, not for the better. Some of my childhood naivete plays a role in those perceptions, but many older scientists share that perspective.

A lot of the problems in modern science revolve around the growing focus on bringing large quantities of research funds to your institution (which gets a large cut). Getting grants increasingly requires political clout, because of peer review, and faces ever stiffer competition as funding shrinks. Consequently, labs are getting larger and more hierarchical (i.e., an increased number of long-term post-docs and research scientists). As the organizational structure of science becomes less flat, the influence of those at the top is reinforced and tends to constrain dialogue to fit established ideas. I think that the long-term progress of science depends on placing small bets on a greater variety of ideas rather than doubling down on fewer. Unfortunately, it will always be perceived as safer to fund conventional ideas. Peer review enforces short-term, safe-bet approaches.

Large labs can churn out lots of papers, even if they are relatively financially inefficient. One way to correct this (if you agree that it is a problem) might be to normalize grant scores by previous grant funding to the PI, i.e., (X papers of Y impact) / Z dollars of funding over the past 10 years. Double-blind review of grants might also help, but blinding is relatively easily circumvented.
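A minimal sketch of that normalization, just to make the arithmetic concrete; the function name, impact weights, and dollar figures are invented for illustration:

    # Toy version of the proposed metric: (papers weighted by impact)
    # divided by prior funding. All names and numbers are assumptions.

    def normalized_grant_score(impact_scores, prior_funding_dollars):
        """Output per dollar over a lookback window (e.g., the past 10 years)."""
        if prior_funding_dollars <= 0:
            return 0.0  # no funding history to normalize against
        return sum(impact_scores) / prior_funding_dollars

    # A PI with three papers (impact weights) and $2.4M in prior funding:
    print(normalized_grant_score([7.5, 3.1, 1.0], 2_400_000))  # ~4.8e-06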


The problem is purely one of incentives. If you design a pathological system and make highly intelligent people use it, don't be surprised by the terrible results.

The solution is actually very simple - move from a peer-review-ranked grant allocation system (which is totally gamed by those at the top) to a basic screen-and-lottery system. The idea is to do a basic screen on grant applications to make sure they are scientifically viable and not majorly flawed (at least 75% of grants should pass this test), then put them all into a pool and draw winners from this pool until you have allocated all the money you have.

Let's stop using a system that can't actually do the job (peer review can't separate the top 10% from the top 20%), and which is open to corruption and old-boy networks, and move to one that is at least fair and better than the alternatives.
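To make the mechanism concrete, here is a toy sketch of screen-and-lottery allocation; the tuple layout, screening flag, and figures are assumptions for illustration, not a real funding-agency interface:

    import random

    # Toy screen-and-lottery allocator: drop proposals that fail the basic
    # screen, then draw winners at random until the budget runs out.

    def allocate(applications, budget):
        """applications: list of (name, requested_cost, passes_basic_screen)."""
        pool = [a for a in applications if a[2]]  # basic viability screen
        random.shuffle(pool)                      # the lottery: unbiased draw order
        winners = []
        for name, cost, _ in pool:
            if cost <= budget:
                winners.append(name)
                budget -= cost
        return winners

    apps = [("A", 100_000, True), ("B", 250_000, True), ("C", 50_000, False)]
    print(allocate(apps, 400_000))  # ['A', 'B'] in random order; 'C' fails the screen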


Or you could take the anarcho-syndicalist approach and split the money evenly across the entire pool, with a lower bound on grant size (below which the money becomes inconsequential), and if the lower limit is reached, require all grantees to contribute a portion of their time to growing the money pool.


Two problems with this idea. If you don't have some sort of barrier to entry, then anyone can apply for a grant. This funding is made available on the basis that it will be used for science. I know this sounds elitist (it is), but science is really hard, and it takes many years of learning to reach the forefront of knowledge where you can actually make a contribution. You only want to give funding to those who can actually use the money to do science. It would be a good idea to have another pool open to everyone to see if this barrier is really required - it might actually not be.

The second problem is that if you split the money too much, then you won't actually be able to do any research. Seeing an experiment through to completion requires a not insignificant amount of money (in most cases). If you start to hand out amounts of only a few thousand dollars at a time, nobody would be able to get anything done.


Under an anarcho-syndicalist system you don't need to worry what your neighbor does with their share. You have your share. If you want to work towards a common goal you need to convince them to work with you voluntarily.

I addressed the second point in my original post but maybe that was unclear: if there are so many applicants that individual grants become too small to serve their purpose, you stop accepting applicants. But then you require the applicant pool to spend some portion of their time growing the grant pool until the waiting list is empty. Flipping burgers if need be. Although we're talking about PhDs so I am sure there are better ways to use that labor pool.
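A toy sketch of the split-with-a-floor scheme being debated here; the figures and the simple head-of-queue waitlist are invented assumptions:

    # Toy even-split allocator with a grant-size floor: everyone gets an
    # equal share unless that share falls below the floor, in which case
    # only as many grantees as the floor allows are funded and the rest
    # wait (and, per the proposal above, spend time growing the pool).

    def even_split(budget, applicants, floor):
        share = budget / len(applicants)
        if share >= floor:
            return list(applicants), []      # everyone funded equally
        n_funded = int(budget // floor)      # cap enrolment at the floor
        return applicants[:n_funded], applicants[n_funded:]

    labs = [f"lab{i}" for i in range(25)]
    funded, waitlist = even_split(1_000_000, labs, floor=50_000)
    print(len(funded), len(waitlist))  # 20 funded at the floor, 5 waiting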


Stopping the acceptance of potential grantees just erects an inefficient barrier to entry for new people. The end result of limiting headcount rather than enforcing a baseline of quality is that it would favour the old over the young, which I don't think is a good outcome.


You will have to argue it's less efficient than the current system. You can't just state that it is so.

As for favoring the old, all you have to do is require people to regularly re-apply. There's no reason you need to hop the line after the end of your term.

And you seem to be assuming that the waitlist will be prohibitively long, but the longer the waitlist the more labor you have for growing the grant pool, so it naturally self-regulates.


I agree with most of what you're saying, but I think labs with more professional scientists (Research Scientists as opposed to postdocs and grad students) might be the way of the future and a Good Thing. There are obviously too many PhDs trained today for the academic market, but I think it's also likely we're training too many bioscience PhDs for industry as well. It would require more money for sure, but I think the concept of depending on an indentured-servant class/apprenticeship model developed in the Middle Ages to execute most academic research isn't ideal. I think professionalizing the lower levels of academic science would make for a fairer, less exploitative, and, because the practitioners would be more skilled, potentially more productive scientific academy.


> to normalize grant scores by previous grant funding to the PI, i.e., (X papers of Y impact) / Z dollars of funding over the past 10 years.

One way to game this would be for larger organizations to internally incentivize their researchers to cite their own organization's papers in preference to, or in addition to, other organizations' papers. Thus the larger the organization, the more it can press down on the PI scales.

Alternatively, even without several large organizations, smaller organizations can incentivize citing partner papers in the same way.

Which is just restating Goodhart's law: if you measure something and base monetary rewards on that measurement, the number will be hyper-optimized (even if it's not a good predictor of future performance).

I don't have a good alternative; I'm just pointing out that there are easy circumventions (like the one you mentioned with double blinding).


My father definitely saw lab study fraud while getting his doctorate in the 70s. Academia in the US has resembled the game of thrones since at least the 60s campus revolutions.


>But it adds to this a pinch of glib frivolity and a dash of unembarrassed ignorance. Its rhetorical tics include a forced enthusiasm (a search on Twitter for the hashtag “#sciencedancing” speaks volumes) and a penchant for profanity. Here in Silicon Valley, one can scarcely go a day without seeing a t-shirt reading “Science: It works, b—es!” The hero of the recent popular movie The Martian boasts that he will “science the sh— out of” a situation. One of the largest groups on Facebook is titled “I f—ing love Science!” (a name which, combined with the group’s penchant for posting scarcely any actual scientific material but a lot of pictures of natural phenomena, has prompted more than one actual scientist of my acquaintance to mutter under her breath, “What you truly love is pictures”). Some of the Cult’s leaders like to play dress-up as scientists—Bill Nye and Neil deGrasse Tyson are two particularly prominent examples— but hardly any of them have contributed any research results of note. Rather, Cult leadership trends heavily in the direction of educators, popularizers, and journalists.

As much as I think the essay is great, this paragraph is a terrible ad-hominem argument for the existence of a "cult of science" or why it is dangerous. It doesn't matter if people wear xkcd shirts, Matt Damon uses profanity with "science", or if people like a Facebook page. Laypeople having fun with "science" does not make a cult. A better argument would be to show how people cargo-cult science, and why it causes problems for science. The author fails to do either.

IFL is dangerous not because it is only pictures of scientific phenomena, but because bullshit is constantly posted on the page to be consumed by laypeople. The cult of science is dangerous because it has failed to teach people how to properly evaluate claims, so they believe anything with the word "study" in it. Woo-pushers have appropriated the vocabulary of science in ways indistinguishable to a layperson. Andrew Wakefield resurrected long-dead epidemics by publishing fraudulent findings in The Lancet. This is what makes the cult of science dangerous, not the word "bitches".


I agree with most of your comment, but here are two nitpicks.

>This is what makes the cult of science dangerous, not the word "bitches".

Nobody said that, so there's no need to burn your strawman there.

Science is all about method and the proper use of critical thinking, so you could argue it is directly contradicted by a vapid stream of shitty reposts of forced-meme-tier macros, often inaccurate, shared without a moment's thought because they make for nice virtue signaling (IFL in a nutshell). But you're right, he could have written that explicitly.

There's a nice writeup of this problem on Language Log [0], arguing that science is basically filling the role of biblical parables.

>Woo-pushers have appropriated the vocabulary of science in ways indistinguishable to a layperson.

They are not responsible for that, and honestly, nobody is. Recently, I read the description of some machine learning algorithm that was so filled with buzzwords and dubious physics analogies that I thought it was a clever Sokal, but after some reading it turned out all of it was genuine. That's just how jargon works: you assume that the one using it understands what he is saying, as long as he's using it seemingly properly, but you can't know unless you have a sufficiently good grasp of the semantics.

[0]http://itre.cis.upenn.edu/~myl/languagelog/archives/003847.h...


>They are not responsible for that, and honestly, nobody is. Recently, I read the description of some machine learning algorithm that was so filled with buzzwords and dubious physics analogies that I thought it was a clever Sokal, but after some reading it turned out all of it was genuine. That's just how jargon works: you assume that the one using it understands what he is saying, as long as he's using it seemingly properly, but you can't know unless you have a sufficiently good grasp of the semantics.

I don't think machine learning was a good place to pick an example from. A lot of so-called explanations of ML algorithms basically are Sokal hoaxes, and the fact is that the writer doesn't understand what the algorithm does and how.


My father spent every night of his 30-year career researching semiconductors with a deep belief that he would be fired the next week if he couldn't bring in enough grant money. He is an amazing scientist and you wouldn't be on the internet today without his spadework, but he couldn't give a rat's ass about funding.

One day a former grad student showed him some pictures.

"That's my lab!" He said.

"No, that's the replica I built of your lab in China for 1/100 of the cost."

Science is such a noble pursuit. If only it were separable from humanity's endless supply of greedy pricks.


What's the problem with the lab in China? I don't get it.


The Chinese steal everything they can and reproduce it for cheap. No respect for intellectual property at all. We are being robbed.


Sure... the same way we robbed the UK of its mechanical looms in the 19th century. The US economy was based on IP theft.

It's perfectly normal if you're behind to copy the frontrunner and add your own improvements.


We stole movable type and gunpowder from the Chinese. It's payback time.


Reproducing semiconductors more cost effectively is a very good thing. Intellectual property deserves very little respect, and using an idea is not stealing.


Yeah, you're right. I've read that sometimes (maybe SpaceX?) companies just keep things secret and don't even patent them, because the Chinese will just copy them. Do they actually produce anything new/research of their own, and can/do we steal it from them too?


If your goal is the advancement of human knowledge, is it not a good thing that those following behind the pioneers can catch up so easily?


While these problems clearly exist, I would argue that we're still making more progress than "regress". The whole premise of the article seems to be that the "good old days" are gone, but I'm not convinced that things were much better then; after all, human psychology hasn't changed. For example, psychological bias can be found even in physics, as described by Feynman:

"We have learned a lot from experience about how to handle some of the ways we fool ourselves. One example: Millikan measured the charge on an electron by an experiment with falling oil drops, and got an answer which we now know not to be quite right. It's a little bit off because he had the incorrect value for the viscosity of air. It's interesting to look at the history of measurements of the charge of an electron, after Millikan. If you plot them as a function of time, you find that one is a little bit bigger than Millikan's, and the next one's a little bit bigger than that, and the next one's a little bit bigger than that, until finally they settle down to a number which is higher.

Why didn't they discover the new number was higher right away? It's a thing that scientists are ashamed of—this history—because it's apparent that people did things like this: When they got a number that was too high above Millikan's, they thought something must be wrong—and they would look for and find a reason why something might be wrong. When they got a number close to Millikan's value they didn't look so hard. And so they eliminated the numbers that were too far off, and did other things like that..."


> First Things is published by The Institute on Religion and Public Life, an interreligious, nonpartisan research and education institute whose purpose is to advance a religiously informed public philosophy for the ordering of society.

I'm not quite sure what to think about the bias we might be seeing in this post from this site. This is quite a long essay without a single citation link, on a site that seems likely to be quite biased. I'm not saying anything is wrong with it, but I guess just consider the source? It could very well be fine.


Hi, I'm the author. Just wanted to pop in and say:

(1) I originally had the thing stuffed full of citations and links, but since they wanted to print it in the paper magazine, we stripped all those out. If you want a reference for any particular claim, I'm happy to provide.

(2) First Things didn't pressure me to make any content or editorial modifications to the article. The sole exception was one or two more technical points that they asked me to cut for length and flow concerns (and because many readers don't have a quantitative background).

Any bias in there is purely mine.


It might be helpful to compile a list of those citations online (your own website, perhaps?). This is the sort of essay that might get shared around a whole bunch... it would be a shame for its impact to be lessened because of the inevitable judgment people will have for the hosting publication.

It may be a perfect example of the "secular bias" the publication references, but I too groaned inwardly when I went to check out the "about" section after completing my read. Not because I changed my mind on the content of the article, but because I was imagining sharing it and having to deal with an argument about publication bias (in the media) for an essay about (among other things) publication bias (in the sciences).


That's a great point, I'll try to put something together.


I was interested in finding the paper from which you got this quote: "some non-reproducible preclinical papers had spawned an entire field".

Seems to be from: http://www.nature.com/nature/journal/v483/n7391/full/483531a...


That's a fair point - I was also surprised that the essay appeared in firstthings. However, the essay is really good, and the author is spot on about a lot of his critiques.


Biased against what? Science? How long until we get rid of the 'religion is against science' narrative?


A fair question, but one that requires significant qualification of the virtues and competencies of each. If you consider religion in its entirety, there are elements that certainly do compete with and contradict scientific endeavour. I enjoyed the book Religion for Atheists by Alain de Botton, which weighs those virtues of religion from the perspective of one who isn't religious.


This is a well-known issue in scientific circles. The seminal paper of Ioannidis: http://journals.plos.org/plosmedicine/article?id=10.1371%2Fj...


I saw the name of the institute in the certificate, so I knew the context of the article. Still, I found it to be one of the best accounts of current scientific practice I've read in years.


Many years ago I started out as a Chem major. After 3 years and many hours I quit.

My grades were swell, I still loved science - but I couldn't be honest with myself that I was going to be a real scientist in the end.

So here's what typically goes wrong: you get a lab assignment. Somewhere around the fourth hour or the second week you screw up and grab the lab ass for help. He says "sorry - just keep going."

You won't get a do-over. You can't afford to start over because the expected competence requires immediate good results.

Your lab time is limited and resources are scarce. If you want the grade you'd better "learn from your mistakes" and "be more careful next time."

What about the results? Well you already have a pre-conceived notion of what they should be. Maybe you get them from your mates, or look them up.

Learning from our mistakes is "science code language" for pushing small known nudges and data "massaging" as an acceptable method for passing the course.

Perhaps at one time students really did learn and grow from these mistakes - but the modern concept of failing is simply a quick exit from a highly competitive major.

All the while my own scientific rigor which I was supposed to be enforcing on my own results was slowly corrupted.

I couldn't truly say that my practice of the scientific method was honestly the truthful result of my own observations and methods. Perhaps I was just too anal at the time - so be it.

I worked in an environmental testing lab and the sloppy procedures practiced were sometimes much the same.

Circumstantial? Yes. Take of it what you will. But if my experience is like that of others - such corruption as reported here doesn't surprise me.

Science should be about failure as much as it is about success. Failure is a valid result - but we often fail to accept real and honest failure as scientifically and (most importantly) educationally valid.

That was my draw to CompSci. Our very embrace of failure as a tool for learning. I absolutely love it.


>Science should be about failure as much as it is about success. Failure is a valid result - but we often fail to accept real and honest failure as scientifically and (most importantly) educationally valid.

I studied engineering rather than a 'hard science'. My experience was the complete opposite: a failed experiment was almost seen as a good thing. It certainly gave you more to write about in your lab report. This probably highlights a lot of the difference between engineering and science. We care about methods just as much as, if not more than, the results.

>Your lab time is limited and resources are scarce. If you want the grade you'd better "learn from your mistakes" and "be more careful next time."

We were always encouraged to learn from failure and see it as an opportunity. My undergrad thesis was focused on synthesizing material for Li-ion batteries using a technique called Electrostatic Spray Reductive Precipitation (ESRP). A large chunk of my writeup ended up being about the difficulties I encountered with the technique and how I'd reconfigure the experiment, rather than the results I obtained.

I work at an industrial plant now, and while plant trials aren't exactly laboratory science, there are similar requirements when it comes to experimental rigour. You can be damn sure people won't cut corners, because there are going to be a hell of a lot of questions asked if something from the trial can't be replicated when put into production.


I'm going to read that as short for "lab assistant" :-D


Yes, that's right. It was our term of endearment for the poor grad student that was stuck with the undergrads. He or she had to deal with our mistakes and assist in proper methodology and procuring lab materials.


Certainly, the reproducibility crisis in social science and cancer biology is important, and deserves attention. But this article mostly strikes me as profoundly overstated. Yes, science is done by humans, and is thus imperfect. Yes, science's approach to truth is nonmonotonic. (I think most working scientists would consider this obvious.) Yes, scientists have various incentives to hype surprising results, cut corners, and even cheat. Yes, a lot of bad science gets published. But clearly science is making progress, since we clearly know more than we did 5, 10, 20, or 50 years ago. Are there things that could be done to improve the way science is conducted? Almost certainly (e.g., preregistering experimental trials, requiring power analysis in published papers), but this article is remarkably quiet about such things. In the end, what is the author even really saying? That laypeople should trust scientists less than they do? That doesn't seem correct to me. For instance, on climate change, it seems that normal people trust scientists a good deal less than they should, instead placing their trust in talking heads, politicians, and people who are unambiguously paid by oil companies to advocate on their behalf. So: yes, science as practiced in 2016 is imperfect. But what exactly do you propose replacing it with?


It's academia that is broken, which is being referred to here as 'science'. What do we replace it with? Organizations that do not damage actual science in order to perpetuate themselves, obviously.


In my experience, academia's brokenness has been greatly exaggerated. All medium-to-large human organizations are broken to some extent. And exactly what are the concrete outlines of these mythical "organizations that do not damage actual science in order to perpetuate themselves" that you propose to replace academia with?


I agree that there are real structural issues, especially in certain fields, and especially involving the incentives for funding and publishing. However, I think the article gives a somewhat misleading overall impression, at least from my perspective as a molecular biologist / geneticist.

First, I think that the most prominent studies are the most likely to have issues; generally they're doing something new, often with new methodology. It's here that the most dangerous extrapolations tend to be made. It's also not surprising that the social sciences have such issues; they don't have rigorous tools (e.g., genetics) that can effectively ground their work. That makes compounding issues much harder to catch.

Related is the re-testing issue; in many fields, subsequent work will catch errors in previous work. If you work with a mutant and then someone else works with it, they'll see if it behaves differently than expected from the previous work. Germplasm travels, and it's the ultimate arbiter of truth. This usually does lead to further scrutiny and fixing the issue. The real problem that I've observed isn't that the errors aren't caught, but that a formal retraction isn't always done. Sometimes it just gets contradicted in a subsequent paper (and often with some relish) without ever resulting in a retraction. The editors of that journal clearly have a responsibility here that they are failing to uphold.

However, despite these faults, it's quite clear to those of us with 'boots on the ground' that you can't hide from the data; as long as you're using solid genetics and doing 'real' experiments (e.g. western blots, in situ/immuno-localization, simple gels/pcr, etc.) you can only hide for so long. The exception is if no one keeps working on it, in which case it's probably not that interesting to begin with. This also leads to a deep suspicion of bioinformatics among geneticists because we see how often things go wrong, and what is needed to make it right. Fortunately, genetics can still be used to great effect.

Ultimately, I'm not suspicious of the large body of work in my field. Most of it is based on extremely solid forward genetics that withstands lots of testing. Even now, great value is placed on these 'old school' methods because of how robust they are known to be.


This article appears to be bent on buttressing an anti-scientific religious viewpoint rather than improving science. (Read the last few paragraphs.) The conclusion strikes the same themes that I've seen many times in the religious anti-science movement: science is a religion, a cult, etc.

Sure, statistics are difficult and can lead to incorrect conclusions, but that's why we make sure that a scientific claim is falsifiable. The fact that we can test the claims is where much of the power lies. Let's not forget that a few centuries of the scientific method have made human lives so much better than millennia of religion did.


It seems to me more bent on buttressing a pro-scientific religious viewpoint.


This.


Strange to see HN supporting the views of anti-science propaganda promoted by religious groups. The great thing about science is that it is falsifiable. So, yes, mistakes will be made, especially with so much pressure to publish more papers. But finding the mistakes in these papers is another incentive that leads science to progress, while religion will forever keep pointing fingers without anything better to offer.


I think this is largely a case of 'Moloch' as described by Scott Alexander: http://slatestarcodex.com/2014/07/30/meditations-on-moloch/

In that long (but excellent) essay, Scott points out that self-organizing systems like human society can sometimes arrange themselves in horrible local minima which can be very difficult to escape.

I think the way modern science is organized is decidedly suboptimal. Note that I'm talking purely sociologically: science, as a method, is still by far the best thing we've developed to understand the natural world. However, the incentives around publishing / grants / hiring work are broken in a myriad of ways, from small to massive.

For example:

- High-prestige journals/conferences are basically a crapshoot: http://blog.mrtz.org/2014/12/15/the-nips-experiment.html - consider that getting a paper accepted at NIPS or published in Nature might completely change the way your career shapes up.

- Grant funding is incredibly competitive, and the way grants are awarded is also a crapshoot: https://psmag.com/why-the-national-institutes-of-health-shou...

- The pressure to publish/obtain grants drives people to either make up data to obtain fancy publications, or even to kill themselves: http://www.dcscience.net/2014/12/01/publish-and-perish-at-im...

- We reward researchers for brilliant clean discoveries, not brilliant methodologies. However, the outcome of a serious research project is the one thing a scientist cannot control: science investigates the unknown. If someone explores a plausible hypothesis in a clever way and the hypothesis turns out to be wrong, the scientist didn't do anything wrong.

- PhDs are often used as cheap labour. A very famous PI once bragged that before they got their fancy new robot, they just had 10 Chinese postdocs handle all the plating.

- Private companies make an insane amount of money from publishing scientific research that's carried out with public money, and that's reviewed by scientists (largely paid with public money) who donate their time for free.

I could go on for ages - and so could almost every researcher/ex-researcher on HN. There are a few no-brainer changes - but the problem is that almost everyone in power who could effect change stands to benefit from the current system.


A more relevant post by Scott is The Control Group is Out of Control [1]. The jumping-off point is that parapsychology has high-quality studies with positive results, but we're pretty sure parapsychology is wrong, and this bodes poorly for "high-quality" studies in other fields. But then it just keeps going and going... by the end I always feel like lowering the acceptable p-value ceiling by a factor of 1000, and also like that would be a joke laughed off by the nature of the problem.

1: http://slatestarcodex.com/2014/04/28/the-control-group-is-ou...


Great article. I would say this is inaccurate though:

>"What it really means is that for each of the countless false hypo­theses that are contemplated by researchers, we accept a 5 percent chance that it will be falsely counted as true—a decision with a considerably more deleterious effect on the proportion of correct studies."

The hypotheses that the statistical "hypothesis testing" framework is usually applied to often amount to "two groups of cells/animals/people are samples from exactly the same population". Then there will be assumptions about the distribution of this population, etc. As has been noted by many (see, e.g., Meehl 1967), such a hypothesis is pretty much always false and nothing but a strawman.

http://www.psych.umn.edu/people/meehlp/WebNEW/PUBLICATIONS/0...
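For reference, the 5 percent figure in the quoted passage is easy to reproduce in a toy simulation: draw two groups from the same population many times and count how often a naive test "finds" a difference at p < 0.05. The z-approximation below is a crude, purely illustrative stand-in for a real test:

    import random
    import statistics
    from math import erf, sqrt

    # Simulate experiments where the null is true (both groups come from
    # the same population) and count false positives at p < 0.05.

    def two_sample_p(n=50):
        a = [random.gauss(0, 1) for _ in range(n)]
        b = [random.gauss(0, 1) for _ in range(n)]  # same population as `a`
        se = sqrt(statistics.variance(a) / n + statistics.variance(b) / n)
        z = (statistics.mean(a) - statistics.mean(b)) / se
        return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # two-sided p-value

    trials = 2_000
    false_positives = sum(two_sample_p() < 0.05 for _ in range(trials))
    print(false_positives / trials)  # hovers around 0.05, as the quote says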


The problem is that fields don't advance in a continuous process, but in bursts of stop and go. The present state, with its lack of progress and entrenched schools of thought, is a prelude to another fracking of the unknown. Somebody in all this science will look at the "outliers" and connect the dots. And the truth is that it is not the scientific community that grants the ultimate reward, but society and economics, via application.

You can assemble a thousand followers claiming that electrons are little golden dwarfs running through the metal if fed with potatoes, but the world wants chips that actually work, so what hinders science? Globalization and borderlessness, in an ironic twist: where can that industrial revolution flourish and its reward overcome all prejudices? When the potato religion/ideology is the same everywhere, there is nowhere for the first battery maker to run to.

One of the charlatans is not one. Better to feed a hundred of them, so that the one can prevail.


This article makes some good points, although ones that have been made a lot recently, but it hugely oversells the regress argument.

If that were happening, it should be quite easy to measure in some areas: five-year cancer survival rates should fall, corn yields should drop, and so on. For the most part, we don't seem to see that.

If it were all some sort of faith, and the bad findings were accepted, we should see more-or-less random changes instead of steady improvement.


TLDR: some scientists are doing science by critiquing, with statistics, what some other scientists did. The author is a wee bit confused about the fact that science is needed to weed out the 'bad' science from the 'good' science.


Please, this is not even close to the content of the article.


Actually, it is.


This is an obviously biased text by an author with a fundamentalist religious agenda.

He cites all the well-known and not at all surprising problems with peer review, which mean that a sizeable proportion of published science is wrong -- wrong as in somewhat inaccurate and likely perfectible, and in no way as wrong as only religious ideas can be, i.e., completely false and unfounded. Asimov wrote a great essay about this, The Relativity of Wrong [1].

Indeed, the fact that science isn't regressing, and that our knowledge is getting more accurate with each passing day, is self-evident. Suffice it to compare what we have now with what we had 20 years ago, and it's clear that we are not regressing.

Now, as to the specific examples that are falsely used to prove how wrong this wicked science can get:

1. "one hundred published psychology experiments".

Seriously, anyone in the field knows that these studies are rubbish, and the correlations are obtained after running so many statistical tests that they are clearly due to chance.

2. "half of all academic biomedical research will ultimately prove false".

Again, this is obvious. Many initially positive findings will be due to chance, especially when we are talking about small, exploratory studies. Yes, many initially promising molecules did not live up to the hype, but others did, and spectacularly so. For instance, progress in oncology has been amazing lately. People who had life expectancies of mere weeks in the early 2000s now routinely survive for years with their metastatic cancers.

3. "but are unlikely to mention a similar experiment conducted on reviewers of the prestigious British Medical Journal." [as opposed to Sokal's Social Text hoax].

Apples and oranges. Sokal's article is plain old gibberish, whereas the experiments in the medical literature were meant to assess reviewers' capacity to spot methodological failures in otherwise plausible studies. Most of these methodological failures are weeded out by dedicated statistical and clerical staff.

Indeed, I would argue that some of those methodological problems were pretty minor, e.g. :

"Poor justification for conducting the study; No ethics committee approval; Failure to spot word reversal in text leading to wrong interpretation of results; No explanations for ineligible or non-randomized cases; Safety. No mention was made of monitoring patients for untoward effects; Format. The abstract was not written in the structured form requested by Annals; References. No reference cited was more current than 1989 despite the fact that there were numerous more recent studies; 5.Presentation. There were multiple grammatical and spelling mistakes, including the misspelling of propranolol as “propanalol." [2,3]

[1] http://hermiene.net/essays-trans/relativity_of_wrong.html

[2] http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2586872/

[3] http://www.annemergmed.com/article/S0196-0644(98)70006-X/ful...


I notice your comment also got downvoted. Seems there are a bunch of people on HN promoting this religious-fluff-critique.


disturbing.



