
Academia is almost entirely based on trust and reputation, which we're discovering is not a useful heuristic if your end goal is a net gain in uncovering the truth of the phenomena around us. If you ask me, credibility should be based on reproduction of results rather than on the reputation of the author, the name of the sponsoring institution, the journal title, or a vague "peer reviewed" badge. New papers should be untrusted by default until several reproduction attempts have succeeded. This would incentivize authors and scientific institutions to produce science of quantifiable quality.


Granted, my knowledge is limited to physics and chemistry, which I know are small potatoes compared to the life sciences.

With that said, a problem with replication is that a given lab tends to gear itself up for one or a small number of research programs that could span years or decades. Experimental apparatus are developed, knowledge and techniques are passed from one student to the next, and so forth.

My thesis project involved more than a quarter million dollars worth of commercial gear, plus a lot of stuff that I built. By the time I was finished, some of my tools were already obsolete.

If one lab publishes a result, another lab would have to gear itself up to replicate that result, which would probably include a capital investment plus a lot of time spent making beginner mistakes.

I don't believe strict replication is necessarily the best or only way to advance science. It produces reliable factoids, but they are still factoids. Physics has made its greatest strides when experimental evidence, that may be riddled with mistakes, supports the development of unifying theories of ever increasing power and accuracy.

Preferable to strict replication might be to let researchers study overlapping domains, so that several projects attack the same problem, but possibly from different angles.


Those who do the replication are also very unlikely to get published, whether they succeed or fail.

My impression is that replication efforts often happen when one group tries to build on another group's work and they get frustrated enough to retry the original assumptions.


That's a problem of the current system we have, not a fundamental issue like the one he's portraying. It's one of the problems we need to solve if we want science to be better.


Totally agreed. Please fund us enough to do replication at that level.

(Although, at least in my field, replication gives less bang for the buck than improved measurements. If I measure a thing, and then you measure it with 10x higher precision, you're not just replicating my result, you're moving the field forward.)


Individual papers are generally not trusted, which is why literature reviews are a thing. https://en.m.wikipedia.org/wiki/Literature_review

The issue is not on the science side, but how results are communicated to the general public. Administrators tend to add as much hype as possible, and reporters strip out all the important details.


Is this assessment (reviews over papers) based on your own experience?

As a scientist I would say quite the opposite is the case: reviews are sloppy in their citations, per editorial guidelines have to be written in a positive, optimistic tone, and often overstate the claims of the cited articles.


I am not suggesting they are more accurate, or even necessarily as accurate, as individual papers. Rather, their existence demonstrates the untrusted nature of individual papers.

Personally, I often find them a useful starting point on a topic. At best they capture the field at a moment in time; at worst they're nearly useless. But that's just me, not everyone in every field.


I've gone through the phase of distrusting individual papers and relying on reviews.

Only to again realize that reviews are also often vehicles of bias perpetuated by the authors where they subtly amp up papers by themselves and their "clique" of researchers.

The only real solution is to read as much as possible, be as critical as possible, and never trust authors to interpret their results with full honesty.


Every fraudulent paper adds to the scientific noise rather than to the scientific information.

Not only are they wasting their own funding, they are also wasting the time and money of other people, who often can't afford to ignore prior work. At the very least, such papers come up in peer review.

And apparently, fraudulent papers can get cited quite a lot in practice!



