I think this is good, and may also improve reviews. I doubt that Nature has an issue with substandard reviews, but I have definitely seen it in some of the computer science conferences I submit to and have served on [1]. If reviewers know that their review may be published, I think they may turn in more substantial reviews.
[1] In computer science, conference publications are peer-reviewed and highly competitive. We also have journals, but most novel work appears in conferences. The more theory-oriented parts of computer science tend to publish in journals more, but they still publish in conferences, too.
Published reviews will just make reviews political in new ways. As a graduate student, I helped my advisor review a paper submitted to Nature that was publishing a large biological data set. The internal metrics that the authors provided made it clear that the data was 95% noise. The review was direct and to the point. The paper was rightfully rejected, but ended up in another high profile journal after the authors removed the damning evidence. The corresponding author was one of the most famous and powerful people in a very large field. My advisor was still a relatively junior professor at the time, even if well known. I don't think publication of the review would have changed things for him, but for many others I have known, it might have.
I'd say this is actually a stronger argument for publishing reviews, as a persuasive published review might prevent venue shopping, potentially in multiple ways, depending on when the review becomes public.
But outside of physics, there are many fields of science where double-blind and even single-blind review are meaningless. If you get access to the review directly, you can tell who the reviewer is by what they say and how they say it. If you're a vindictive, powerful person, who cares that you might be wrong?
Good point. However, the primary value of peer review is to reject bad papers. Does science benefit when authors can shuffle a rejected paper to another journal and tweak the results to hide the initial criticism? To me, the real value of reviews is making sure that bad studies never see the light of day (under the current system).
Personally, I am interested in seeing a model where all papers are submitted to a public archive and reviewed publicly (and optionally anonymously). The high profile journals would shift to providing value-added curation, formatting, and commentary.
Indeed. In my experience, and that of other researchers I know, rival groups will often try to block your research from getting published in Nature or Science by asking for unreasonable follow-up experiments. It's sadly very political. Hopefully this will improve things.
I personally know the current Nature chief editor and she totally agrees with my points above. In contrast, I have always found math & CS journal reviews way more fair and rigorous.
Jesus. It's unbelievable that people can be so petty. This is why if I see an academic with tons of high profile publications, it generally signals that they merely know how to play the publication game (and they of course have the desire to do so). This behavior is a red flag imo if it occurs post-tenure.
In many ways the real peer review comes after publication, when more people have seen the work and can react to it. In some fields it is not terribly rare for flaws in the work to get discovered at that point, and rebuttals get written and circulated (or possibly even published in the journal).
The prepublication peer review is not always a terribly high bar to pass. It does help to weed out some of the crap, but lots still get published, especially in less competitive journals.
The only reason the review process is politicized is because academic institutions compete for research money. Preventing your peer institutions from publishing helps you get money for your own. You need to change how research is funded. Follow the money.
Politics have been part of the peer review process for a very, very long time. It's not all about the money, either:
> Fourier was interested in heat propagation, and presented a paper in 1807 to the Institut de France on the use of sinusoids to represent temperature distributions. The paper contained the controversial claim that any continuous periodic signal could be represented as the sum of properly chosen sinusoidal waves. Among the reviewers were two of history's most famous mathematicians, Joseph Louis Lagrange (1736-1813), and Pierre Simon de Laplace (1749-1827).
> While Laplace and the other reviewers voted to publish the paper, Lagrange adamantly protested. For nearly 50 years, Lagrange had insisted that such an approach could not be used to represent signals with corners, i.e., discontinuous slopes, such as in square waves. The Institut de France bowed to the prestige of Lagrange, and rejected Fourier's work. It was only after Lagrange died that the paper was finally published, some 15 years later.
> The only reason the review process is politicized is because academic institutions compete for research money.
Well, publication is also how researchers get promoted by people who can't evaluate their research. There's a within-department need outside of just attracting grants.
Publish your article with accompanying data and programs on sci-hub (github for science).
Tell your colleagues about it if you normally hold seminars before publishing papers, or post to a mailing list. As interested people read your paper they will post comments, open issues, or star your repository. If it gets many stars, Nature will include a link to it in their monthly list of important works, which would help even more people find your repository.
I did not mean the existing Sci-Hub site, but simply something that would be a GitHub for science. It seemed amusing to me to reference Sci-Hub in the name, but unfortunately it looks like that is confusing and diverts attention from the content of my comment, which is that publishing a paper should be similar to publishing a repository on GitHub, and reviewing should be like reviewing pull requests.
Nature would not assign the same value to every star. A star from someone who works at a university and has a large impact factor in the related field would have a much higher value than a star from a newly created account without any published work.
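To make that concrete, here is a toy sketch of what such weighting could look like. Everything here is made up for illustration; the profile fields (publications, field_match, h_index) and the weights are assumptions, not anything Nature actually does:

```python
# Toy sketch of reputation-weighted stars. All fields and weights are
# hypothetical; this only illustrates the shape of the idea.

def star_weight(profile: dict) -> float:
    """Weight one star by the starring account's (made-up) track record."""
    if profile.get("publications", 0) == 0:
        return 0.0                      # throwaway accounts count for nothing
    weight = 1.0
    if profile.get("field_match"):      # works in the paper's own field
        weight *= 2.0
    weight *= 1.0 + profile.get("h_index", 0) / 20.0  # mild reputation boost
    return weight

def repo_score(stargazers: list) -> float:
    return sum(star_weight(p) for p in stargazers)

# An established researcher in the field vs. a brand-new empty account:
stars = [
    {"publications": 40, "field_match": True, "h_index": 25},
    {"publications": 0},
]
print(repo_score(stars))  # 4.5 -- only the first star counts
```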
Stack Overflow is a "popularity contest" and nothing has gone wrong with it so far, and science has always been a popularity contest; I am just suggesting that we give it better tools to measure the popularity.
The way things were done before the current system took over (in the 1980s, I believe).
Not sure what it was exactly. I heard rumors that the journal often did its own review. Either way, if it was good enough for Einstein and Feynman, it might work for others.
The important thing is that the people whose careers might be upended by revolutionary findings can't stop or steal the ideas.
So you want to replace the current peer review system with one you don’t really understand? Have you tried looking into why the system was changed in the first place?
The fact that it worked for Einstein doesn’t make it a better system. Do you think Einstein wouldn’t have gotten his revolutionary work published with the current system?
> Do you think Einstein wouldn’t have gotten his revolutionary work published with the current system?
We'll never know without a control group of current Einsteins...
But yeah, that's the kind of thing I worry about. Einstein did stir up quite a bit of resistance at first.
Below is an interview with a scientist who got his idea "borrowed" and his paper shanked in peer review by the "borrower", who eventually won a Nobel for it.
Not that I'd expect anyone to listen through a 2 hour podcast. It is pretty dramatic if you're crazy enough to go for it!
I can second this podcast. It was really quite interesting, and I'd love to hear other biologists comment on it, especially those who were mentioned. I feel like I can't personally have a very accurate picture until then.
Btw: The old system was the editor reading and reviewing all submissions himself. The new system of only a quick glance by the editor and then further scrutiny by a peer was instituted for two reasons: 1.) too many submissions for the editor to read them all, and 2.) increasing specialization. So going back to the old system would either lead to very cursory and potentially incompetent reviews, or to increasing the number of editors to the point where all current senior reviewers are now editors. The former is completely intractable and the latter is still worse than the current status quo, especially since it removes the practice of having two or more reviewers who specialize in different aspects (e.g. having one reviewer focus on the numerical side of the simulation and another on the statistical side of the comparison with observation data).
In many, many subject areas we already have that through the use of preprint servers such as arXiv. No need to get rid of the upsides of peer review just to get that.
In my (very limited) experience there are still large differences in the quality of the reviews. As researchers do this basically on the side and for free, the amount of time they devote to a review varies greatly depending on their time budget and motivation. While some reviewers seem to go through the paper character by character and point out even the tiniest flaws, others are content with a superficial reading and miss even glaring mistakes.
I'd say publishing the reviews is good because it allows you to see how much effort went into the review process. It will also (anonymously) shame the second type of reviewer into being more thorough, which I think is also good.
As a side note, I think articles should all be available for free, and the work of the reviewers should be compensated or appreciated somehow. Publishing reviews with the reviewers' names (only for those who want it) can help to create such appreciation, because it is a lot of work.
They'll only publish reviews for already accepted papers. This may help reviewers be (or at least appear) less obviously biased against the authors in the review process if the paper has a chance of being accepted. But it won't do anything for papers that 2 or more reviewers want to shut down. Note I'm not talking about low quality research but about papers that are shut down for political reasons (happens often enough in neuroscience at least) so that the reviewers can then do the same experiments and publish before you.
I don't understand how they can get away with this blatant idea-stealing. Isn't there something in the grant process that can punish them for misbehavior? If one team has a laboratory notebook showing they did the work, it seems like a paper trail implicating the fraudster could easily be established.
At the very least, people will be more likely to decline to be a referee for a paper they're less of an expert on or don't have time for if they knew the report would be public.
After seeing some review comments that made it clear the reviewer had little knowledge of the field, or comments that could have been rebutted by a cursory view of the data, this seems like an area that sorely needs improvement.
> an area that sorely needs improvement.
The simplest idea would be to improve peer review by moving it to something similar to a Hacker News-style discussion board (but possibly not open to everyone), where reviewers would comment and authors could respond to questions or clarify things.
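For concreteness, a toy sketch of the data model such a board might need. All names here are hypothetical, and the vetting rule is just one possible policy:

```python
# Toy data model for a semi-closed, threaded review board.
# All names are hypothetical; the vetting check is one possible policy.
from dataclasses import dataclass, field

@dataclass
class Participant:
    name: str
    vetted_fields: set = field(default_factory=set)  # fields they may review

@dataclass
class Comment:
    author: Participant
    body: str
    replies: list = field(default_factory=list)

    def reply(self, author: Participant, body: str) -> "Comment":
        """Anyone in the thread (e.g. the paper's author) can answer, HN-style."""
        child = Comment(author, body)
        self.replies.append(child)
        return child

@dataclass
class Submission:
    title: str
    field_tag: str
    thread: list = field(default_factory=list)

    def review_comment(self, author: Participant, body: str) -> Comment:
        # Top-level review comments only from reviewers vetted for this field.
        if self.field_tag not in author.vetted_fields:
            raise PermissionError(f"{author.name} is not vetted for {self.field_tag}")
        c = Comment(author, body)
        self.thread.append(c)
        return c

# Usage: a vetted reviewer asks a question, the author replies in-thread.
paper = Submission("Foo dynamics", "neuroscience")
reviewer = Participant("r1", {"neuroscience"})
author = Participant("a1")
q = paper.review_comment(reviewer, "How was the sample size chosen?")
q.reply(author, "Power analysis; see the added appendix.")
```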
I’m not convinced that would work. If the problem is the quality of the review it seems like the reviewers need more vetting, not less. I could see the HN version devolving into a lot of wasted effort responding. It would be even worse if it was allowed to be anonymous.
If it wasn’t anonymous and the reviewers were vetted with respect to the field they were commenting on, it might work.
As mentioned in the article, Nat Comm has been doing this for a few years now. I have found it immensely interesting and helpful for my own research to have access to the peer review files (usually in the Supplementary Information [1]). It is a great way to show younger students the process and formatting behind peer review and also have examples of what constitutes a positive review process. As an author, it does feel like a positive as well to have the peer review file out, so that readers who are interested can see how the work developed, and what points or weaknesses the reviewers focused on.
We all know that publishers add little value to science, and Nature here looks to be striving to add even less than usual. If the peer reviews are not of good enough quality, your editors should be handling that! Don’t rely on the readers to review the peer review. Dumping the peer reviews online is just a way to avoid the responsibility of quality control.
Conversely, it's also a way for the general public to see criticisms and laudations of research that might otherwise be accepted with blind faith. While Nature has its pick of comparatively incontrovertible research, sending a signal to the rest of the industry that peer reviews ought to be published seems like a useful step toward better, more replicable, more honest science.
> While Nature has its pick of comparatively incontrovertible research
Nature doesn't actually attempt to select for incontrovertible research, but rather the most groundbreaking/noteworthy research. Which is also why their retraction rate (and those of other "top" journals) is higher than average.
(Disclosure: I'm part of an initiative that explicitly distinguishes between exciting and robust research for this reason.)
From my limited experience, peer reviews are treated as fairly private communications between the reviewers, editor, and authors. We would often use them as an important clue about what to change even if we were going to another journal for publication. This makes the content of the reviews rather sensitive, especially in cases where there is competition for publication on the same subject.
There are also the cases where Reviewer X may ask authors to do a very specific analysis/improvement that may somewhat benefit their own future research. It's all par for the course.
Another aspect of this is that sometimes experienced professors have a fairly good sense of who's giving the review, even though the reviewers may be anonymous.
Since this policy gives authors the choice of publishing reviews, I hope it does not degrade the quality of reviews or make reviewers uncomfortable about writing "rude" feedback.
I don’t publish or review too much, but I do a little. One of the recently frustrating events to me is that I got comments back from a reviewer that made an incorrect factual statement along the lines of “it’s against government policy to share x type of data.” And then discounted large parts of the conclusion as impossible or inappropriate. That was the only substantive comment from the reviewer.
We responded back to the editors refuting it, citing the relevant policies and regulations showing that data sharing for x was allowed.
The next message back was from the editors, declining. No further feedback.
I would have really liked to learn more about why the reviewer thought what they thought, and why what seemed like clear evidence to my co-authors and me wasn’t clear to the peer reviewer and editors.
I’ll never know and I don’t spend many cycles contemplating it, but this new trial in Nature would help me understand the criticism better.
Some comments seem to miss the point that reviews will be published without naming the reviewers (unless they want to), and then only for accepted papers. So for the reviewer this should not change things much.
Also, the NeurIPS conference has been publishing reviews for a long time now, but I haven't heard of anyone talking about a review of a paper. And if you do look at these reviews, a great many of them are of really dismal quality. Maybe things will be different for Nature, idk.
Do you know of any other ML conference/journal (doesn't have to be NeurIPS quality) that also publishes reviews? I know there is ICLR, but anything else? I really like reading them.
Start by publishing the name of the editor alongside each paper. Some editors barely look at papers or even the reviews. They basically just look at whether the reviewers ticked the 'accept' or 'reject' box, and base the decision on that (after possibly getting another review to break a tie). This can go undetected for quite a while, unless the journal has a chief editor who keeps on top of things.
Oh, man. I've been involved in peer-review discussions where the review correspondence is far more voluminous than the article itself. I wonder how the journals will handle the page-count.
That does not solve the problem that more pages take longer to read. Given the pace of science these days keeping up with papers just in your subfield is an actual problem.
Do you actually read whole papers? It's an honest question. I usually scan the introduction, quickly skim through charts/tables and jump directly to conclusions. If I find anything remotely interesting during that process, then and only then I would even consider reading the whole paper.
I like the idea in principle, but it raises the bar for the quality of the review that is going to be published, and might thus discourage people from agreeing to review papers. It is hard enough already to find peer reviewers!
We're getting to the point where there really has to be a financial incentive to do a peer review (Springer gives you a free book, which is a start). If that were the case, I wouldn't mind doing it occasionally when I retire.
> We're getting to the point where there really has to be a financial incentive to do a peer review
IMHO, a free book is insulting, and scientists shouldn't agree to do this work for commercial publishers for free to begin with.
I know many people feel it's their responsibility to provide "services to the community" which is great, but it's ridiculous that commercial publishers get to profit from this when they proceed to claim ownership of academic work and hide it from the public behind a paywall.
If they want to make money off research, fine. But reviewing costs time and money. Academic publishing may well turn out to be not so profitable anymore if reviewers have to be paid. Maybe then we can finally move to university-hosted open-access publishing.
> Research communities are unanimous in acknowledging the value of peer review
> 82% agreed that standard peer review ensures high-quality work gets published
How common is the view that peer review only serves as an institutional gatekeeping mechanism and harms innovation? (as I've heard Eric Weinstein charge on his podcast, The Portal)
EW has a PhD but I can't seem to find any academic publications under his name. That doesn't outright mean he's wrong, but it does mean that he has limited experience with the process. I have several publications and I've reviewed quite a few papers. I would say that peer review is a pretty good filter for garbage. Typically reviewers want to see shiny new ideas, so if anything it overemphasizes innovation and discourages incremental improvements. As for gatekeeping, I don't think so. Again, reviewers want to be WOW'd with new ideas.
In general, I prefer papers from peer reviewed venues over the arXiv. Good work does get put on the arXiv though so I can't ignore it completely.
> Typically reviewers want to see shiny new ideas, so if anything it overemphasizes innovation and discourages incremental improvements.
I know publications fetishize novelty, but wasn’t there research showing that a high percentage of publications are either non-replicable or derivative, and subsequently of little value once you remove self-citations?
Why not have a site where people can openly comment and discuss individual research papers? This would make it more accessible to the public and increase the transparency in the research.
You can have a moderator who keeps the conversation on topic, and in any case it's better than nothing.
The British Medical Journal (BMJ) have been doing this for a while - publishing both the peer-review comments and author responses.
I have published a paper in the BMJ and thought the process worked well and tempered the responses on both sides.
In many communities, the problem with double-blind peer review is that it's pretty clear who the authors are, based on the exact line of research, writing style, et cetera. And in many cases, you can also guess who the reviewer is with some confidence. This means that double-blind is a nice feature for the "big guys" who dominate their communities to show how objectively strong their research is, although in fact oftentimes their friends review their work and know exactly who the authors are. Open peer reviews would at least make sure that there is some degree of accountability.
That's potentially true. However, I will say that as a reviewer for several double-blind journals, I've often thought "I totally know who is writing this..."
All research would indeed benefit from such an approach. Think of transitioning from the lack of version control systems to their appearance. But it is not enough for better scientific publishing. There was quite an interesting discussion [1] about creating something in between Overleaf [2], arXiv [3], Git, and Wikipedia, moreover with the ability to do peer-to-peer review, discussion, and social networking. See also the last [4] article in that series. There are a few implementations, albeit not covering all features, like Authorea [5] and MIT's PubPub [6] (it is open source [7]). See also GitXiv [8] and the Publishing Reform [9] project. Moreover, there is quite an interesting initiative from DARPA to create a scientific social network of a kind - Polyplexus [10].
Note that eLife will also be publishing peer review reports, even independent of "accepting" or "rejecting" a paper (and might even abandon that concept altogether): https://twitter.com/mbeisen/status/1155286615721254912
They've also announced two days ago that they will have a new journal dedicated to aging. This is big news for those of us who want longer, healthier lives.
The peer review process has reached a point where it's more harmful than helpful.
What I want has the following properties:
* Version control, I can contribute to other people's papers
* I can link to a sentence in another paper at some point in time, so that when I read another paper I can jump to that location immediately (see the sketch below)
* Open, like arXiv (anyone can publish anything); however, arXiv discourages uploading personal or class projects, which is a terrible idea, since the best part about GitHub is finding someone's abandoned project that does something you need
* Full-text search
I think that most academic publishing startups approach the problem by indexing already published papers. I think that starting from a clean slate is achievable.
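The sentence-link property in particular seems tractable with content addressing. A rough sketch of one way it could work; the sci:// scheme and every name here are made up for illustration:

```python
# Rough sketch: address a sentence by (paper id, revision, sentence hash),
# so a link pins both the paper version and the exact sentence.
# The sci:// scheme and all names here are made up for illustration.
import hashlib

def sentence_id(text: str) -> str:
    """Stable id for one sentence, independent of its position in the paper."""
    return hashlib.sha256(text.strip().encode()).hexdigest()[:12]

def make_link(paper: str, revision: str, sentence: str) -> str:
    return f"sci://{paper}@{revision}#{sentence_id(sentence)}"

def resolve(link: str, revisions: dict) -> str:
    """Look the linked sentence up in the pinned revision of the paper."""
    addr, frag = link.split("#")
    paper_and_rev = addr.split("//", 1)[1]
    paper, revision = paper_and_rev.split("@")
    for sentence in revisions[revision]:
        if sentence_id(sentence) == frag:
            return sentence
    raise KeyError("sentence not found in that revision")

# Pin a claim in revision "v2" of a hypothetical paper and jump back to it:
revisions = {"v2": ["We measured X.", "The signal exceeds the noise floor."]}
link = make_link("smith2019-noise", "v2", "The signal exceeds the noise floor.")
print(resolve(link, revisions))  # -> "The signal exceeds the noise floor."
```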
This may work for some types of work, but not many.
For example:
> * Version control [...]
If I am working on a non-trivial system made of some novel-ish material doing some hard-ish measurements (I just described most of experimental physics/chemistry/materials science), then there are approximately zero other people in the world who could reproduce it without considerable effort.
This reproduction effort is meaningful/justified once the original group finds some cool effect and ensures it is not spurious, but that's relatively late in the paper's development (and at a point where it would make more sense to write a new follow-up paper rather than contribute to the current work).
You are underestimating the amount of domain knowledge needed for this.
Try contributing to open source software which is currently used for research, you'll realize the issue.
The domain knowledge being how academia works? My point is not to perfectly copy how academia works but to create a new platform that will allow people outside of academia to participate in the scientific process.
Like maybe your skill is that you can fix up poorly written wording. This platform will allow you to specialize in one thing.
They are not about peer review, if by "peer review" you mean the abstract notion of someone reviewing your work.
But peer review is a specific mechanism of getting comments about your work. And unfortunately it is a mechanism that has not been updated since the invention of computers, so it is very inefficient at its job.
Peer review in the 21st century should be as easy and as open as reviewing pull requests on GitHub.
How does one "Filter for error and importance" without writing comments?
I have been a peer reviewer multiple times, and always had to write comments, either to the author pointing out errors and asking for clarifications, or to the journal explaining why the paper is hopeless. It was not much different from reviewing pull requests by email.
Did you have a different experience reviewing papers for scientific journals?
For sure there are comments required to justify the decision/revisions.
I was replying to the idea that one might submit an article for peer-review in order to get comments on one's work from the peer-review process, rather than to attempt to get it into a journal where it might be read by many people.
I am genuinely curious why these comments criticizing the current process of peer review and publishing are being downvoted.
In my ten years of working in academia I have not met anyone who was happy with the current state of things.
So I would be very interested to know: are there many scientists on HN who are happy with the peer review process, are there many programmers who want to protect science, or are these comments terrible in some other way that deserves downvotes?
It’s not that peer review is great (there are lots of problems with it), but rather that the points you brought up have nothing to do with peer review.
Anyway, I agree that academia, and science in academia in particular, has lots of problems
It seems people are kneejerk downvoting you, but maybe the peer review process is a problem. It's worth questioning at least.
Eric Weinstein (for one) has been talking about this a lot and argues that it could be better to work out errors in public, instead of defaulting to consensus.
"The need for verification from others dampens the potential for imagination, which Weinstein believes to be essential for stretching beyond what is currently known to dream the unknowable. This, he argues, is how the sciences actually evolve.
Weinstein points to a 1963 Scientific American article by Paul Dirac in which the theoretical physicist discusses the discovery of a third dimension, itself a revolutionary idea proposed by Newton, and then to four dimensions, as provided by Einstein. Imagination is required to make further theoretical leaps, which might require not listening to present-day consensus.
Weinstein notes that when Crick and Watson published their seminal 1953 paper on the double helix, Nature did not need peer review to allow its publication. "It was an editor's job to figure out if it was worthy of publication." Thankfully, the editors allowed it; that paper revolutionized our understanding of molecular biology. Their work is the basis of all genetic research today."
It's indeed strange to see the parent comment being downvoted. Especially by people who know what github is.
Current academic journals are harmful for science, the same way as trying to use Roman numerals would be harmful for math. They are outdated means of communication that slows down progress.
Science needs its github, and github could easily become that with very small effort.
The current scientific publishing method is pretty much dead. It is absurd, biased, produces terrible results. Promotes inept research, unimaginable waste.
Yet dead as it might be most decision-makers and other leeches of science will pretend that it is just fine and alive. They will continue to do so for many years. Decades perhaps.
It is like watching Weekend at Bernie's.
This attempt of Nature is little more than holding up Bernie, waving his hand. Look, it is moving! It is not dead.
How about you do something constructive instead and propose a better alternative? You're just "poisoning the well" with empty accusations and ignoring how successful the scientific method has been.
The parent comment is not an empty accusation, but a truth known to everyone in academia. The publishers extort huge profit margins from libraries and universities without providing any useful service. See http://thecostofknowledge.com and the articles linked from it for more information.
That's not what the OP said though. I am in academia and most of my colleagues would agree with you about publishers. That's why most of us publish pre-prints on our websites, so that everyone can access our work. We can debate the business model but the OP is saying it produces inept research without really supporting that statement.
Nature is not the problem though, they just happen to be one of the most prominent and selective publishers - many/most? people hate them because they can't get in.
IMHO the entire process of scientific review is wrong. Once millions of dollars in research grants depend on a paper being "accepted", once getting a grant is a zero-sum game, the scientific review is irrevocably corrupted.
The main post is not about alternatives, though - it is about a new thing that Nature is trying out. To which I replied that the whole thing is rotten from the inside.
I don't think this is the right avenue or thread to talk about alternatives.
If you want my opinion, it is the scientific publishing model that is broken, starting right with the structure of a scientific paper.
The way research papers are written dates back to a time where your only source of information was the printed paper.
Every single scientific publication today is overdone, exaggerated, overly verbose, and almost impossible to comprehend. If you make a single, true, and noteworthy observation that helps others, you cannot publish that alone. It has to be dressed up: you have to exaggerate it and bury it deep in "more" content to make it publication-worthy. The inability to get credit for small but important research findings is what, in my opinion, is killing science from within. The big papers are a thing of the past, but scientists cannot let go, because that's what they were taught to regard as important.
It's fighting our nature as social animals and base entropy, the pair of pernicious problems all intelligent creatures in the universe have to deal with. It's a hard freaking problem.