Peer review at CS conferences is normally double-blind: authors and reviewers are anonymous to each other. However, the conference organizers themselves know who the reviewers and authors are, and the conference management websites are set up to avoid assigning reviewers to papers where they have a conflict of interest with an author.
This is usually something like: anyone whose email address has the same domain as yours, plus anyone you have coauthored any paper with, both going back a few years. If you are ethical, you will report any additional conflicts that this doesn't catch.
So if a cabal wants to make sure they can positively review each other's papers, they have to make sure they don't trip those automated filters. This means they must have been at separate institutions for a while, and must avoid publishing any papers together.
Then, during the review process, they can "bid on" (put themselves forward as reviewers for) each other's papers and ensure they give only positive reviews.
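To make the filter concrete, here's a minimal sketch of that kind of automated conflict-of-interest check, in Python. The Researcher structure, the field names, and the five-year window are illustrative assumptions, not any particular conference system's actual implementation:

    from dataclasses import dataclass, field

    @dataclass
    class Researcher:
        name: str
        email: str
        # (coauthor_name, year) for every paper this person has coauthored
        coauthors: set = field(default_factory=set)

    COI_WINDOW_YEARS = 5  # assumed lookback window; varies by venue

    def has_conflict(reviewer, author, current_year):
        """True if the reviewer should not be assigned this author's paper."""
        # Rule 1: same institutional email domain.
        if reviewer.email.split("@")[-1] == author.email.split("@")[-1]:
            return True
        # Rule 2: coauthored a paper within the lookback window.
        return any(name == author.name and current_year - year <= COI_WINDOW_YEARS
                   for name, year in reviewer.coauthors)

    r = Researcher("R. Reviewer", "r@uni-a.edu", {("A. Author", 2019)})
    a = Researcher("A. Author", "a@uni-b.edu")
    print(has_conflict(r, a, 2023))  # True: coauthored 4 years ago

Note that a colluding pair defeats both rules simply by sitting at institutions with different email domains and never publishing together, which is exactly the loophole described above.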
In small fields, it's often really not very hard to guess who the authors are, just from the topic and which papers they cite. Even with double blind.
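As a toy sketch of that guessing heuristic (all names and the reference format here are hypothetical): just counting which author appears most often in a submission's bibliography already goes a long way, because new work tends to cite the same group's prior papers:

    from collections import Counter

    def guess_authors(cited_author_lists, top_k=3):
        """Rank authors by how often they appear in the references.

        In small subfields the submitting authors often dominate this
        count, since new work builds on the same group's prior papers.
        """
        counts = Counter(name
                         for authors in cited_author_lists
                         for name in authors)
        return counts.most_common(top_k)

    # Hypothetical reference list of an "anonymous" submission.
    refs = [["A. Smith", "B. Jones"],
            ["A. Smith", "C. Wu"],
            ["A. Smith", "B. Jones", "D. Patel"]]
    print(guess_authors(refs))  # [('A. Smith', 3), ('B. Jones', 2), ('C. Wu', 1)]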
From my experience, I don't think this is true. I've been on several program committees, and have managed a couple as well. Usually, if you're above the first-level reviewers (e.g. senior PC, track chair, PC chair), you can see the actual author and reviewer names.
Most of the time, when an author later says (formally through the review system, or informally in casual conversation) that "I know it's this reviewer who asked me to cite their own paper", they are wrong. Their main evidence is usually that the reviewer asked them to cite two papers that share an author.
But of course I can't tell them otherwise because of confidentiality, so they keep on believing that, perpetuating the myth. Perhaps someone could aggregate some statistics on this, but I genuinely believe that reviewers suggest their own papers only about 20% of the time, and when they do, it's because their paper was highly relevant.
I agree. In my experience, the request to add citations usually comes because there is fairly obvious context missing, and the reviewer wants to make sure the connection/comparison is made. Sometimes it's the reviewer's own papers, but that's selection bias at work: if they didn't work in the area, they probably wouldn't be reviewing it.
I think there are two aspects to this. The first, as you mention, is that reviewers often just ask authors to cite relevant papers (and it's not unusual that these come from the same author, because the fields are small).
The other aspect is that there are some black sheep who will ask you to cite a whole bunch of barely relevant papers (often >5), all from the same group. With those, you can see very clearly that they are abusing the system. I suspect these instances cloud people's judgement of the first aspect, so they interpret every request to cite as fraudulent.
As general advice: if you suspect reviewers are trying to push their citation counts up, a note to the editor (or to the PC, if it's a conference) will typically help, because most don't look favourably on this.
Makes sense. Also, the automated anti-plagiarism systems sometimes list references because of topic similarity or a certain degree of textual match, not necessarily because anything was copied. That helps the reviewer suggest such references to the extent they're relevant.
It's even easier to trick the authors into believing that you (the reviewer) are someone else.
E.g.: ask the authors to cite two papers by another researcher (which was mentioned elsewhere in the thread as a giveaway of the reviewer's identity), ask them to compare against that researcher's results more thoroughly, use a writing style typical of that researcher's country of origin, etc.
It's especially easy if you are familiar with the other researcher's work, for instance because you have also reviewed it.
Not just in "small" fields, either. A lot of the time, new work is just a further development of the same author's previous work. So if you're well read in your field, that "double-blind" peer review is not blind at all.
I'm not saying we need a cure. I'm just saying we can all stop pretending that those reviews are "double blind" to the extent currently assumed by many.
The cure for that is a field where independent people can also contribute to the further development of existing work, by making sure that the resources and code used are published along with the paper.
Of course, this is hard to do with cell cultures and such, and for large databases or compute-intensive tasks it's not quite feasible. But a surprising amount of what goes on in CS and neighbouring disciplines can work that way, once you get past the reticence of individual established authors and set it as a goal for your (sub-)field.
The concept of double-blind is gone, because papers go up on arXiv the second the work is done. Within a small research domain, everyone knows what the other person is working on.
You don't know the reviewers, but the reviewers often know who the authors are.
This isn't true at all. In AI, ML, NLP and computer vision, for example, conferences tend to be double blind. In robotics they tend to be single blind. Most people are not putting their work up on arXiv before publication, so you can't find that out either.
Often those double-blind conferences contain explicit exceptions that allow posting on arXiv. Check the Call for Papers for this year's NeurIPS, for example. Obviously you aren't required to put anything on arXiv, but lots of people want to stake out their territory and will post either a preprint or an incomplete version of the paper (e.g. with an example missing).
> In robotics they tend to be single blind.
I just wanted to point out that this does vary a little bit. RSS was double blind this year but ICRA was single blind last I checked.
The NeurIPS exemption isn't specific to arXiv: "The existence of non-anonymous preprints (on arXiv, social media, websites, etc.) will not result in rejection." People can post anywhere. Most people don't. I don't check carefully, and as a reviewer I explicitly don't look, but when I do see a preprint it's rarely any different from the submitted version. People do stake out territory, but it's more that they do it with shitty papers that no one wants to accept, because they want to "own" some phrase that might be popular in the future. The NLP community is very unhappy about this in particular.
> I just wanted to point out that this does vary a little bit. RSS was double blind this year but ICRA was single blind last I checked.
It has been this way for a few years now. RSS & CoRL are double blind, ICRA and IROS are single-blind. I can't think of the last time I submitted to an ML, AI, NLP, or vision conference that wasn't double blind. It's sort of a strange robotics thing :)
Depends on the author. Some people don't post to arXiv until after they've submitted, or after they've gotten initial reviews back. Others like to post to arXiv proactively to get feedback prior to submitting. Just depends on the person.
Reviewers are encouraged not to read anything that might be such a paper, though, for whatever that's worth.