My idea would be to restrict access to certain areas, such as comments and reviews, to verified groups. Initially I would verify users through universities, and/or use an automated system to rank each user, determining whose opinion to weigh heavily and whose to dismiss.
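As a rough sketch of how such a ranking could work (the 1-5 scale and the agreement-with-consensus rule are just my assumptions, not a worked-out design):

```python
from statistics import mean

def reviewer_weight(own_ratings, consensus_ratings):
    """Hypothetical reputation score: near 1.0 for a reviewer whose
    past ratings track the eventual consensus, near 0.0 otherwise.

    own_ratings / consensus_ratings: parallel lists of scores on an
    assumed 1-5 scale for the same set of papers.
    """
    if not own_ratings:
        return 0.5  # no history yet: start with a neutral weight
    # Mean absolute deviation from consensus, normalized to [0, 1]
    # by the largest possible gap on a 1-5 scale (4 points).
    deviation = mean(abs(a - b) for a, b in zip(own_ratings, consensus_ratings))
    return 1.0 - deviation / 4.0

# A reviewer who mostly agrees with consensus gets a high weight...
print(reviewer_weight([4, 5, 2], [4, 4, 2]))  # ~0.92
# ...a reviewer who is contrarian on everything gets a low one.
print(reviewer_weight([5, 5, 5], [1, 2, 1]))  # ~0.08
```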
I would also make reviews public and name and shame anyone caught fixing reviews (given proper evidence), which would be the bigger problem imho. But again, I think this is something we should be able to detect with proper algorithms: if a user Alice always rates Bob's papers highly, even when others disagree, perhaps her opinion of Bob's work is not too trustworthy.
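To make that concrete, here is one naive way to flag it (the tuple format and the deviation rule are my own assumptions; a real system would need to be far more careful):

```python
from collections import defaultdict
from statistics import mean

def author_bias(reviews):
    """Flag reviewer/author pairs where one reviewer consistently
    rates an author's papers above everyone else's average.

    reviews: list of (reviewer, author, paper_id, score) tuples.
    Returns {(reviewer, author): mean gap vs. the other reviewers}.
    """
    # Group scores per paper so we can compute "everyone else's" mean.
    by_paper = defaultdict(list)
    for reviewer, author, paper, score in reviews:
        by_paper[(author, paper)].append((reviewer, score))

    gaps = defaultdict(list)
    for (author, _paper), scores in by_paper.items():
        for reviewer, score in scores:
            others = [s for r, s in scores if r != reviewer]
            if others:  # need at least one other opinion to compare
                gaps[(reviewer, author)].append(score - mean(others))
    return {pair: mean(g) for pair, g in gaps.items()}

reviews = [
    ("alice", "bob", "p1", 5), ("carol", "bob", "p1", 2), ("dave", "bob", "p1", 2),
    ("alice", "bob", "p2", 5), ("carol", "bob", "p2", 1), ("dave", "bob", "p2", 2),
]
# Alice sits 3.25 points above the others on Bob's papers: suspicious.
print(author_bias(reviews)[("alice", "bob")])
```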
Hey, my idea is basically a free-market counterpart to yours: instead of restricting access to verified groups, allow anyone to create groups and let them compete for reputation.
This is how some journals come to be more prestigious than others anyway. You don't restrict the creation of journals to verified people; instead everyone gets to run a journal and they compete for reputation.
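One minimal way that competition could be scored (entirely my sketch; the update rule and what counts as a "sound" verdict are assumptions):

```python
def update_group_reputation(reputation, verdict_was_sound, rate=0.1):
    """Hypothetical Elo-style update: a review group's reputation
    drifts toward 1.0 while its verdicts hold up (say, against later
    replication or citation evidence) and toward 0.0 when they don't.
    """
    target = 1.0 if verdict_was_sound else 0.0
    return reputation + rate * (target - reputation)

rep = 0.5  # new groups start neutral and have to earn trust
for sound in [True, True, True, False, True]:
    rep = update_group_reputation(rep, sound)
print(round(rep, 3))  # ~0.615 after a mostly-sound track record
```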
I think both ideas have merit. Which works best in practice would depend on the degree-of-shit (that's the scientific term) on your network. I fear special interest groups can, if they want to, always outnumber legitimate researchers. If your network becomes large enough to attract the interest of these groups, you might end up with "attacks" on climate research and other 'controversial' issues. Verified users could prevent this.
OTOH, I do not like excluding users, but I think public read access, with write access through accredited universities and research groups, would be a fair balance. (Note: I also think that if there is a charge associated with this process, we should charge relative to the user's country, subsidizing access for developing countries by charging more in developed countries, to reduce exclusions.)
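Something like this, where the tiers and multipliers are invented purely to show the cross-subsidy, not actual income classifications or real prices:

```python
BASE_FEE = 100.0  # hypothetical yearly fee in USD
COUNTRY_MULTIPLIER = {
    "high_income": 1.5,    # overpays, funding the subsidy
    "middle_income": 0.8,
    "low_income": 0.1,     # near-free access
}

def access_fee(income_tier):
    """Fee scaled by the user's country income tier, so developed
    countries subsidize access for developing ones."""
    return BASE_FEE * COUNTRY_MULTIPLIER.get(income_tier, 1.0)

print(access_fee("high_income"))  # 150.0
print(access_fee("low_income"))   # 10.0
```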
I don't see how special interest groups would be a problem. Sure, they can flood the reviews with illegitimate attacks, but academia can just ignore them and stick to the high-reputation review groups. They can make a lot of noise, but I don't think it would be a big problem for academia.
I'd say that from a UX perspective you want the default settings to work 99.9% of the time. This includes the ratings and reviews of articles, which you could then only achieve by effectively censoring these groups when calculating ratings. And if you're going to do that, why let them on in the first place?
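That "effective censoring" would just be a trust threshold baked into the default rating, something like this sketch (the threshold value and the (score, reputation) input format are assumptions of mine):

```python
def default_rating(reviews, min_reputation=0.5):
    """Hypothetical default-view rating: a reputation-weighted mean
    that drops groups below a trust threshold entirely.

    reviews: list of (score, group_reputation) pairs.
    """
    trusted = [(s, r) for s, r in reviews if r >= min_reputation]
    if not trusted:
        return None  # no trusted reviews yet
    total_weight = sum(r for _, r in trusted)
    return sum(s * r for s, r in trusted) / total_weight

# Ten flood reviews from a 0.1-reputation group don't move the number.
reviews = [(4.5, 0.9), (4.0, 0.8)] + [(1.0, 0.1)] * 10
print(round(default_rating(reviews), 2))  # 4.26
```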