Hacker News

Coming full circle much? The h-index was touted as a better metric than judging a paper by the prestige of its journal, since high-profile researchers could get their papers into NSC easily even if the work served no real purpose. The h-index is definitely less gameable than you think: the only effective way to game it is to become completely illegitimate in your publishing, using farms like the one above, and that would be easily identified if anyone even vaguely familiar with the field tried to read the titles and journal names.
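For reference, the h-index is defined as the largest h such that an author has h papers each cited at least h times. A minimal sketch of the computation in Python (the function name and sample citation counts are illustrative, not from any real dataset):

```python
def h_index(citations):
    """Return the largest h such that at least h papers have >= h citations."""
    h = 0
    # Sort citation counts in descending order; the h-index is the last
    # position i (1-based) where the i-th paper still has >= i citations.
    for i, c in enumerate(sorted(citations, reverse=True), start=1):
        if c >= i:
            h = i
        else:
            break
    return h

print(h_index([10, 8, 5, 4, 3]))  # 4: four papers with at least 4 citations each
```

This also makes the gameability argument concrete: a handful of extra low-citation papers barely moves h, whereas mass mutual citation across many papers can inflate it quickly.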


The evaluation of scientists and academic staff needs to be done by independent evaluation panels based on a (subjective) assessment of the scientific merits of their publications. In every reasonable place it's already done that way. In addition, funding for time-limited contracts has to be treated similarly to investment funding: evaluate very carefully at the beginning and do an internal evaluation of performance afterwards (mostly to evaluate the initial evaluation), but avoid continuous evaluation and mostly just advise and help during the contract period.

The worst thing to have is indicator counting of any kind. The system will be swamped with mediocre, sometimes almost fraudulent scientists who game it. (It's just too easy to game the system: Just find a bunch of friends who put their names on your papers, and you do the same with their papers, and you've multiplied your "results".)

The h-index is also flawed. In my area of the humanities, papers and books are often cited everywhere precisely because they are so bad. I know scholars who have made a career out of publishing outrageous and needlessly polemic books and articles. Everybody jumps on the low-hanging fruit and rightly criticizes the work, the original authors get plenty of opportunities to publish defences, and then they get their tenure. Publishers like Oxford UP know what sells and are actively looking for crap like that.


There are mitigating tools, like multi-round double-blind review, to resolve such issues.

A tool that was devised to settle such questions among chemists and biologists is now running rampant over fields that do not have a high volume of citations. Mathematics is suffering, for example.

An Annals of Math paper might have fewer citations than a paper in Journal of Applied Statistics. But the prestige is incomparable.


So what's the problem there? The person going for a job in a maths department with an Annals of Mathematics paper isn't going to be competing with someone whose big h-index comes from applied-stats papers; the committee won't look twice at the statistician. On the other hand, if the Annals of Maths person wants a job in stats, then presumably they will also have stats papers (and the stats people will be keen to know: "what about your amazing career in pure math?!").



