
Having a citation's impact on one's H-index decay with author position would cause an interesting stir in the "academic game". I wonder if we would start seeing supervisors bully their way into first authorship.
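
For concreteness, here's a minimal sketch of what such a position-weighted variant could look like. The exponential decay factor and the function name are assumptions for illustration, not an established metric:

    # Hypothetical sketch: discount each paper's citations by the
    # author's position before applying the usual h-index rule.
    def weighted_h_index(papers, decay=0.5):
        # papers: list of (citations, author_position) tuples,
        # with author_position starting at 1 for the first author.
        # A first-author paper counts in full; later positions decay.
        scores = sorted(
            (c * decay ** (pos - 1) for c, pos in papers),
            reverse=True,
        )
        # Standard h-index rule on the discounted scores:
        # the largest h such that h papers each score at least h.
        h = 0
        for i, s in enumerate(scores, start=1):
            if s >= i:
                h = i
            else:
                break
        return h

    # A fifth-author paper with 40 citations counts as 40 * 0.5**4 = 2.5 here.
    print(weighted_h_index([(100, 1), (50, 2), (40, 5), (10, 1)]))  # 3

Under a scheme like this, a supervisor's habitual last-author slot would contribute almost nothing, which is exactly why the fight over first authorship would start.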


As the authors are typically listed alphabetically, that would unwittingly have an inhumane result.

(My name moved from the end of the alphabet to the middle when I married. I was amused that it actually makes a difference.)


Authors aren't usually listed alphabetically in academic papers, though; typically the order reflects relative contribution (not that alphabetical ordering never happens).


It differs by research area. In mathematics, for example, the authorship convention is alphabetical; in computer science it is by contribution.


Computer science theory papers are alphabetical as well.


In large collaborations, this is common. See e.g. https://inspirehep.net/authors/1222902?ui-citation-summary=t...


Also depends on the journal; some have the most impactful author listed last.


The H-index should go away; it cannot be fixed. Traditionally, academics have been encouraged to publish in prestigious venues, and metrics like the H-index do not take this into account.
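
For reference, the metric being criticized is simple to state: the h-index is the largest h such that the author has h papers with at least h citations each. A minimal sketch in Python (the function name is mine):

    def h_index(citations):
        # Largest h such that h papers have at least h citations each.
        counts = sorted(citations, reverse=True)
        h = 0
        for i, c in enumerate(counts, start=1):
            if c >= i:
                h = i
            else:
                break
        return h

    print(h_index([10, 8, 5, 4, 3]))  # 4: four papers with >= 4 citations each

Note that nothing in the definition knows where a paper appeared, which is exactly the venue-blindness being complained about here.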


Coming full circle much? The h-index was touted as a better metric than judging a paper by the prestige of its journal, since high-profile researchers could get their papers into NSC (Nature, Science, Cell) easily even when the work served no real purpose. The h-index is definitely less gameable than you think: the only effective way to game it is to become completely illegitimate in your publishing, using farms like the ones above, and that would be easily spotted by anyone even vaguely familiar with the field who reads the titles and journal names.


The evaluation of scientists and academic staff needs to be done by independent evaluation panels, based on a (subjective) assessment of the scientific merit of their publications. Every reasonable place already does it that way. In addition, funding for time-limited contracts should be treated like investment funding: evaluate very carefully at the start, and do an internal evaluation of performance afterwards (mostly to evaluate the initial evaluation), but avoid continuous evaluation during the contract period and mostly just advise and help.

The worst thing to have is indicator counting of any kind. The system will be swamped with mediocre, sometimes almost fraudulent scientists who game it. (It's just too easy to game the system: Just find a bunch of friends who put their names on your papers, and you do the same with their papers, and you've multiplied your "results".)

The H-index is also flawed. In my area of the humanities, papers and books are often cited everywhere precisely because they are so bad. I know scholars who have made a career by publishing outrageous and needlessly polemic books and articles. Everybody jumps on the low-hanging fruit and rightly criticizes the work, the original authors get plenty of opportunities to publish defences, and then they get their tenure. Publishers like Oxford UP know what sells and actively look for crap like that.


There are moderate tools, like multi-round double-blind review, to resolve such issues.

A tool that was used to resolve issues among chemists and biologists is now running rampant over fields that do not have a high volume of citations. Mathematics, for example, is suffering.

An Annals of Math paper might have fewer citations than a paper in Journal of Applied Statistics. But the prestige is incomparable.


So what's the problem there? The person going for a job in a maths department with an Annals of Maths paper isn't competing with someone whose big H-index comes from applied stats papers; the committee won't look twice at the statistician! On the other hand, if the Annals of Maths person wants a job in stats, then presumably they will also have stats papers (and the stats people will be keen to know: "what about your amazing career in pure math?!").


This would be the inevitable outcome. Currently, the first author did the work and the last author supported it in some way (such as by supervising).

This is purely by convention.


Different fields have different conventions for what a given authorship position implies about the work contributed to the paper. Some place very high weight on the last-named author, others on the first-named, among many other permutations and subtleties. There's no single rubric for deriving relative effort from author position.

Part of this is contingent on the citation formats used in different kinds of publications (and thus different fields), where long author lists are condensed to one, two, or at most three names.

This is not even getting into more locally scoped, second-order inputs, such as a given department's traditional handling of advisor vs. grad student power dynamics.



