
Most commit-based metrics are arguably flawed for the same reason that looking at Wikipedia edits doesn't tell the whole story. From Aaron Swartz's commentary http://www.aaronsw.com/weblog/whowriteswikipedia

> Wales seems to think that the vast majority of users are just doing the first two (vandalizing or contributing small fixes) while the core group of Wikipedians writes the actual bulk of the article. But that’s not at all what I found. Almost every time I saw a substantive edit, I found the user who had contributed it was not an active user of the site. They generally had made less than 50 edits (typically around 10), usually on related pages. Most never even bothered to create an account.

It's easy to belittle drive-by single commits, but arguably they are a much more useful measure than the other proposed metrics.
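
A rough way to see that split in any repository is to count commits per author in the git history. Below is a minimal sketch (Python, assuming it runs inside a local checkout and treating each author email reported by `git log` as one person, which undercounts people who commit under several addresses). It reports what share of authors, and of commits, come from "drive-by" contributors.

    # Rough sketch: how much of a repo's history comes from "drive-by" authors
    # (one or two commits total) versus frequent committers.
    # Assumes it is run from inside a local git checkout.
    import subprocess
    from collections import Counter

    def author_commit_counts() -> Counter:
        # One author email per commit, as recorded in the git history.
        out = subprocess.run(
            ["git", "log", "--format=%ae"],
            capture_output=True, text=True, check=True,
        ).stdout
        return Counter(line for line in out.splitlines() if line)

    def summarize(counts: Counter, drive_by_max: int = 2) -> None:
        total_commits = sum(counts.values())
        drive_by = {a: n for a, n in counts.items() if n <= drive_by_max}
        print(f"authors: {len(counts)}, commits: {total_commits}")
        print(f"drive-by authors (<= {drive_by_max} commits): {len(drive_by)} "
              f"({100 * len(drive_by) / len(counts):.0f}% of authors, "
              f"{100 * sum(drive_by.values()) / total_commits:.0f}% of commits)")

    if __name__ == "__main__":
        summarize(author_commit_counts())

Of course, commit count alone says nothing about how substantive each change is, which is exactly the limitation of commit-based metrics being discussed here.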




Interesting observation by Aaron Swartz. That could happen because the barrier to entry for adding to a text blurb is much lower than that for contributing to a code base — maybe because we practice the former for a decade in school.

For code, ease of access (editability) is non-obvious and therefore an important axis along which to evaluate open source projects.

So, funnily enough, I agree with you because the data you quoted for Wikipedia is much less likely to apply to code :-)

Btw, I recently came across a talk by Evan Czaplicki (creator of the Elm language) on "What is success?" for an open source project like Elm, and he raises some interesting points about how measuring projects by GitHub activity is strongly biased by a model of how the larger JavaScript community works: https://youtu.be/uGlzRt-FYto



