Hacker News new | past | comments | ask | show | jobs | submit login

Yes yes yes! I’m in the very same boat, and came to an epiphany: the ranking trick here, combined with some subjective questions (ability to appreciate humor - seriously!?), hides almost everything about actual skill. Not only does it amplify mistakes, it also forces participants to know something about their cohort. Having to guess your own ranking fully explains the less-than-perfect correlation, and it undermines all the claims about competence and incompetence. They’re not testing skill; they’re only testing the ability to guess the skill of others.
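To make that concrete, here’s a toy simulation (mine, not from the paper; the noise level of 0.3 is an arbitrary assumption): give everyone a true skill percentile, let each person’s self-estimate be that percentile plus random error, then bin people by their *measured* quartile and average the estimates per bin, the way the paper’s figures do. With zero actual bias, the bottom quartile automatically looks overconfident and the top quartile underconfident:

```python
import random

random.seed(0)
n = 100_000

# True skill as a percentile in [0, 1); each self-estimate is the true
# percentile plus Gaussian noise, clipped to the valid range.
true_pct = [random.random() for _ in range(n)]
est_pct = [min(1.0, max(0.0, t + random.gauss(0, 0.3))) for t in true_pct]

# Group people by their actual quartile, then average the self-estimates
# within each quartile.
buckets = [[] for _ in range(4)]
for t, e in zip(true_pct, est_pct):
    buckets[min(3, int(t * 4))].append(e)
means = [sum(b) / len(b) for b in buckets]

# The bottom quartile's true mean percentile is 0.125 but its average
# self-estimate comes out well above that; the top quartile's true mean
# is 0.875 but its average estimate comes out below it. No one here is
# systematically overconfident -- it's pure noise plus binning.
print([round(m, 2) for m in means])
```

The pattern in the output mimics the paper’s famous graph even though, by construction, every participant’s estimate is unbiased on average.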

What about the slight bias upwards? Well, what exactly was the question they asked? It’s not given in the paper. They were polling only Cornell undergrads looking for extra credit. What if the question somehow, accidentally or subtly, implied a ranking against the general population, and then they turned around and tested the answers against a small Cornell cohort? I just went and looked at the paper again and noticed that the descriptions of the ranking question change between the various “studies”: the first one compares to the “average Cornell student” (not their experiment cohort!), while the others suggest a ranking relative to the class in which participants were receiving extra credit. Curiously, study 4 refers to the ranking method of study 2 specifically, not study 3, and the class used in study 4 was a different subject than in 2 & 3. How they asked this question could have an enormous influence on the result, and they didn’t say what they actually asked.

Cornell undergrads are a group of kids who got accepted to an elite school and were raised to believe they’re better than average. Whether or not all people believe they’re better than average, this group was primed for it, and also has at least one piece of actual evidence that they really are better than average. If these were majority freshmen undergrads, they might be especially poorly calibrated to the skills of their classmates.

In short, the sample population is definitely biased, and the potential for the study to amplify that bias is enormous. The paper leans on suggestion and jumps to hyperbolic conclusions throughout. I’m really surprised that a study with evidence and methodology this weak claims to show something about all of humanity, and that it got so much attention.



