Hacker News

The plot always read to me as "People estimate themselves at the 60-70th percentile - above average, but not the best". And given this broad prior, people do place themselves accurately (because the plot is increasing).

So it seems people are bad at doing global rankings. Ranking myself among all programmers worldwide seems really hard, and I could see myself picking some "safe" above-average value just because I don't know that many other people.

There's also: If you took one piano class 30 years ago and can only play one simple song, that might put you in the 90th percentile worldwide, just because most people can't play at all. But you might be at the 10th percentile among people who've taken at least one class. So a global ranking is very difficult if you aren't sure what the denominator set looks like.

So I think it's an artifact of using "ranking" as an axis. If the prompt were "predict the percentage of questions you got correct" instead of "predict your ranking", maybe people would be more accurate, because it wouldn't involve estimating the denominator set.




This is exactly my conclusion, and it seems obvious... just look at the self-assessment line - pretty much everyone thinks they are slightly above average. Once you know that everyone thinks they are above average, you already know how it will play out: the bottom quartile will have the biggest gap between actual skill and estimated skill.
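That mechanical consequence is easy to check with a toy calculation. Assuming (hypothetically) that everyone guesses the 65th percentile regardless of actual rank, the gap between guess and reality is largest at the bottom by pure arithmetic:

```python
# Toy model: every respondent guesses the 65th percentile, regardless of skill.
# The quartile midpoints (12.5, 37.5, 62.5, 87.5) stand in for actual percentiles.
guess = 65.0
quartile_midpoints = {"Q1 (bottom)": 12.5, "Q2": 37.5, "Q3": 62.5, "Q4 (top)": 87.5}

for quartile, actual in quartile_midpoints.items():
    gap = guess - actual
    print(f"{quartile}: actual {actual}, guessed {guess}, gap {gap:+.1f}")
```

The bottom quartile comes out +52.5 points off while the top quartile is only -22.5 off, with no psychology involved at all.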


What's that line about half of all people being below average…


> There's also: If you took one piano class 30 years ago and can only play one simple song, that might put you in the 90th percentile worldwide, just because most people can't play at all. But you might be at the 10th percentile among people who've taken at least one class. So a global ranking is very difficult if you aren't sure what the denominator set looks like.

Yes, and this literally implies that people in the lowest quartiles can't and won't rate themselves in the lowest quartiles when they are forced to give an answer. (Especially on tests that don't measure anything (getting jokes? really?), on tests where they have no basis for comparison (how would they know how their classmates perform on an IQ test?), or on tests that just have high variance.)

And therefore they will "overestimate their performance".

It's like gathering a bunch of random people and forcing them to answer whether their house is short, average, or tall. The "people living in short houses" will "overestimate the height of their houses", while the "people living in towers" will humbly say they live in a house of average height.

Is this an existing and relevant psychological phenomenon, different from the general inability to guess unknown things? I don't think so.

If you think so, then give me proof.


One difference is that the experiments were run on psychology students, so they know the population: those are their peers, with whom they interact daily, and they should have an idea of how they compare with them.

> how would they know how their classmates perform on an IQ test?

Are you serious? If you're interacting with your classmates, you definitely have some idea of how their intellectual capabilities differ from each other and from you. In a small class doing lots of things together, someone might even literally count their "ranking" on some metric that correlates highly with IQ, estimating that Bob, Jane, and Mary are above me and Dan and Juliet are below me, so I'm at the 40th percentile.

It's not appropriate to treat these aspects as unknown things or unknowable things.


Thank you! This article creates a dichotomy where our hypothesis must be either

1) self-assessment is perfectly correlated with skill, or

2) completely uncorrelated.

I think neither of these makes sense as a null hypothesis.

The model you describe matches my intuition about what we should expect: people know something about their own skill level, but not everything.
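That middle-ground model is easy to simulate. A minimal sketch under assumed parameters (standard-normal skill, equal-variance noise, so the self-signal correlates with skill at roughly 0.7): each person's self-estimate is their true skill plus noise, and both are converted to percentiles by ranking.

```python
import random
import statistics

random.seed(0)
N = 100_000

# True skill, and a noisy self-signal: people know something, not everything.
skill = [random.gauss(0, 1) for _ in range(N)]
estimate = [s + random.gauss(0, 1) for s in skill]  # correlation with skill ~0.71

def to_percentile(values):
    """Rank-transform values onto a 0-100 percentile scale."""
    order = sorted(range(len(values)), key=values.__getitem__)
    pct = [0.0] * len(values)
    for rank, i in enumerate(order):
        pct[i] = 100.0 * rank / (len(values) - 1)
    return pct

skill_pct = to_percentile(skill)
estimate_pct = to_percentile(estimate)

# Mean self-estimated percentile within each actual-skill quartile.
means = []
for lo, hi in [(0, 25), (25, 50), (50, 75), (75, 101)]:
    group = [e for s, e in zip(skill_pct, estimate_pct) if lo <= s < hi]
    means.append(statistics.mean(group))
    print(f"actual {lo}-{min(hi, 100)}th percentile: mean self-estimate {means[-1]:.1f}")
```

The quartile means come out strictly increasing (self-assessment does track skill), yet the bottom quartile's mean estimate sits well above its actual midpoint of 12.5 and the top quartile's well below 87.5 - the same shape as the original Dunning-Kruger plot, produced by nothing but noise.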


One minor correction - the article creates a dichotomy where the hypothesis must be either 1) self-assessment is somewhat correlated with skill, or 2) completely uncorrelated.

And this is a true dichotomy. The "autocorrelative" effect doesn't need perfect correlation, just some correlation.


60% of the time, it works every time.



