Dunning-Kruger's findings are, unfortunately, overly generalized (perhaps because people overestimate their ability to accurately draw inferences). In fact, the bias exists for very smart people as well: Dunning found that 94% of professors consider themselves above average.
Re breily: compared to other professors. (thanks!)
Above average compared to other professors, or compared to the general population? I haven't read anything about their findings, but I'd imagine 94% being above average compared to the general population would be on the low side.
"First I am interested in why people tend to have overly favorable and objectively indefensible views of their own abilities, talents, and moral character. For example, a full 94% of college professors state that they do "above average" work, although it is statistically impossible for virtually everybody to be above average."
"Dumb people don't know they are dumb" is an idea I think most of us hold (or held before reading this). [insert reference to pg essay/xkcd here]. This article discusses that idea.
The second sentence of the title suggests there is more (or less) to it than we previously thought.
A more accurate title ("All are skill unaware") would be harder to mentally classify. I thought the idea in the blog post was important enough that a small amount of inaccuracy was justified.
In isolation, your title conveys the opposite of the article's meaning. If you didn't misinterpret, then you made a very poor editorial choice... As others have pointed out, it's safer to not change the title - or, if you feel you must, at least use a verbatim quote from near the top of the article. In this case, there is no such suitable quote, so just use the title.
Next time, please don't inject bias in the headline. The title should have been "Overcoming Bias: All Are Skill Unaware." Luckily, on this site we have comment sections for each article where we can...comment.
I would argue that the (overcomingbias.com) after the link is fine to convey what blog the article is on, and the title should just be "All Are Skill Unaware".
But that's just a nit, and I do generally prefer a poster-bias-free headline.
You did. The whole point of the article is that everybody is unaware of how dumb or smart they are.
If you wanted to reinterpret rather than misinterpret, you should've named it "If you believe that dumb people don't know they are dumb, you are likely wrong: all are skill unaware."
I skimmed the original article (Burson et al. '06) to decode the "noise and bias" model:
The authors found that on easy tasks less skilled people overestimated their performance, while more skilled people correctly estimated that they did ok.
On hard tasks more skilled people underestimated their performance, while bad performers correctly estimated that they performed poorly.
In other words, on easy tasks everybody thought they were doing ok, on hard tasks everybody thought they were doing badly.
Thus it seems nobody did any metacognition; people of all skill levels just assessed task difficulty. On easy tasks the "smart" people happened to be correct, and on difficult tasks it was the "dumb" people's turn.
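Out of curiosity I sketched that account as a toy simulation (my own illustrative model, not the authors' actual analysis; the anchor values and noise level are made-up assumptions): everyone guesses their percentile near a difficulty-driven anchor plus noise, largely independent of their true standing.

    # Toy noise-plus-bias model (illustrative assumptions, not the paper's data):
    # everyone anchors their self-estimate on perceived task difficulty, with noise.
    import random

    def simulate(anchor, n=10000, noise=15, seed=0):
        """Mean estimation error (guess - truth) for bottom and top skill quartiles."""
        rng = random.Random(seed)
        true_pct = [100 * i / (n - 1) for i in range(n)]           # actual percentile
        guess = [min(100, max(0, anchor + rng.gauss(0, noise)))    # shared anchor + noise
                 for _ in range(n)]
        bottom = [g - t for g, t in zip(guess, true_pct) if t < 25]
        top = [g - t for g, t in zip(guess, true_pct) if t >= 75]
        return sum(bottom) / len(bottom), sum(top) / len(top)

    print("easy task (anchor=70):", simulate(anchor=70))  # bottom ~+57, top ~-18
    print("hard task (anchor=30):", simulate(anchor=30))  # bottom ~+18, top ~-57

With these made-up numbers, the bottom quartile overestimates by ~57 points on the easy task while the top quartile is nearly calibrated, and the pattern flips on the hard task: whichever group's true standing happens to sit near the shared anchor looks "calibrated", without anyone doing metacognition.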
I've thought about this a lot. I really don't think it's possible for someone to comprehend what it is like to be someone of substantially greater or lesser intelligence. Most smart people think that dumb people are just like them but maybe not quite as good at math, or maybe just mentally lazy. Most dumb people think Barack Obama is just like them but a more polished speaker.
The exception being noted geniuses like Einstein, who most people view as demigods that are so rare as to be nearly fictional.
I imagine that the optimal level of confidence would require us to over-estimate. What possible advantage could one gain by being very realistic about one's own skills?
In a zero-sum game of two equally capable players, will over-estimation of skills enable a beneficially higher appetite for risk and exploration?
Of two independently tested players, one with a higher "IQ" than the other, will the "dumb" one calibrate his skill level less accurately after the same amount of exposure, in a game where each player is pitted against AIs of varying skill levels?
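That second question is simple enough to poke at with a toy simulation. A sketch, assuming Elo-style win probabilities and a maximum-likelihood self-rating (all mechanics and parameters here are my own assumptions, purely for illustration):

    # Does the weaker player calibrate worse given the same exposure?
    # Assumptions: Elo-style win probabilities, opponents of known strength,
    # self-rating chosen by maximum likelihood over a coarse grid.
    import math, random

    def win_prob(a, b):
        """Elo-style probability that a rating-a player beats a rating-b player."""
        return 1 / (1 + 10 ** ((b - a) / 400))

    def estimate_skill(true_skill, n_games=200, seed=1):
        rng = random.Random(seed)
        opponents = [rng.uniform(1000, 2000) for _ in range(n_games)]
        wins = [rng.random() < win_prob(true_skill, o) for o in opponents]

        def log_lik(r):  # log-likelihood of the observed results at rating r
            return sum(math.log(win_prob(r, o) if w else 1 - win_prob(r, o))
                       for o, w in zip(opponents, wins))

        return max(range(1000, 2001, 10), key=log_lik)  # best rating on the grid

    print("strong player (true 1800) self-estimate:", estimate_skill(1800))
    print("weak player   (true 1200) self-estimate:", estimate_skill(1200))

Under these assumptions both players land near their true ratings after the same exposure, which suggests miscalibration needs some extra mechanism (noisier feedback, a biased opponent pool, motivated reasoning) rather than low skill by itself.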
It seems to me that people who spend more time validating or questioning their smarts would naturally arrive at a more fine-grained understanding of their ability.
Realism about your own skills, if coupled with realism about others' skills, might encourage you to build a stronger team earlier. The point of the article is that "you should listen to those you disagree with instead of writing them off as idiots." Overestimating your own abilities makes it less likely you will listen.
True, but you could also talk about aggregate skill estimation of a team as a whole, and what they should do when another team (or the stock market) disagrees with them.
Realism would be nice to arrive at, but say I hope to build a better mousetrap. How do I realistically estimate my chances in this endeavor?
Another team is different from the stock market. In an economic competition you have a number of opportunities to gather data from customers, prospects, non-customers, and competitors' customers. The key step is to assume that you have something to learn from your competition. In the case of the better mousetrap, you would be looking for folks with a mouse problem who are unsatisfied with current solutions; these are the "non-customers."
You think a higher appetite for risk is universally beneficial? I'd submit that the current economic situation shows otherwise. I see nothing beneficial if your overconfidence causes you to incorrectly estimate your risk exposure.
I think that there is an optimal appetite for risk. Probably higher than what results in repeated reality checks, but lower than what results in exuberance bubbles.
We do tend to calibrate our risk appetites depending on events [1], but there is a baseline around which this level fluctuates.
It is a separately interesting question whether overconfidence was merely a scaling factor in the current scenario, given that it wasn't risk being underestimated within the model, but the model itself that was utterly flawed.
I can agree that those who believe they are above everyone else are more than likely to be bigger fools than those around them. But I find it hard to believe that essentially giving everyone the benefit of the doubt is the most "intelligent" thing to do.
Not trusting everyone to have competency doesn't equate to being "dumb," in my opinion. If so, then why do companies give competency tests? Why don't they just assume anyone applying for a job "gets it"?
Maybe I'm missing something here. I sure hope not, lest I be pegged the fool.
As DA once said:
"never understimate the ingenuity of complete fools"
They will find errors, bugs and dead-ends that would never have occured to you (or your semi-savvy testers) in a month of Sundays :)