I quite honestly had to read this over a couple of times to make sure it's not some kind of parody. I don't understand why people are complimenting this. Let me count the ways:
1) Asking a client to produce a quantitative "job spec" like this will simply produce a bunch of random numbers. What basis is there for saying that "vision" counts for 15% and "financial mgmt expertise" for 25%? These aren't comparable things, and you can't just throw numbers at them. At most you can say things like "financial mgmt is a must, and of all the candidates who have it, we want the ones with the best vision". And you still have to define these terms...
2) How are you supposed to give such fine-grained numeric grades to candidates on things like "vision"?
3) What do you do with the chart once you have it? This is the part of the post that makes it look like a parody - he shows charts for two candidates and says "now it's easy as pie to choose", but neither chart looks clearly better than the other. Are you supposed to compare the areas with grid paper? That makes no sense, and it may leave you with a candidate who lacks something crucial.
4) This brings me to my final point - you may be looking for things in a candidate that are pass/fail; e.g. in his chart, "dealing with government regulators" looks like a sine qua non. His method could well result in overlooking this (see the sketch after this comment).
I don't get why people are calling this insightful and so on. This is extremely similar to a ton of competency-matrix methods, but this way of presenting the results is particularly bad.
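To make point 4 concrete, here's a minimal sketch - all category names, weights, and scores are invented, not taken from the article - of how a weighted-sum ranking can prefer a candidate who fails a must-have:

    # All numbers here are hypothetical, for illustration only.
    weights = {"vision": 0.15, "financial_mgmt": 0.25,
               "gov_regulators": 0.20, "operations": 0.40}

    candidates = {
        "A": {"vision": 9, "financial_mgmt": 9, "gov_regulators": 1, "operations": 9},
        "B": {"vision": 6, "financial_mgmt": 7, "gov_regulators": 8, "operations": 7},
    }

    MUST_HAVE = "gov_regulators"  # the pass/fail criterion
    THRESHOLD = 5                 # minimum acceptable score on it

    for name, scores in candidates.items():
        total = sum(weights[k] * scores[k] for k in weights)
        passes = scores[MUST_HAVE] >= THRESHOLD
        print(name, round(total, 2), "passes must-have:", passes)

    # A 7.4 passes must-have: False   <- highest weighted score, but unhireable
    # B 7.05 passes must-have: True

A filter-then-rank approach (drop everyone below the threshold first, then compare whoever is left) avoids this failure mode; a single blended score hides it.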
It may be that the ranking exercise at the beginning is the big benefit, rather than the resulting ranks, because it makes you think about what's important to you. You may hire someone who doesn't actually fit the ranking, but the exercise was valuable anyway because it helped you better understand the candidates and their relationship to your business.
Ahh, the weighted average approach to decision making.
If you want to make a decision but don't want to trust your own intuition and judgment, and don't want to get embroiled in hard arguments where someone can get blamed later when things don't work out, then hey, I guess you need a mathematical model!
"But I only know basic arithmetic," you say.
That's okay! Because the arithmetic behind a weighted average is so easy to do and understand, anyone can do it!
Not only that, but you can take that underlying model and produce any number of pretty charts that make your entirely subjective decision seem like it has some mathematical rigor behind it!
The Weighted Average (tm): for when you absolutely have to make a completely subjective decision and don't want to be blamed for the results.
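In case anyone wants to try this at home, here's the entire apparatus - a sketch with made-up categories, weights, and scores, plus matplotlib for the obligatory pretty chart:

    # Made-up weights and scores; the point is how thin the "model" is.
    import matplotlib.pyplot as plt

    weights = {"vision": 0.15, "finance": 0.25, "operations": 0.60}
    scores  = {"vision": 7,    "finance": 8,    "operations": 6}

    weighted_avg = sum(weights[k] * scores[k] for k in weights)  # 6.65

    plt.pie(weights.values(), labels=weights.keys(), autopct="%1.0f%%")
    plt.title("Candidate fit (weighted avg = %.2f / 10)" % weighted_avg)
    plt.show()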
Studies have shown that people are often so bad at picking weights that weighting all categories equally produces better results than their hand-chosen weights.
(The study I read about asked participants to assign weights to indicators of prospective university students' performance (good high-school grades, extracurricular activities, IQ, scholarships, etc.); the researchers later compared the predictions against the grades those students actually earned.)
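For anyone unfamiliar, "unit weighting" just means dropping the hand-picked weights. A sketch with invented numbers:

    # Invented indicators and weights; the studies' finding was that the
    # equal-weight score often predicts outcomes (e.g. university grades)
    # better than the expert-weighted one.
    indicators     = {"hs_grades": 8, "extracurricular": 5, "iq": 7, "scholarships": 6}
    expert_weights = {"hs_grades": 0.5, "extracurricular": 0.1, "iq": 0.3, "scholarships": 0.1}

    expert_score = sum(expert_weights[k] * indicators[k] for k in indicators)
    equal_score  = sum(indicators.values()) / len(indicators)

    print(expert_score)  # 7.2 (depends entirely on the chosen weights)
    print(equal_score)   # 6.5 (no weights to get wrong)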
This sounds like he's tried to make hiring as quantitative as possible (a hard and worthwhile task). However, the slices are still as subjective as ever. How do you measure someone's "vision"?
Even when you've measured all these slices, how do you combine them?* Notice that if you take the total filled area of the pie chart, this implies all the scales you're measuring are quadratic, so that going from 8 -> 9 is worth roughly twice as much as going from 4 -> 5 (9^2 - 8^2 = 17, versus 5^2 - 4^2 = 9). This is why Tufte hates pie charts.
*I assume you combine them. If not, what's the point of the pie chart?
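A quick check of the quadratic claim, assuming a slice's score is drawn as its filled radius (my reading of the chart; the post doesn't state this explicitly):

    import math

    def slice_area(weight_fraction, score, max_score=10):
        # Area of a slice filled to radius score/max_score on a unit circle.
        # The score enters squared; the weight enters only linearly.
        r = score / max_score
        return weight_fraction * math.pi * r ** 2

    w = 0.25  # arbitrary slice importance
    print(slice_area(w, 9) - slice_area(w, 8))  # ~0.134
    print(slice_area(w, 5) - slice_area(w, 4))  # ~0.071, about half as much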
I was just thinking this - it's saying that how good the candidate is in a particular category (which enters the area quadratically, as r^2) counts for more than how important you've judged that category to be (which enters only linearly, via the slice angle). To me this is the most interesting claim in the article, and if this were actually what the author was trying to say, he would at least have mentioned the reasoning behind it.
Not that this is the biggest weakness with the method, since as others have pointed out the assignment of these weights is pretty arbitrary in the first place, but it makes me think this is just hand-waving all the way through.
It certainly is a nice way of thinking about hiring. The "problem" is that each one of those slices (whatever they are) is deeply and intrinsically subjective, both in width (how important it is) and in length (how well the candidate scores). For instance, how can you determine how important "vision" really is in any particular position? How can you hope to evaluate "vision" on a scale of 1 to 10? How do you know that someone with stellar skills in one area isn't capable of transferring those skills to a new area in which they have no experience?
Moreover, I am convinced that employers often really don't know what they want out of a potential hire. In other words, there may be pieces of pie that they're not bringing into consideration or pieces of pie that should not even be on the table.
I think that employers would do best to look for what is known as "T-shaped" people: folks who have broad experience and a few areas of very deep expertise. These are people who have demonstrated that they CAN develop expertise and if they've done it before they can do it again in new areas. This makes sense because, except for the most humdrum operations jobs, you never really know what your employees will have to work on in the future.
I've never had much trouble comparing two candidates. I wish I had the problem of finding two qualified candidates and having to only pick one. That beats the sea of unqualified and hilariously underprepared candidates I get now. Finding someone who's able to do the job is like finding a diamond in the rough.
Given how bad our brains are at determining the size / area of slices like that, I'm not sure visually comparing candidates like that would necessarily result in you picking the best one - and that's assuming that each of the required skills is measurable on a linear scale, which is rarely true.
The biggest difference: the second derivative of quadratic growth (the change in the rate of change) is constant, while every derivative of exponential growth is itself exponential.
For quadratic growth, think naive sorting like bubble sort. For exponential growth, think brute-force search over every subset of the input (bogosort, for what it's worth, is even worse than exponential - its expected running time is factorial).
Technically: if you use Big-O notation, then indeed O(x^2) \subset O(a^x) for any a > 1. But also O(x) \subset O(x^2) \subset O(a^x).
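A quick way to see the difference is with finite differences, a discrete stand-in for derivatives:

    # Second differences of x**2 are constant; differences of 2**x keep doubling.
    xs = range(8)
    quad = [x ** 2 for x in xs]   # 0, 1, 4, 9, 16, 25, 36, 49
    expo = [2 ** x for x in xs]   # 1, 2, 4, 8, 16, 32, 64, 128

    def diffs(seq):
        return [b - a for a, b in zip(seq, seq[1:])]

    print(diffs(diffs(quad)))  # [2, 2, 2, 2, 2, 2] - constant
    print(diffs(expo))         # [1, 2, 4, 8, 16, 32, 64] - still exponential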
This sounds like a good idea. I wish people would do randomized controlled trials on this sort of thing, but there are only a handful of companies that hire enough people to do RCTs, and I doubt any of them (except perhaps Google) would be willing to do something that isn't "best" to get more data even though, as things are now, no one can really know what's best.
There is a series of behavioral economics studies showing that, in the right circumstances, going with your gut is better than explicitly making a list of factors, assigning weights to them, and deciding from there (whether that decision is determined algorithmically by the list, or is a holistic decision made after looking at the list). The 'gut' method tends to win when decisions are more complicated and involve many factors, as is the case when hiring, so I suspect the pie method is one of those things that sounds helpful but isn't. No one knows, though (unless there's actually been a study on this; if there has, please correct me).
From what I've heard, getting the attributes right is hard.
You need stuff like:
- Honesty. Obvious, but I'd expect it gets quite hard to find at the CEO level.
- Frugality. Some great CEOs are legendary tightwads.
- Not penny-wise, pound-foolish. Because even if you are a tightwad, you shouldn't kill morale or R&D. 90% of a CEO's job is knowing which expenses to approve and which to knock back.
- Deep knowledge of the organization, its environment, and its processes. Otherwise, they will not be able to verify when their C-execs are screwing with them.
- Commitment to improving operational effectiveness. Low level engineers are great firefighters. It's the guys who can keep improving the small stuff that are really worth the big bucks.
Steve Blank's signal-to-noise ratio is very high. I like this idea a lot. In the past I have been involved with firms that have done this on two dimensions: managerial competence and industry knowledge, or content knowledge and process knowledge. The circle is a great way to add another dimension to it.
The hard part seems to be the second step - mapping the candidates' skills onto the description. It sounds easy, but rating soft skills is very difficult.
I think part of that would be asking the candidate how they feel about their skills.
Sure, they could lie, but if they're lying, the whole interview is suspect. Hiring a person who lied -at all- in the interview would be a bad idea. How could you possibly trust them?