A small but important attribution note: the 'Developernomics' article was by Venkatesh Rao, author of the Ribbonfarm blog (home of "The Gervais Principle" series). As Forbes notes:
"The author is a Forbes contributor. The opinions expressed are those of the writer."
It's possible, perhaps even likely, that another 'Forbes contributor' will write an article with a different opinion. So it's not 'Forbes' expressing an opinion that can be right or wrong; Forbes is just a conduit.
By traditional standards, if Forbes itself were expressing the publication's view (as typically determined by its chief editor, publisher, or an editorial board), the piece would either indicate that clearly or run completely unsigned.
As online publications open their sites to an ever-larger variety of contributors in order to broaden their audience, archives, and inlinks, this traditional distinction about who is actually speaking is as important as ever.
Your observation is a good and material one. Not everyone will be as observant. As such, it's up to publishers to be as transparent as possible in distinguishing reportage from opinion, paid content from more traditional editorial, contributions from staffers vs. non-staffers, and what represents the voice of the publication vs. individual contributors (whether they're staff or otherwise). IMO, the industry has work to do here - and Forbes is doing a good job of it so far.
Reading this, I can't help but think of Joel's classic piece, "Hitting the High Notes". He did go into some productivity statistics collected by a professor who had, in the course of his teaching, subjected many students to identical programming tasks. But what was far, far more compelling was this part:
"The real trouble with using a lot of mediocre programmers instead of a couple of good ones is that no matter how long they work, they never produce something as good as what the great programmers can produce.
Five Antonio Salieris won't produce Mozart's Requiem. Ever. Not if they work for 100 years."
And great work in software can be replicated essentially for free. That's why a great programmer can be worth so very much more to a business than an average one.
I hope I don't sound rantish, but the author completely missed the mark in his treatment of the Boehm/Beck curves.
The Beck curve seems (I'm no expert) to describe the cost of making a change or adding a feature. With routine deployment in XP, it's not going to be that expensive to roll out a new change (although the Facebook push is a process that requires the complete, undivided attention of lots of smart people, so you don't exactly get it "for free").
The Boehm curve describes the cost of finding a defect. If you find a defect in unit testing, you've wasted one person's time; in inspection, two; in systems testing, possibly many more than two; and in production, potentially 500 million people's time. Not just that, but the bugs are much harder to find.
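The escalation described above can be sketched as a toy cost model. The phase multipliers and rates below are illustrative assumptions of mine, not Boehm's published figures:

```python
# Toy model of defect-cost escalation: the later a defect is found,
# the more people and time it consumes. Multipliers are assumptions
# for illustration only.
PHASE_COST = {
    "unit test": 1,     # one developer's time
    "inspection": 2,    # author plus reviewer
    "system test": 10,  # testers, triage, redeploys
    "production": 100,  # on-call, support, affected users
}

def defect_cost(phase, hourly_rate=100, base_hours=2):
    """Rough dollar cost of one defect caught in the given phase."""
    return PHASE_COST[phase] * base_hours * hourly_rate

for phase, multiplier in PHASE_COST.items():
    print(f"{phase}: {multiplier}x -> ${defect_cost(phase):,}")
```

Whatever the exact multipliers, the shape is the point: the curve is steep because each later phase multiplies the number of people whose time the defect consumes.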
Consider a hypothetical bug that leaks memory in the event of network flakiness. Your servers are humming along just fine, there's a transient network failure that you fix, and then you think you're fine. Meanwhile you've leaked a ton of memory, web servers start swapping, and the user experience goes to shit as most requests are delayed or timed out. You can fix this by restarting everything (but you should fear the cold roll in general [1]). So now you're fucked; you've got this weird bug that only shows up when the planets align while the microwave is on, no easy repro, and it's costing you real time and money.
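A minimal sketch of this class of defect (hypothetical code of mine, not the commenter's actual bug): memory grows only while the network is flaky, so the leak is invisible in normal operation.

```python
# Hypothetical error-path leak: buffers queued for in-flight requests
# are released on success but forgotten on failure, so memory grows
# only during network flakiness.
pending = []  # payloads awaiting acknowledgement

def send(payload, network_ok):
    pending.append(payload)  # queue for the in-flight request
    if network_ok:
        pending.clear()      # happy path releases everything
    # Bug: on failure we just return for a retry and never drop the
    # stale entry, leaking one payload per flaky request.
    return network_ok

for _ in range(10_000):      # a transient outage
    send(b"x" * 1024, network_ok=False)

print(len(pending))          # 10000 leaked payloads
```

Once the outage is fixed, the happy path hides the bug again, which is exactly why there's no easy repro.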
The evidence he cites—that there are no 10x companies—against there being 10x-median developers is weak. How would you even define the metrics to begin with? And assuming you came to a reasonable (and measurable!) definition, why would you assume that there would be companies made up entirely of super developers, and that they would retain their 10x advantage when working together even though for every other developer communication imposes overhead?
Other than that, there are a lot of great points in the article, but in terms of actually proving or disproving the 10x-median theory, I think it's impossible. My gut tells me that 10x is definitely out there for the right master programmer with the right experience and ideas to apply to a specific project, but that you'll never see it across the board because programming is just too big and free-form for talent to outpace experience.
Certainly if HR people are out there basing their hiring strategy on finding 10x-ers, they are hopelessly clueless and deluded.
It's an interesting point: if some programmers are x10 productive, shouldn't there be programmer-organizations that are x10 productive?
Some possible explanations:
- an organization's productivity is influenced by many other factors - optimizing a part doesn't necessarily translate into the same optimization of the whole. Other factors include business skills (e.g. obtaining specifications, negotiating, adroitly managing scope creep, etc.)
- it's easy to be x9 more productive if you are creating a "program" instead of a "programming systems product" (Brooks)
- massive productivity results from choosing what to solve, rather than how. I.e. redefining/reframing a problem instead of confronting it directly
- x10 productive programmers work on interesting problems, which are different from those that x1 programmers tackle, so it's hard to make a direct comparison. Who can you compare Jane Street with? RADgametools? If they do have a competitive set, they are probably competing against other x10 programmers - e.g. Id, Unreal, Crytek
What about the evidence that companies known to seek out the best programmers (Google, Microsoft, Facebook) are some of the most successful? (Or at least the most visible.)
I find the entire article quite poorly thought out and I disagree with it more or less entirely.
One part in particular stood out as being perhaps the most offensive to reason:
"Professional talent does vary, but there is not a shred of evidence that the best professional developers are an order of magnitude more productive than median developers at any timescale, much less on a meaningful timescale such as that of a product release cycle."
I really don't understand how anybody with experience in our industry could say such a thing.
I would have thought it was fairly routine for "professional developers" (by which I assume he means those devs who work in companies) to meet large numbers of people who are completely incompetent.
So in my view there must be a huge amount of first-hand evidence to support the statement that some people are 10x better than others (or even the more stringent requirement that they're 10x better than the median).
The problem with the 10x vs. 1x statement is that the 10x people are not constantly 10x better than the median. The environment in which they are placed has a lot to do with it. Circumstantial mediocrity is often a consequence of poor management, and the product release cycle is the petri dish of mediocrity.
So a common problem arising from this situation is the choice between leaving a 10x-er on the line or promoting him to do other things. It's a trade-off. If you choose the former, you maintain increased productivity, but the exceptional developer will eventually get bored and leave; if you choose the latter, you end up watering down your production pool, and risk placing the developer in a role where he might not be a 10x-er.
Constant [relative] mediocrity in a given developer pool is the stable state in either case.
I have no idea about the median developer, but the median job applicant is surely an order of magnitude worse than the top since the lower end spends much more time looking for work and the top devs never have to interview. So if you judge the industry by your experiences interviewing applicants it will be very bleak indeed.
Meh, all of these debates about "the mythical 10x programmer" are missing an important point: You can't just lump all programmers into one batch in the first place. That is, I posit that there are programmers who are better at certain things, or better in certain domains, or better at certain times of day, or even better during certain phases of the moon. Whatever.
Maybe one guy is a goddamn genius when it comes to writing embedded systems code in C, but if you put him to work writing CRUD webapps in Ruby, he might just be average. Or vice versa. Or maybe one gal is shit-hot at writing statistical analysis code for dealing with scientific data, but is useless if recast into a role doing embedded systems in C. Who knows?
To use an analogy, imagine this... you have a group of football players. You take the one who's best suited for playing Left Tackle and ask him to play Tailback. And maybe you move the Quarterback to Fullback. In either case, they are going to fail miserably, even if they would otherwise be an All-Pro at their "natural" position, simply because the attributes required to be great can't be generalized to a generic "Football Player" requirement.
In other words "Fit Matters." I posit that it matters a LOT more than most people acknowledge, or certainly more than you hear anybody discussing. And let's not even get into the importance of chemistry and the "gel" factor among the members of a team. It isn't popular to talk about these factors; probably because they're hard to measure / validate empirically and so this all remains a mostly subjective thing. But I believe this discussion is just as important as - if not more important than - all the debate about "10x programmers."
I believe in the "10x programmer" phenomenon. In my experience, the best programmers seem to quickly become very good at whatever development task they put their minds to.
Then you may need to consider the possibility of a "100x" programmer in your scale. The "best of the best" that I know are, say, 10x better at embedded C code than they are at CRUD web apps. The best CRUD programmers I know can't hold a candle to the best embedded programmers in their domain and vice versa. At the high end of proficiency, programming is not a commodity. Specialization matters.
I think the myth comes from this: developer productivities fall on some bell-curvish distribution, but the peak of that distribution is maybe half a standard deviation from zero productivity point, with many developers adding negative value and many being only slightly positive.
Then when you start considering people a standard deviation above the mean, they're easily 10x (or infinitely) better than a significant number of programmers.
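That claim can be checked with a quick simulation. The mean, spread, and sample size below are assumptions of mine chosen to match the comment's setup, not measured data:

```python
import random

# Bell-curved "productivity" whose peak sits half a standard deviation
# above zero, so a chunk of the distribution is negative-value.
random.seed(42)
MEAN, SIGMA = 0.5, 1.0
pool = [random.gauss(MEAN, SIGMA) for _ in range(100_000)]

star = MEAN + SIGMA  # a developer one standard deviation above the mean
# Fraction of the pool the star out-produces by 10x or more (anyone at
# or below one tenth of the star's output, negatives included).
dominated = sum(1 for p in pool if p <= star / 10) / len(pool)
print(f"{dominated:.0%} of the pool")
```

Under these assumptions, the +1 sigma developer is "10x or better" against roughly a third of the pool: the ratio explodes not because the star is superhuman but because the comparison baseline sits near zero.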
> there is not a shred of evidence that the best professional developers are an order of magnitude more productive than median developers at any timescale,
I beg to differ, using the examples of Fabrice Bellard creating QEMU, Brendan Eich creating JavaScript, or PG creating Arc and HN. Of course there are other examples out there; one just ought to look. Those are not tasks a single 'industry average' developer, or a team of such, can pull off in similar timeframes. Perhaps in 10x the timeframe they would produce something of similar capability, but not necessarily of matching quality.
One could argue in every case other than perhaps Bellard that while maybe some programmers are 10x more productive in spurts than other programmers, they average out to a similar productivity in the end anyway. Sure, Eich created JavaScript in 10 days or whatever the story is, but as far as I can see (and I mean no disrespect to Eich here), those 10 days were an anomaly in terms of his usual output.
I can only really speak for my own experience where I do often have bursts of creative output that are extremely productive which are then often followed by days or even weeks of relatively slowish slogging maintenance work.
Am I capable of "10x" work? Yes. Can I maintain that "10x" 100% of the time? No, certainly not.
I think Joel's Salieri/Mozart point is the far greater one than whether anyone is "10x" productive -- some developers are simply capped at a certain level beyond which they lack some combination of the basic creativity and/or fundamental software engineering knowledge that would allow them to perform some task outside of their routine. If all of your developers are of that type, you're pretty fucked unless you're just producing run-of-the-mill CRUD apps all of the time.
And I think he chose his words carefully: he said median, not average. Of course - to paraphrase Greg Wilson[1] - if one were to go further and compare the "best" driver to the "worst" driver, the difference would be infinite, because the worst driver is dead. The comparison is not useful.
I'd love to see some studies on this sort of stuff though.
[1] "Greg Wilson - What We Actually Know About Software Development, and Why We Believe It's True" http://vimeo.com/9270320
Great video. Do you know if the book he mentioned - the one originally to be called Beautiful Evidence, whose name he had to change - was ever published? Under which name?
We are discussing productivity in the corporate bureaucracy, not efficacy and fame. And do you have any evidence that the success of those projects were due to those persons rather than the environment in which the projects were developed and supported?
And before you reply: no, efficacy is not productivity, unless you are going to credit the original Microsoft Windows developers as productive as well.
no, I'm claiming that if we use fame, popularity, and complexity of a software product as a measure of the productivity of its original developers, then the original Windows developers must be 10^8 times more productive than everyone else. Of course they are not 10^8 times more productive, and therefore fame, popularity, and complexity of software products are not good measures of developer productivity.
> Perhaps in 10x the timeframe they would produce something of similar capability
I think you missed a few zeros there! I would argue that most developers would never (yes, not even if they spent their entire lives) come up with anything even close to Fabrice Bellard or Paul Graham.
The difference is not genetic but one of attitude, and as we all know, very few people are willing to change their attitudes.
"Professional talent does vary, but there is not a shred of evidence that the best professional developers are an order of magnitude more productive than median developers at any timescale, much less on a meaningful timescale such as that of a product release cycle."
Nonsense. I have seen evidence of it on nearly every project I've been on for 18 years now. It's one of the most obvious facts of enterprise software development.
> I have seen evidence of it on nearly every project I've been on for 18 years now. It's one of the most obvious facts of enterprise software development.
I haven't seen any evidence of a 10x disparity in productivity myself. Of course measuring productivity is a dodgy issue in the first place, and I've never run a proper scientific study on the matter... but my subjective observation over a decade and a half of doing this stuff is that the spread between individual developers (excluding the ones who are so incompetent that they get fired in short order) is probably more like 3x-4x at most.
In the area of debugging games, I have been EASILY 10x more productive than the developers I was supporting in finding bugs in their code. In fact, you could say infinitely more productive, because they would typically spend hours or days trying to find the bug, and the only reason I'd ever see it is that they failed to find it.
I was the architect of the underlying game engine used by a casual game publisher, and I was the last line of defense if one of our developers got stuck. Most of the bugs weren't even related to my engine, but were deep in their game code, which I typically had never seen before I dove in.
In one case a developer had been trying to find a bug for weeks when they finally punted to me. I had it fixed in a half hour. I never spent more than a day finding one of these "impossible to find" bugs, and 95% were fixed in 1-3 hours.
These are all developers who otherwise are at least average, and possibly above average. We certainly ran them through technical interviews and determined they were at least competent to begin with, and they DID all eventually ship games. So you can't tell me that 10x average doesn't exist, at least in particular areas of expertise.
If you wanted me to write a CRUD app, or anything in Rails, though, it would take me a while to get up to speed.
As a point of reference: A lot of people sought out -- and praised -- advice I gave in the SDK support forums. I would typically figure out what their problem was without seeing any code. Many attributed the popularity of the SDK I was supporting to the fact that I was supporting it, and as the SDK became irrelevant about 3 months after I resigned from the company, there may have been something to that.
Another thought: A lot of the teams I was supporting had multiple developers, so presumably had already shown the problems to other "fresh eyes."
I'm certainly capable of getting stuck looking at a problem wrong, and I have benefited from talking to other developers. For me the benefit is typically because I've gone down the wrong design path. It's been a LONG time since I couldn't debug my own code faster than anyone else.
When you're working on a design, sometimes being good isn't an advantage; you can see a solution to a problem, and you work to solve it that way, but it turns out there's a much simpler solution to the same problem. If I weren't as confident a programmer, I might work harder to find a simpler solution. At the same time, you can spend hours trying to come up with the perfect design that would be better spent just coding, so it's a hard balance to always get right. And for problems that really need a complex solution, well, the average programmer may decide that they can't be solved.
I've typically (with one painful exception where the match to my skills wasn't ideal) been the developer that everyone comes to for advice everywhere I've worked. I've known a lot of excellent developers easily at my skill level or beyond, and yet MOST professional developers I know are nowhere even close, and even among those I consider competent I feel like 10x may be insufficient to describe the real difference.
I'm tempted to write a blog post about the difference between productivity and effectiveness.
If you are working for someone whose sole focus is your productivity, take it as a warning sign. The only thing that matters is results and the single biggest contributor to getting results is being effective. Productivity is such a gross measure of performance that its relevance in today's business environment is almost anecdotal.
Both the author and Forbes have it backwards: there are developers out there that are ten times >worse< than the average developer. There are also, contrary to the article's assertion, plenty of teams that are 10 or 100 or even infinitely worse than normal (infinite meaning they never release anything at all, ever).
No, the author doesn't have that backwards. He makes this very point: "This folklore arises, in part, because it is possible to be arbitrarily more productive than the worst." And goes into it at length in the linked piece: http://www.sdtimes.com/content/article.aspx?ArticleID=31698&...
I guess that in any kind of game you have a system to sort out the worst players. Following your theory, there are surely plenty of 1/100000 developers out there, but they never get into a major development project.
Another point: there are also superstar surgeons and lawyers. I think the author wanted to point out that software development is not different from other professions.
Also, his cost curve may be more accurate for a devops scenario than for classic product development. Late changes/fixes introduce lots of additional management cost on top of the technical issues: discussions with the client, etc. Not to say that you can't debug in a convenient environment, but sometimes you have to reproduce the problem based on logs.
I think there are two different ideas being mixed up here. As a developer, there is the code and the vision. Given a particular vision, the differences in developer productivity will be small, perhaps even normally distributed.
However, developers are not simple compilers that translate a human problem into a programming solution. There is an artistic, creative side to the process that can radically impact how effective a solution will be.
To muddy things even more, there are developers who are great visionaries but poor coders and vice versa.
I know physicists/programmers who are not only 10x faster in getting anything done, but can do in a day what most programmers never achieve in a career. You'd better believe there are massive differences in productivity -- or your business will be inevitably outcompeted, and fail.
So surprisingly -- or not, depending where you come from -- it comes down to the question of management: put a developer in a place they're not suited for, and they'll tank in terms of productivity.
"Thus the organized proletariat after the downfall of the capitalist system was to work for the establishment of state-owned and operated industry, the tremendous expansion of production by freeing it from capitalist restrictions, and the creation of a new psychology in which each would want to perform his social function. All capitalist ideology would be crushed, and all antagonistic differentiation between manual and mental labor (and therefore between town and country and between degrees of skill) would cease."