I reported to a manager who attempted to measure our productivity at fixing bugs by comparing the lines of code before and after each fix. Most fixes reduced the line count, so our numbers were usually negative or close to zero. A couple of weeks later the manager interrogated the team as to why we were so unproductive.
Another fun experiment is to measure developers using lines of code, and then switch to unit test coverage. For some reason, the code always seems to shrink.
Counting lines of code for maintenance work is a little like evaluating your mechanic based on the weight difference of your car before and after he fixes it.
"The scholar seeketh daily increase of knowing; the sage of Tao daily decrease of doing. He decreaseth it, again and again, until he doth no act with the lust of result. Having attained this Inertia all accomplisheth itself. He who attracteth to himself all that is under Heaven doth so without effort. He who maketh effort is not able to attract it." - Tao Teh King, Chapter XLVIII (Crowley's translation)
Too bad it doesn't seem like Microsoft applied that philosophy to most of its products (at least not back when I used them. Maybe they've started recently -- I stopped following them a few years ago when I bought a Mac).
I wonder if any ex-Microsoft employee has ever come out and said that they actually had the explicit goal of making each successive version of their main products (Windows, Office) much slower than the last, so that people would upgrade to a new computer, making Intel happy and leading to more "comes with the computer" licenses instead of pirating.
"Never attribute to malice that which can be adequately explained by stupidity" - it's more likely that Microsoft's PMs and developers felt swayed by new features and possibilities and when the issue of performance came up, it was "meh, the average computer selling today can cope with it" and therefore little thought was given to performance on older tech or tech that has most of its performance diverted to other tasks.
Indeed, it could be that. But I'm still wondering if maybe it was more deliberate than that. After all, it would have been a way to make more money for both Microsoft and Intel, so there was an incentive there. And the downside was very small (what was the average Windows user going to switch to between 1995 and 2005?)
I am thinking that this comment came about in the era of the OS/2 Microsoft-IBM project. It was clear that IBM had a culture in which KLOC was a positive metric, and Microsoft did not.
I think it is useful to keep in mind that gobs of features sell, and someone (was it Joel Spolsky?) noted that any given user needs only 11% of a product such as a word processor, but another user needs a different 13%.
Apart from computers getting faster, Microsoft, Adobe and others also have to keep up the appearance of an adequate trade. If 20 years ago you'd gladly pay $400 for a software package that arrived on 20 floppy disks, today the same amount should buy a couple of DVDs to deliver the same level of emotional satisfaction. $400 for a 30-second download would be perceived as highly inadequate today, unfortunately.
I love the feeling of rewriting a piece of code and ending up with something much shorter, much clearer, more stable and even faster. :) In almost all cases, it was worth it.
The strange thing is, sometimes there are multiple iterations of this on the same piece of code (sometimes by different people, sometimes even by me).
Then I think, maybe I have just gotten wiser, learned my lesson or whatever. Stupid me for not implementing it like this in the first place.
Also, I always wonder how many more iterations there are until I get to some final, perfect, optimal solution.
Or will I really end up with one single line of code in the end? :)
All code can be made shorter. But all code also has at least one bug. So eventually your code will be just one byte - but it will be the wrong byte :-)
If you look at programming as a knowledge acquisition activity instead of a code production activity, then you couldn't possibly have implemented it that way in the first place.
Exactly. Usually I don't have (or give myself) the luxury, but when I'm working on something small and I don't really have time constraints for it, I will write it, then rewrite it, then look at it from a different angle, and rewrite it again. Only after about three rewrites will I get something that looks good.
I'd like to get to the point of writing something good-looking earlier, but it's like you say -- I only learned it then and there.
This was pure joy to read. Management asks for something foolish and counterproductive, and then one of the top techs pushes the absurdity right back in their faces -- take this: an obvious big win, but a drastic opposite delta by your crude, misguided metric!
My kids stared at me laughing very loudly for quite a few seconds :-)
I've seen this great folklore story hit the front page of HN more than a few times, but it is worth it every time. It is a good reminder for every programmer that verbose, complex solutions aren't necessarily good ones.
This is one of those posts that periodically appears, but something that's almost never pointed out is that QuickDraw was written in assembly. So, by reducing the lines of code, not only did he reduce the complexity but he also directly made the code faster by being smaller.
Everybody here is having a great time beating this mostly-dead horse, but I know for a fact that I have more productive weeks and less productive weeks, and in those less productive weeks I usually write way less LOC.
What's more, if you just slightly adjust this metric to "patch size", as in count lines added as well as lines removed, you'll get a more accurate measure, and also make the linked story moot.
Measuring developer progress is incredibly hard, but we have to do it anyway, and "use your gut" cannot be the only advice on this. At the very least, comparing this metric for the same person over different periods of time can give you a clue as to how they're doing.
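To make the "patch size" idea concrete, here is a rough sketch in C that totals lines added plus lines removed across someone's commits. It relies on the real git log --numstat output; the author name "alice" is just a placeholder, and this is only a back-of-the-envelope illustration, not a recommended tool:

    #define _POSIX_C_SOURCE 200809L   /* for popen/pclose */
    #include <stdio.h>

    /* Rough sketch only: total "patch size" (lines added + lines removed)
       for one author's commits, as reported by git log --numstat. */
    int main(void) {
        FILE *p = popen("git log --author=alice --numstat --pretty=format:", "r");
        if (!p) return 1;

        long patch_size = 0, added, removed;
        char path[4096], line[8192];
        while (fgets(line, sizeof line, p)) {
            /* numstat lines look like "12<TAB>7<TAB>src/foo.c"; binary files
               show "-" instead of numbers and simply fail to parse here. */
            if (sscanf(line, "%ld %ld %4095s", &added, &removed, path) == 3)
                patch_size += added + removed;
        }
        pclose(p);
        printf("total patch size: %ld\n", patch_size);
        return 0;
    }

Filenames containing spaces would confuse the %s above, but for a crude aggregate measure like this that hardly matters.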
Reminds me of that TDWTF post where engineers were paid based on the amount of lines of code they had written - led to massive comment blocks explaining the simplest things ("This is a for loop that works by...").
It's not infinite. It's equivalent to while(i<count). The right side of the && has no effect: whenever i < count, !(i > count) is already true, and once i reaches count the left side is false, so the && short-circuits and the right side is never evaluated at all.
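A quick sanity check, assuming the loop condition under discussion was something like i < count && !(i > count) (a reconstruction -- the original snippet isn't quoted here):

    #include <assert.h>

    int main(void) {
        int count = 10;
        for (int i = -5; i < 20; i++) {
            int padded = (i < count) && !(i > count);  /* the verbose condition */
            int plain  = (i < count);                  /* what it reduces to */
            assert(padded == plain);                   /* never fires */
        }
        return 0;
    }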
"Je n'ai fait celle-ci plus longue que parce que je n'ai pas eu le loisir de la faire plus courte." - Blaise Pascal = "I would have written a shorter letter, but I did not have the time."
Counting the number of lines only makes sense after the algorithm has been described. There are two roles: those who find the correct algorithm for a problem, and those who write the code that implements it.