If you program in a bottom-up fashion, you can test your code seamlessly while you write it. In Lisp, for example, you simply check that the function you just wrote returns what you expect. If not, you fix it right then and there. While I can't present any evidence to back up my claim, I imagine programming in this way reaps the same benefits as test-driven development without adding as much time.
The problem is not to write something that works, bottom-up, as you type it. Any half-decent software engineer can do that. The problem is maintenance: when you start to modify existing code, say a function that many other functions rely on (I'm assuming a functional style without side effects here). And sooner or later you'll have to alter your code, whatever the reason: a spec change, a performance issue, a bug, a new problem to tackle, you name it.
The simplest case I can think of is having to alter a function to correctly handle some corner cases. You didn't think about them when you wrote the function in the first place, so basically your function was buggy all along... Now you have to test all the functions that rely on this function again. What if they relied on said bug and you didn't realize it? With tests, you just run the tests (and hope they were well designed...). With your technique, you do the same by hand, function after function...
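A minimal OCaml sketch of that scenario (the `average` function and its names are made up for illustration): the first version silently misbehaves on a corner case nobody thought about, and a small test is what catches callers that depended on the old behaviour.

```ocaml
(* First version: nobody considered the empty list, so
   average_v1 [] returns nan (0.0 /. 0.0) instead of failing
   or returning something sensible. *)
let average_v1 xs =
  List.fold_left (+.) 0.0 xs /. float_of_int (List.length xs)

(* Fixed version: the corner case is now handled explicitly. *)
let average = function
  | [] -> 0.0
  | xs -> List.fold_left (+.) 0.0 xs /. float_of_int (List.length xs)

(* Re-running tests like these after the fix is what flushes out
   any caller that relied on the old (buggy) behaviour: *)
let () =
  assert (average [] = 0.0);
  assert (average [2.0; 4.0] = 3.0)
```

Without recorded tests, each of those checks has to be redone by hand every time the underlying function changes.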
Or, to remain in a very bottom-up style, what if you realize that one of your underlying bricks was poorly designed? It just doesn't address the new problem at hand well and needs a few enhancements... Anything that relied on this brick has to be, well, updated to use the new functions, and of course checked again. Again, unit tests come in handy...
Note that I'm just making the case for unit testing, not test-driven development itself...
FEWER bugs. Jesus people. If you can count em, use 'fewer'.
Also, it's comparing time against bug count. Does 90% fewer bugs make up for the lost time? Are serious, systematic defects caught by tests? I suppose I should read the paper.
I did (sort of) read the whole thing, and it did not indicate how much time the non-TDD teams spent fixing their higher defect count. Did they take more time than the equivalent 15%-35% increase in dev time for the TDD teams? Perhaps it was a wash? I don't know.
How accurate is it to compare the various project teams? It always seems to me that no two projects and no two dev teams are ever the same. How do you account for the differences? The researcher somewhat addresses this, but I am not fully convinced:
>>"...The projects developed using TDD might have been easier to develop, as there can never be an accurate equal comparison between two projects except in a controlled case study. In our case studies, we alleviated this concern to some degree by the fact that these systems were compared within the same organization (with the same higher-level manager and subculture). Therefore, the complexity of the TDD and non-TDD projects are comparable..."
I hadn't realized Christians were necessarily language prescriptivists. If we were necessarily language prescriptivists, I would have to tell you that "em" should be written as "them", or in the alternative written as "'em" to emphasize the omission of the first two letters. I might also go on to mention that you mean systemic, not systematic. Systemic means occurring across a system, systematic means planned and ordered, as in a system.
But we're not necessarily language prescriptivists so, whee, you can write how you want to write and we can understand what you've written because it is communicative even if not "correct" as defined by reference to static "authoritative" definitions.
I admit I didn't thoroughly read the whole paper, but this stuff strikes me as something to help the bulk of average programmers (or with average experience) get through a project. I didn't see anything in the paper addressing ability and experience at all, so it comes across as process being more important than the people involved.
If a low bug count is your goal, use a statically-typed functional-programming language like Haskell or OCaml. You'll probably also win on development speed, although writing fast-running code in these languages can be painful/ugly. These languages kick ass for producing reliable programs quickly.
writing fast-running code in these languages can be painful/ugly
Seriously? The OCaml implementation is about as fast as C and -- if you were to look at the language shootout -- fast OCaml code is digestible for pretty much anyone with even minimal exposure to the language. The OCaml execution model is simple. Now, with Haskell you might have a point.
I wouldn't use OCaml (and, to a similar extent, Haskell or Erlang) for anything beyond research projects or small prototypes. These languages are research-driven, not community-driven, and they have plenty of features that nobody wants except the maintainers. In the case of OCaml, just take a look at crap like camlp5 and polymorphic variants.
Language shootouts are poor benchmarks for the quality of a language. Better metrics are community activity and the quality of the libraries.
You are completely wrong about Erlang, which is absolutely not a research language but, when combined with OTP, a very pragmatic tool designed by Ericsson to answer their specific needs. Now, that doesn't mean it's the right tool for any problem, but when used for the right jobs it is awesome: elegant, mature, rock stable, and definitely production ready. And the community is growing...
For more general-purpose use, as far as functional languages are concerned, F# might have a shot, if Microsoft continues to push its efforts to integrate it into Visual Studio and make it mainstream. With .net/mono under the hood, the libraries are there. Not completely production ready yet in my opinion, but hopefully soon I'll be able to use it instead of C# when I have something (fast) to do on top of mono...
The AXD301 ATM switch, one of Ericsson's flagship products, is 1.7 million lines of Erlang code. This is just one example of a large-scale software project written in Erlang. Your perception of it as a research language needs to be re-evaluated.
I just wanted to clarify the other comment on this: Erlang was written at Ericsson to solve specific problems. It was driven by a business need, not by communities or research.
OCaml's great, but my understanding is that you have to use imperative features in order to get C performance.
For most programs, you don't care about getting C performance, and functional OCaml code does an excellent job.
I should not have said that writing "fast-running" code in OCaml is painful and ugly, since normal OCaml code runs fast enough for most purposes. What I meant to say is that writing optimized code in OCaml can be painful.
You don't need C's performance in 99% of the code. In fact, in most projects you are far from needing C's performance anywhere. If that were the case, fewer people would write Ruby, Python, or PHP these days.
The performance of OCaml is really good even if you write no imperative code. If you have a cost centre that bothers you, it is pretty easy to make it fast: OCaml's compiler does very little optimization, so a bit of manual rewriting goes a long way. There are several documents on the net that explain how to manually optimize a cost centre.
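As a sketch of what such a manual rewrite typically looks like (the function names here are made up for illustration, not from any real codebase): the same computation written first in the idiomatic functional style, then as an imperative loop over an array that avoids list cells and closure calls in the hot path.

```ocaml
(* Idiomatic functional version: clear, and fast enough for
   the vast majority of code. *)
let sum_squares xs =
  List.fold_left (fun acc x -> acc +. x *. x) 0.0 xs

(* Hand-optimized version of the same cost centre: a float array
   plus an explicit loop avoids allocating list cells and calling
   a closure per element. *)
let sum_squares_fast (a : float array) =
  let total = ref 0.0 in
  for i = 0 to Array.length a - 1 do
    total := !total +. a.(i) *. a.(i)
  done;
  !total
```

Both compute the same result; the point is that the imperative rewrite is confined to the one function that profiling flagged, while the rest of the program stays functional.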