Hacker News
Be professional, do TDD (madebymonsieur.com)
18 points by koski on Oct 27, 2010 | 48 comments



And yet ... millions of lines of code are written every day and tested the old-fashioned way, and the world keeps spinning round and round.

Granted, every project can benefit greatly from unit tests. As for TDD, it is an admirable practice, but let's not get religious.


I didn't like the tone of the post either. I do think that this needs to be settled, though. If we're ignoring a practice that guarantees an improvement in the quality of our code and the speed of our delivery without a very good reason, then we are being unprofessional.

Does TDD deliver what the OP says it does? I think so (anecdotal evidence, so worthless), and there is research out there to support it [1] (possibly valid evidence). It seems we have at least a basis for debate.

Therefore, if you don't use TDD, you should raise objections to the existing research, or offer some anecdotal evidence in opposition to TDD. Then perhaps we could conduct experiments to test the corresponding hypotheses and finally converge on a verdict. I think if we (as a field) fail to do that, we are being unprofessional.

[1] Evaluating the Efficacy of Test-Driven Development [pdf]. http://research.microsoft.com/en-us/projects/esm/fp17288-bha...


That study you cite has a very lenient definition of "comparable team".

Not too surprising; most methodology studies that I've seen can't exactly claim scientific rigor. (Yes, it's hard to do when it comes to real-world projects, but other fields of scientific study manage better.)


"Lenient" would imply that their approach skewed the results. If that's what you meant, could you explain why you say that? I'm hoping your explanation could lead to a testable hypothesis.

If you meant that they weren't rigorous in controlling variables related to team composition, I'd agree. However, I think those variables can safely be ignored if the study is repeated and validated many more times (which the authors acknowledge is a work in progress).


Project A:

TDD Team: 6 people, 24 man-months, no legacy code, 6 KLOC.

Trad Team: 2 people, 12 man-months; no indication whether it was a legacy project or whether they used unit testing at all (and if so, what their coverage was).

Project B:

TDD Team: 8 people, 46 man-months, no legacy code, 26 KLOC.

Trad Team: 12 people, 144 man-months, 149 KLOC; again, no indication whether it was legacy, whether they used unit testing, etc.

The report mentions that they were comparable because both teams reported to the same manager (not too surprising at Microsoft).

I hope that there's more info about this study and that I'm just overreacting a bit, but given these tables alone, I wouldn't agree that we're talking about comparable teams and projects here.

So does this test unit testing? Does it test TDD? Does it test development without legacy code vs. with legacy code? And even if you got results, would they be the same for people with different experience levels? (Which is a problem for academic studies.)


>If we're ignoring a practice that guarantees an improvement in the quality of our code and the speed of our delivery without a very good reason, then we are being unprofessional.

A very good reason? Yes: it's very costly and only catches a certain sort of error.

Would clients prefer 25% more features or the set you have with a full suite of unit tests? Many would prefer the 25% more features.

Additionally, with that same amount of money you could spend doing TDD, you can do any number of OTHER practices which ALSO increase code quality, including formal design reviews, informal design reviews, formal code reviews, informal code reviews, desk checking of code, automated analyzers, running through profilers, running through memory leak detectors, modeling, prototyping, component testing, integration testing, regression testing, system testing, beta testing, and many more techniques.

All of these take TIME and MONEY. TDD is just one of the many things you can do if your client/employer cares about code quality, and not necessarily the best thing. I mean, formal inspections (according to Code Complete 2, p. 470) and formal design reviews are always much more efficacious at finding bugs than unit testing (which is what TDD revolves around). According to that, we shouldn't be spending the quality-improvement time on TDD-based procedures, but instead poring over the code in review sessions.

Secondly, many other techniques CAN yield better results than TDD, and in less time (but don't always). These include prototyping, personal desk checking, system tests, and beta tests with more than 1000 users.

So many adherents of methodology X don't accurately scope in the tradeoffs and costs when advocating their system. Morning scrum meetings sacrifice people who have long (and therefore more unpredictable) commutes. People who like automated function comment headers don't count how much time people spend filling in those header boxes. People who like variable naming conventions don't spend enough time understanding what that does to the people who have to constantly change the way they name things. People who like full waterfall don't understand the morale-crushing cost of huge changes after large design/documentation pushes like that. I could go on and on.

TDD has costs. Those costs aren't necessarily worth it. Sometimes they are, sometimes they're not. The people who say "it saves you time" are full of crap. It may, sometimes, save some people some time. All the tests, if they're at all worthwhile, take time both to write and to maintain. I've been on projects which use it and projects which don't, and while there was a lower defect rate on certain types of bugs, those weren't always very important, nor was the time cost of writing the tests worth it, as those projects had several high-value features which couldn't be completed due to time.


I don't want to be dismissive of your point of view, but I take issue with your mode of argument. You've made a lot of assertions without evidence and done a lot of hand-waving. These are valid concerns but I think we need to get away from this mode of thinking and get to the details.

TDD makes pretty specific claims. Most of those are testable (at least to some extent). To assert that it's all relative is to dismiss the possibility that TDD advocates are onto something. As I outlined before, outright dismissal is arrogant and unprofessional. Please, back your assertions with evidence so they can be debated.

Edit: Removed a paragraph that diluted the point.


Made claims without evidence? I quoted the book with page number.

>(according to Code Complete 2, p. 470)

Go look up the evidence and studies there. Respected book, respected author vs random blogger.

Hand-waving? I quoted a book, and in a small portion of the post I talked about my personal experience on TDD and non-TDD projects.

TDD, like everything else, isn't a silver bullet. It's a tool, and one you're often perfectly justified in not using.


You talked about unit testing, which is not TDD. In fact, TDD specifically predicts that unit tests will not be as effective if they're not written before the code.

I would really like it if you expanded on your personal experience, because there are probably valid objections contained there.

I really don't want to come off as a religious defender of TDD. It seems to me like you've made a lot of arguments against claims I never made (that TDD is a silver bullet; that adherence to a "methodology" trumps economics and other practical concerns). All I'm saying is: we have specific claims backed by detailed data that might allow us to be better at what we do; let's look at those claims critically and figure out how to test their accuracy (by setting up similar experiments to produce data of similar rigor). I think that's a pretty conservative position. I'm not defending anything, even though I might be leaning a certain way in my evaluation of a hypothesis.


> If we're ignoring a practice that guarantees an improvement in the quality of our code and the speed of our delivery without a very good reason, then we are being unprofessional.

Looks like silver bullet reasoning there. It slays every problem, at least a little, and never makes anything worse.

I've done red-green TDD before on 10-35k LOC systems in a few high-level languages (C, C++, Python), using different automated testing frameworks along the way. I've even taught it as a TA in college (although a decade ago, when XP was all the rage). I was doing test-first unit testing before many of the existing frameworks existed.

You have supporting data: you have instances where it supports the conditional in your statement. But the studies are not done in situations where they can be found scientifically rigorous, only suggestive, and therefore my reaction is to the idea "we're unprofessional for not doing X", which implies a certainty far beyond what the idea is yet due.

I find the very assertion "a practice that guarantees an improvement in the quality of our code and the speed of our delivery" insulting and, ironically, very unprofessional, as it prioritizes advocacy of a technique over results. The tone of the OP is "TDD is like washing your hands before eating; you're paleolithic if you don't do it." He's calling those of us who don't always do it unprofessional.

I find the argument trying to get everyone to always do TDD very unpluralistic, another unprofessional tendency in my mind. There should be a heterogeneity of ideas and methods, as every situation is not the same, and with a more developed toolbox we can accomplish more by applying the right set of tools rather than rigidly applying the same few tools in every situation. While there are things that should almost always be used (source control), they are very few in number. I mean, before DVCS, source control use wasn't always something that made 100% sense (remote development was quite hard to factor in with, say, CVS or RCS without network connectivity).

TDD is horrid for some applications, especially UI-heavy ones. TDD has real costs. The idea that any technique doesn't have real costs is somewhat out there. Lots of TDD advocates espouse "it saves you more time than you spend." That's a pretty hard claim to prove, and suggestive studies don't prove it to a degree that makes institutional adoption of the system mandatory. I mean, even source control has real costs, and people rarely argue against its use anymore.

My issues with this discussion: the strength of adoption which you (or at least the OP) are advocating requires almost absurd levels of proof. It will literally require some hiring/firing to implement in organizations, as some people can't handle that amount of testing. I personally have been on projects with different teams doing TDD, and don't find it to have met the central claim of this "always should use" position, which is that it saves more time than it costs. I find it a mixed bag, and therefore not a requirement for professional conduct.

The amusing thing is: I USE TDD (for appropriate projects). I just find the idea that it's the second coming a bit absurd, and the way this is being asserted somewhat insulting. Most people think it has value, but the idea that it should be canon law is silly.


I'm sorry, I must not have been clear before. I'm not trying to assert or to prove that TDD "guarantees an improvement in the quality of our code and the speed of delivery" and I regret that it's come off like that. I'm saying that it's worth investigating with rigor, and I'm trying to steer this discussion towards specifics. As I implied in my first post, I disagree with the OP's opinions specifically because they stomp out the discussion I want to have here.

As you've implied, there are probably situations in which TDD is useful, and there are situations where TDD is detrimental. The discussion I want to have is to define and support, as precisely as possible, when it's good to use TDD and when it's not. This isn't a call to bring up objections that I can shoot down - I'm genuinely interested in knowing, because not knowing (or at least not having a very good idea) is unprofessional (in my opinion, obviously).

Anyone who disagrees that TDD is "a mixed bag" is unprofessional (as you pointed out - "No silver bullet!"). What I asserted originally was about what's in that bag. I asserted that not knowing is unprofessional, because then you're advocating or dismissing a practice without evidence.


Yeah, I didn't catch that earlier:

IMO, TDD shines when:

- You're doing completely data-oriented tests.

- You're doing things that have to work under changing (non-GUI) conditions without crashing.

- You're building embedded/system-level controllers which need to function under all inputs.

- You have a moderately data-intensive app and a moderately complex GUI action is required to set up edge cases.

- You have an automation framework at your disposal to push buttons and capture window contents.

- You can structure your project into a backend library and a front-end GUI using that library.

TDD lacks luster when:

- You have lots of GUIs and no automation framework.

- You have a poor definition of system behavior (proof-of-concept apps sometimes have this).

- You have an embedded target with limited debugger support.

- You have physical switches in your application.

- You have apps with lots of heterogeneously structured multi-threading (the more conditionally compiled/run code, the worse the heisenbugs get), especially with a strong GUI component.


"tested the old fashioned way"

Maybe a good follow-up question would have been "...or any other kind of automated testing?"

If there aren't any automated tests at all, I think there is a good chance the project is in trouble, or will be shortly.


That a flawed process has worked in the past is no reason to dismiss a newer, better process as 'religion'.


Nor does the emergence of a new design methodology, preferred by some programmers in some situations, in itself serve as a reason to dismiss older processes out of hand as "flawed" and the new process as "better".

There's a big difference between telling someone "the code you're delivering is not meeting objectively decided maintainability goals for this project" and micromanaging the personal workflow someone uses to achieve those goals (test first, test soon after, use TDD as your primary design methodology vs. use whatever design methodology has worked for you in the past).

Trying to do the latter, especially when it's done with a tone of superiority and "you're doing it wrong", I think sums up why TDD advocates often tend to rub people up the wrong way -- especially experienced programmers.


It is, however, a reason to dismiss arguments that everybody would have to use the newer process. Besides, it is not really clear whether "TDD" is superior to older processes.


"It is unprofessional to release code that you are not sure if it will work or not. Know that it works."

The cost of making software that you know works, for the formal definition of "know" [1], is very, very high. That level of quality is usually reserved for space shuttles and similar.

TDD makes software as good as the tests you write. No more, no less. It can be a great aid in producing better code, and better tested code, but in the end all software development is a trade-off between time spent and quality. You have to decide where the line for "good enough" is. TDD itself doesn't magically move this line to "perfect" without incurring a corresponding cost in development time.

[1] A knows B iff B is true and A believes B and A has good reason to believe B.


"Only 40% does Test Driven Development (TDD)? It means that 60% of the developers don’t know if their code works or not!"

First of all, 40% is way too high. Wonder what kinds of conferences they attend…

Also, even if you buy in totally to the agile premises, knowing whether your code works should be an effect of code coverage by unit tests; whether they're written up-front or not isn't an issue at all.

And finally: No, we're not "professional". This is Hacker News.


Most successful, foundational Unix software is developed on a mailing-list, with a changelog.txt file in the HEAD. Just saying.


I don't quite get the connection to TDD/Agile, but most current Unix projects actually have two files: one ChangeLog (most of them without the DOSism ".txt") and one NEWS. One is for the commits, the other for user-readable updates since the last release. I've always found that preferable to a mixed document, where you have to filter out the major changes (or can't see individual bug fixes).


The point I was trying to make is that big projects can evolve and be managed in a very simple manner, which is the case for many pieces of Unix software. The details regarding changelog nomenclature vary, and of course there may be additional plain text files which are still simply kept in the HEAD.


True. Although, to play devil's advocate, just because you can do projects in other ways doesn't mean that you couldn't do them better in Agile. (Don't get me wrong, I think that Unix projects have a better track record than Agile projects)

Personally I think the biggest benefit of "Agile" is that it's established enough to sell it to customers if you're a consultant. As no two Agile processes/teams look the same (hence the name), you've got a lot of freedom. If you came to a big company without that name brand recognition of "XP", you'd probably be forced to do it with ITIL, the Unified Process or other Godzilla-like monstrosities.

Having been there, I feel their pain.


"Be professional" would be a better aim.

In my opinion, the most important thing for code is for the developers to actually care about it. Whether you test first, test later, or just simplify the code until it's obviously correct [1], it really doesn't matter; just care enough to want bug-free software.

[1] "There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies. The first method is far more difficult." (Tony Hoare)


At the end of the day, TDD is but one tool in the toolbox. And as always, you have to use tools that fit the actual job in question, there's no golden hammer.


In response to the quote from Hoare:

"Beware of bugs in the above code; I have only proved it correct, not tried it."

- Don Knuth


> Only 40% does Test Driven Development (TDD)? It means that 60% of the developers don’t know if their code works or not!

What?! No. No, it doesn't mean that.


Plus it doesn't address the developers who are writing tests, but just not before they write their code.


Or the people who are doing code reviews, or many other ways shown to be more efficacious than unit tests at catching defects.


Does anybody actually do Test Driven Development for the web? Looking through the stack of features and bugs I've run through in the last couple of days, I can't find even one for which you could write a unit test.

- "Timeline ticks don't line up with grid when changing scale"

- "Resizing shapes is jerky in FireFox"

- "Need a button you can hit to pull up a simple Help screen"

- "Add an opacity slider for shapes"

Unit testing rendering correctness during mouse actions? Unit testing for UI features? Unit testing for CSS? Hacka please.

If you're doing little comp-sci type things on the server then sure you can do TDD. It makes sense in the context of building a string library or wrapping somebody else's API. In the web startup world though, how often are you actually doing that stuff?


I think it's pretty widely accepted that TDD is not a good fit for UI development. What is important is ensuring that there is a clean separation between the UI and any operational code, such that the operational code can be unit tested in isolation. Then frameworks such as Selenium can be used to focus on testing the functionality of the UI. But, as you indicate, automated testing of UI look-and-feel is not trivial.


"If you're doing little comp-sci type things on the server then sure you can do TDD."

Are you implying that your web site does not have any "little comp-sci type things on the server" aspects?

"Add an opacity slider for shapes"

This seems like the kind of thing that probably does have a little comp-sci part to it. Do you have an abstraction for shapes? Maybe a unit test for the API underlying the slider that validates that all selected shapes now have the opacity value passed from the slider?

Maybe that doesn't fit the design of your code, but I strongly suspect there is some unit-testable part of that task. The point being: unit-testable things should have unit tests, and most tasks might have some unit-testable part.
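
To make that concrete, here is a minimal sketch of the kind of test I mean, in Python with unittest. The Shape class and apply_opacity function are invented for illustration; the real app's abstractions would differ:

    import unittest

    class Shape:
        # Hypothetical shape abstraction standing in for the real model.
        def __init__(self):
            self.opacity = 1.0

    def apply_opacity(shapes, value):
        # The logic the slider would call into, kept free of any UI code.
        if not 0.0 <= value <= 1.0:
            raise ValueError("opacity must be between 0 and 1")
        for shape in shapes:
            shape.opacity = value

    class ApplyOpacityTest(unittest.TestCase):
        def test_sets_opacity_on_all_selected_shapes(self):
            selected = [Shape(), Shape(), Shape()]
            apply_opacity(selected, 0.5)
            self.assertTrue(all(s.opacity == 0.5 for s in selected))

        def test_rejects_out_of_range_values(self):
            with self.assertRaises(ValueError):
                apply_opacity([Shape()], 1.5)

    if __name__ == "__main__":
        unittest.main()

The slider widget itself stays untested here; the point is that the state change it triggers doesn't.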


You're talking about a lot of UI stuff here, which is certainly testable, albeit taking some cleverness, but it's not an excuse for not doing backend logic testing.


TDD doesn't necessarily mean unit tests. We write tons of functional integration tests to make sure that stuff like you describe works, using Selenium, YUITest, and the like. It's extremely valuable, since the alternative for us is manually testing every single feature on the site anytime something changes.
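
For illustration, a functional test along those lines using Selenium's Python bindings might look roughly like the sketch below. The URL and element IDs are made up; a real suite would target the actual app and needs a browser driver available:

    import unittest
    from selenium import webdriver
    from selenium.webdriver.common.by import By

    class HelpButtonTest(unittest.TestCase):
        # Sketch only: "help-button" and "help-screen" are invented IDs.
        def setUp(self):
            self.driver = webdriver.Firefox()
            self.driver.get("http://localhost:8000/editor")

        def test_help_button_opens_help_screen(self):
            self.driver.find_element(By.ID, "help-button").click()
            help_screen = self.driver.find_element(By.ID, "help-screen")
            self.assertTrue(help_screen.is_displayed())

        def tearDown(self):
            self.driver.quit()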


How much time would you say you spend writing those tests? Maintaining them? Just capturing a complex UI interaction that you could verify at a glance in person is next to impossible with the tools you mention, unless they've made some serious progress in the last few months.

For the issues I listed above, how many of them do you actually consider to be testable, using TDD's "write your tests first" definition of the word?


Some points:

- True TDD requires 100% test coverage of the code the developer using TDD has written or altered. I've seen reports from companies or organizations claiming they do TDD that aim for at least 60%. That isn't true TDD, in the Kent Beck definition of the term. That's not to say these people are wrong; in fact, because they have a better balance between test code and production code, they can change functionality more easily.

- Many developers attempting to practice TDD don't write the tests well enough to cover every possible argument/configuration for the various methods they were testing. That's not TDD.

- Many developers attempting to practice TDD won't test trivial methods like getters and setters. That's not TDD.

- Many developers attempting to practice TDD write tests that duplicate the parts of the code being tested. While there are valid reasons for this, it results in frustration for the developer who needs to change 20 tests just to change one bit of functionality. Avoiding this can take an enormous amount of discipline and time, ensuring proper modularization, mocking, and refactoring of both production code and tests.

So, why do people claim to be doing TDD who aren't? Because there is no good term that everyone knows for writing tests before code only some of the time, duplicating functionality across tests, and leaving some functionality untested.

In some ways, writing and maintaining tests is a lot like auto maintenance. TDD is like keeping your car in pristine condition: it's super-shiny because it's always just-washed and sparkly, and it runs like a champ. But in the end, the primary purpose of the car is to get you from point A to point B.


> Many developers attempting to practice TDD don't write the tests well enough to cover every possible argument/configuration for the various methods they were testing. That's not TDD.

That's ridiculous and impractical. What if my function fails for values of t >= 946684800?


If you have a conditional or some other piece of code that would act differently for t >= 946684800, then you'd need a test for it for it to be true TDD. If you don't have a conditional or different behavior that would occur with a value that large, then you don't need a test, per TDD. However, if you find that values of t >= 946684800 cause a bug, then per TDD you'd write a failing test for it first, then write the logic to handle it.
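
Sketched in Python, that bug-fix loop might look like this (format_timestamp and its behaviour are invented for illustration; 946684800 is 2000-01-01 00:00:00 UTC):

    import unittest

    def format_timestamp(t):
        # Per TDD, the failing test below is written first; this branch
        # is then added to make it pass.
        if t >= 946684800:
            return "2000s: %d" % t
        return "1900s: %d" % t

    class FormatTimestampTest(unittest.TestCase):
        def test_handles_post_2000_timestamps(self):
            # Written first, observed to fail against the old code, then
            # the branch above was added until it passed.
            self.assertEqual(format_timestamp(946684800), "2000s: 946684800")

        def test_handles_pre_2000_timestamps(self):
            self.assertEqual(format_timestamp(946684799), "1900s: 946684799")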

I'm not promoting real TDD, btw. Real TDD is fine as an ideal, is achievable in many circumstances, and may even make perfect sense on an ongoing basis in some environments, but it isn't practiced to the extent that the OP (in the linked post) stated, and developers who heartily promote TDD usually don't fully understand its implications when practiced fully, or what doing true TDD really involves. I'm not against writing tests, but at some point you need to relax.


Just a point - having tests in your code doesn't mean you're doing TDD.

Doing TDD - that is, writing tests up-front - forces you not only to consider the correctness criteria for your code (and gives you the confidence that the code, once written, works correctly), but also to design your code for test, which enforces separation between components (so that they're individually testable), generation of sane interfaces (so that simple mock objects can be written), and so forth. It's not just about making code that you can be confident in; it's about making saner code for the long term, too.
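
A minimal sketch of that "design for test" effect, in Python with unittest.mock (ReportSender and its mailer are invented names): writing the test first pushes you to pass the dependency in, which is exactly the seam the mock needs.

    import unittest
    from unittest import mock

    class ReportSender:
        # Taking the mailer as a constructor argument, rather than
        # constructing it internally, is the seam test-first design forces.
        def __init__(self, mailer):
            self.mailer = mailer

        def send_daily_report(self, body):
            if not body:
                return False
            self.mailer.send(to="team@example.com", body=body)
            return True

    class ReportSenderTest(unittest.TestCase):
        def test_sends_nonempty_report_through_mailer(self):
            mailer = mock.Mock()
            self.assertTrue(ReportSender(mailer).send_daily_report("all green"))
            mailer.send.assert_called_once_with(to="team@example.com",
                                                body="all green")

        def test_skips_empty_report(self):
            mailer = mock.Mock()
            self.assertFalse(ReportSender(mailer).send_daily_report(""))
            mailer.send.assert_not_called()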


Yes. This article seems to confuse TDD with having tests. It's talking about having tests, but referring to TDD. You can have a solid set of unit tests even if you didn't do TDD.



Hi all, I am the author of the blog post.

It seems that the blog post got attention before it was properly reviewed. Damn, you guys are fast :-)

The idea was not to push TDD or promote it in any way. It was a rant at devs who don't write unit tests: someone else will one day need to fix their untested code without knowing whether changing something in X will break Y and Z.

Even the title of the post has been changed now. Seems to be too late, though.

By the way, your comments are great, they gave me many thoughts.


I'm working on a huge legacy spaghetti codebase with no option to write effective integration tests. Testing that my method returns some correct string/int etc. will do nothing for me; the whole system is huge, with many integration points, no infrastructure for writing integration tests, and no time to build it. What should I do? (I love TDD, and tests in general.)


I've been in a similar situation in the past. The approach I took was to work gradually and try to clean the most heavily-used mess bit-by-bit, under the radar if necessary.

For instance, take the particular long, messy function that you happen to be working with right now, then write a bunch of tests for that function. Test that invariants are maintained, bad arguments or pre-conditions are handled, post-conditions are honoured and so on. You don't even have to know whether what you are testing is correct behaviour at this point. Just get as much test coverage as you practically are able to.

Then you can start TDDing smaller functions that are made up of bits and pieces from the big, ugly beast-function. You can develop and test these in complete isolation from the running code.

Finally you start replacing bits of the ugly beast with calls to your new clean functions, ensuring that you don't break any of those tests you wrote at the start. Over a period of time, you will gradually build up a set of unit tests for the most important parts of the system and you will hopefully gain enough confidence to refactor it into something more maintainable. At least, that's the theory, anyway. :)
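
A characterization test in that spirit might look like the sketch below (the function and its pinned values are placeholders; in practice you'd import the real legacy code and record whatever it actually returns today, correct or not):

    import unittest

    def calculate_invoice_total(items, tax):
        # Stand-in for the messy legacy function you'd really import.
        return round(sum(items) * (1 + tax), 2)

    class InvoiceTotalCharacterizationTest(unittest.TestCase):
        # Expected values captured by running the existing code once,
        # not derived from a spec; they pin down current behaviour so
        # refactoring can't silently change it.
        def test_current_output_for_typical_input(self):
            self.assertEqual(calculate_invoice_total([10.0, 20.0], tax=0.2), 36.0)

        def test_current_behaviour_for_empty_input(self):
            self.assertEqual(calculate_invoice_total([], tax=0.2), 0.0)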

I would also strongly recommend reading Michael Feathers' excellent book, Working Effectively With Legacy Code (ISBN: 978-0-13-117705-5). It is the bible for this kind of work.


This is actually the type of system (especially if the code quality is very rough in many places) where I think regression tests are very useful (tests to make sure the system's behavior doesn't change).

A book called Working Effectively with Legacy Code by Feathers is great for instrumenting and regression-testing old code bases, then changing them without breaking them.

Non-aff link http://www.amazon.com/Working-Effectively-Legacy-Michael-Fea...


You start. That's all. Just... start.

Is one unit test doing anything? Yes. Absolutely. It's giving you one more unit test than you had to begin with.

Once you have a test, you can write another test. You can refactor a method (And write tests). You can then refactor the calling methods. And so on.

"It will be a mammoth task" is only a good reason not to do something when it's either discardable in the near future (And not "When we start on V2!"), or if it's the canonical 9-woman-1-baby situation. If partial improvements give value, do them.


You can make things better if you are prepared to make the effort; the politics can sometimes be harder than the technical work.

It might be worth seeking out a copy of Working Effectively with Legacy Code by Michael C. Feathers.


thanks!!


Be professional, don't push your religion on other people.



