(not the OP, just my two cents)

(1) is the distinguishing characteristic of TDD, but it is not an end in itself; it's merely a means to achieve (2). You write the test after you have the API design but before you have the implementation, and thus supposedly decouple your test from the implementation, since the implementation doesn't exist yet.
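
To make (1) concrete, here's roughly what the "red" phase looks like. This is a made-up sketch; the ShoppingCart API and file names are invented for illustration:

    # test_cart.py -- written before cart.py exists ("red" phase)
    from cart import ShoppingCart  # import fails until the class is written

    def test_total_sums_item_prices():
        cart = ShoppingCart()
        cart.add_item("apple", price=100, quantity=2)
        cart.add_item("pear", price=50, quantity=1)
        # The assertion pins down the API contract, not the implementation.
        assert cart.total() == 250

You run it, watch it fail, then implement ShoppingCart until it passes.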

This approach never sat right with me.

* It makes design iteration more costly: if you want to change the API design while you're implementing the thing, you now have to rewrite a bunch of tests to use the new API, and at that point you don't even have an implementation to test yet! (See the first sketch after this list.) This subconsciously pushes you toward quick-and-dirty design patches instead of bigger but better changes to the design. So, statistically, you're trading potentially better design for potentially better tests. I'd rather have the former than the latter, because bad design is far more insidious than weak tests.

* Just as TDD encourages writing tests for behavior instead of implementation, it also encourages treating tests as the sole arbiter of code correctness. The supposedly better quality of the tests is offset by over-reliance on them, including by never adding tests for the weak points of your actual implementation (because you already wrote all the necessary tests before you came up with it... right?). I've worked with some TDD-loving developers who didn't even feel the need to run the code they wrote on their dev box, in the application context: tests pass, therefore straight to production. (The second sketch after this list shows how that goes wrong.) The bugs caused by this attitude offset all the other advantages of TDD.

* TDD pretty much relies on 100% unit test coverage, because if you (supposedly) haven't thought about what the implementation will be, you don't know which methods the bugs are likely to live in. This makes you over-invest in writing unit tests, and writing and maintaining tests is not free. The risk of bugs is not uniform across the code, and neither is their cost (see the third sketch after this list). Not to mention that some code is better tested at a higher level, with integration tests or end-to-end tests, but IME nobody takes TDD there, probably because it's too annoying.

* Overall, TDD seems like an overly elaborate way to proactively "gotcha" yourself with various unhappy-path edge cases before even thinking about the implementation, when that very thinking would expose more of those edge cases. Instead of this backwards, overhead-heavy process, I find the usual development process works just fine: start with the big picture, and leave the small details for last. TDD, by contrast, requires you to bring up all the small details and edge cases upfront (otherwise it's indistinguishable from regular testing).
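
First sketch, the design-churn cost: suppose that halfway through implementing the (still hypothetical) ShoppingCart above, you realize prices belong in a catalog rather than being passed on every call. Before you can even try the new design, every test written against the old API has to be rewritten:

    # Old API, already baked into a dozen tests:
    #     cart.add_item("apple", price=100, quantity=2)
    # The new design moves prices into a catalog, so each test's
    # setup must be rewritten before the design can even be evaluated:
    from cart import Catalog, ShoppingCart  # both still hypothetical

    def test_total_sums_item_prices():
        catalog = Catalog({"apple": 100, "pear": 50})
        cart = ShoppingCart(catalog)
        cart.add_item("apple", quantity=2)
        cart.add_item("pear", quantity=1)
        assert cart.total() == 250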
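
Second sketch, a green suite that lies: the unit test below mocks a dependency away and passes. The names are invented, but the failure mode is the one I described:

    from unittest.mock import Mock
    from users import UserService  # hypothetical service under test

    def test_load_returns_user_name():
        db = Mock()
        # The mock returns a dict, so the test passes...
        db.fetch_user.return_value = {"id": 1, "name": "Ada"}
        service = UserService(db)
        assert service.load(1)["name"] == "Ada"

    # ...but if the real driver returns a row tuple instead of a dict,
    # service.load() crashes the first time it runs in production.
    # Only running the code in the application context catches this.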
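
Third sketch, what chasing 100% coverage buys you: a coverage-driven suite fills up with tests like this made-up one, which can't realistically break yet still costs time to write and maintain, while the genuinely risky code (say, retry logic around a flaky payment API) needs an integration test that TDD-as-practiced rarely produces:

    class User:
        def __init__(self, name):
            self.name = name

    # Exists mainly to keep the coverage number at 100%:
    def test_user_name_getter():
        assert User("Ada").name == "Ada"  # near-zero risk, nonzero upkeep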

As with most ideologies, there are some good high-level principles to take away from TDD, but at the end of the day, ideologies like this are designed to replace unreliable thinking with reliable process. And while that process may indeed reliably produce some kind of consistent output, that output isn't necessarily better than what you'd get without the process, or with a different one.


