This is actually one of the bigger 'problems' with {T,B}DD; it's much harder to figure out how much time it takes to complete things.
BDD: Wrote the test, wrote the feature, checked it in, wall clock time, 1 hour. Next week, something unrelated changes, my tests show right where the problem is, 10 minute fix. Total time: 1:10.
No testing: wrote the feature, half an hour. Next week, something unrelated changes, breaks the feature, spend an hour fixing it. Total time: 1:30. And keep adding time when other things break later, too...
In projects with high test coverage, I almost never spend any time debugging. That doesn't mean it's easy to recognize the saved time, though.
I think the difficulties I've personally had with embracing TDD come down to a slightly different set of experiences:
TDD: Spend an hour writing the test (sometimes more if I have to figure out how to properly work with a third-party library). Spend 10 minutes writing the code. Decide a few days later that I need to rework the API a bit on the feature, so spend another hour rewriting tests and 5 minutes writing new production code.
No testing: Write the feature, 20 minutes. "Test" while writing it (refresh a web page or run something at the CL). Realize a few days later I need to rework things, spend 10 minutes doing so. In the rare instances where something breaks, spend 20 or 30 minutes tracking it down and fixing it.
Don't read this as an argument against TDD. It's more that I have a very hard time actually realizing an increase in productivity or code quality when using it. TDD-backed code generally takes me longer to write and pretty much nobody on the business team I report to notices any difference whatsoever, except that feature X took a week longer than it probably would have otherwise. The few bugs that do creep into production code are usually dealt with promptly, and I'm not sure TDD would result in entirely bug-free code either way.
Anyhow, that's why this conversation has been quite valuable to me. I am convinced that if I can clear away the real-world obstacles to TDD, I can do a better job of embracing it and realizing the productivity benefits that everyone always crows about. For now, my personal experience is that it's a means to marginally improve code quality at a high cost in time.
I think you're probably not taking into account the amortized cost of TDD versus what you're doing now: how much testing is really happening when you "refresh a web page"? How many times do you do that? How confident are you when you make a change to your code base that you don't have to go back and manually perform all those tests you were already doing? All that refreshing takes a lot of time!
I write unit tests so I don't have to refresh web pages all day. If I do find myself going to the command line to test something, I figure out what I'm actually trying to test and write a unit test instead; it's something I missed when writing the tests up front. Then I can run it again, and again, and again, and use continuous integration so it runs those tests for me over and over.
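To make that concrete, here's a minimal sketch of the kind of spec that replaces the page refresh; the format_price helper and its expected output are invented for the example, and rspec/autorun just makes the file runnable on its own:

    require "rspec/autorun"

    # Hypothetical helper: the sort of thing you'd otherwise verify by
    # refreshing a page and eyeballing the rendered output.
    def format_price(cents)
      format("$%.2f", cents / 100.0)
    end

    RSpec.describe "format_price" do
      it "renders cents as a dollar string" do
        expect(format_price(1999)).to eq("$19.99")
      end

      it "handles zero" do
        expect(format_price(0)).to eq("$0.00")
      end
    end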
If you do a lot of web dev, Selenium is a really good way of testing web page features.
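For instance, a login-flow check with the selenium-webdriver gem might look roughly like this (the URL, field names, and page title are all made up for the sketch):

    require "selenium-webdriver"

    # Drive a real browser through the flow instead of clicking through it by hand.
    driver = Selenium::WebDriver.for :firefox
    driver.navigate.to "http://localhost:3000/login"          # hypothetical app URL
    driver.find_element(:name, "username").send_keys "demo"   # hypothetical field names
    driver.find_element(:name, "password").send_keys "secret"
    driver.find_element(:css, "input[type='submit']").click

    raise "login failed" unless driver.title.include?("Dashboard")
    driver.quit

Wrap the same steps in your test framework of choice and the click-through becomes just another test in the suite.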
I have the same problem as the parent, and this is exactly why. I've always tested everything thoroughly, but with ad hoc tests that I had to repeat over and over.
The main thing I miss with TDD is the interactive nature. If I'm testing in Smalltalk, I can write code right inside the debugger and watch the effect it has, but TDD always moves me back to what Smalltalkers call "cult of the dead" programming, where I have to stop, run the tests, and wait for the output. I wish there were a way to make it more interactive; it would be easier to force myself to do it then.
There's an awesome Ruby library called 'autospec' that watches for files being saved, automatically runs your test suite in the background, and gives you a Growl notification telling you whether the tests passed or failed...
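If I remember right, the notification piece comes from a separate gem rather than from autospec itself; a minimal sketch, assuming ZenTest's autotest/autospec plus the autotest-growl gem:

    # ~/.autotest -- assumes the autotest-growl gem is installed
    require 'autotest/growl'

Then you just leave 'autospec' running in a terminal; whenever a watched file is saved it reruns the relevant specs and pops up a notification with the result.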
> nobody on the business team I report to notices any difference whatsoever, except that feature X took a week longer than it probably would have otherwise
This. Also, refactoring is almost a dirty word, so we try to keep it to ourselves and sneak bits of refactoring in when implementing bigger features.
Testing is a skill like any other. When you started programming, I'm sure you were quite slow at it, as well. As you level up your testing skill, the time it takes to write tests drops down. At first, it's really slow going, though, I agree.
I also have the pleasure of writing 90% of my code in Ruby, which has top notch testing support basically everywhere.
I put the code that does the "run something at the CL and print the result" type work into a file under t/. Then, when I'm happy, I change the print into a test and move on.
This way, manual testing becomes automated testing with very little additional effort.
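In Ruby/Minitest terms the conversion looks roughly like this (the slugify function and its expected values are made up for the sketch):

    require "minitest/autorun"

    # Hypothetical function standing in for whatever the scratch script was poking at.
    def slugify(title)
      title.downcase.strip.gsub(/[^a-z0-9]+/, "-").gsub(/\A-|-\z/, "")
    end

    # Step 1, manual: a scratch file that just prints, so I can eyeball the output:
    #   puts slugify("  Hello, World!  ")
    #
    # Step 2, automated: once the printed value looks right, freeze it into an assertion.
    class SlugifyTest < Minitest::Test
      def test_strips_punctuation_and_whitespace
        assert_equal "hello-world", slugify("  Hello, World!  ")
      end
    end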
This might be my own personal failing, but those tests will never get written. Once the feature works, the temptation to move onto the next thing is just too great.
Also, by writing the test first, you ensure that the code is actually easy to test. This gets easier as you get more experienced with testing, but still.
Also, writing the test first helps in identifying a sane API; I personally consider this to be the single greatest advantage of TDD.
I don't write as many tests as you do, though, Steve. I generally stop once I have the nominal test cases in place; I don't test edge cases. They tend to get hit by upstream unit tests anyway (I don't mock except where it is absolutely necessary). If I do hit a difficult-to-find bug, I use tests as one of my main debugging tools, writing tests for any code I have doubts about. By the end of a project I generally have fairly high test coverage, but I never feel like I'm writing tests just for the methodology: they're either testing the nominal case or verifying behaviour when tracking down a bug.
I totally forgot about this, but I agree 100%. I'm a big fan of "Write this as though the underlying code exists, then fill it out" for API design.
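A minimal sketch of that flow (ReportMailer and its method are invented for the example; the spec is deliberately written before the class exists):

    require "rspec/autorun"

    # Written first, against code that doesn't exist yet: the spec is where the
    # constructor arguments and the method name get designed.
    RSpec.describe "ReportMailer" do
      it "delivers the weekly summary to every subscriber" do
        mailer = ReportMailer.new(subscribers: ["a@example.com", "b@example.com"])
        expect(mailer.deliver_weekly_summary).to eq(2)
      end
    end

    # Only afterwards is the class filled out to satisfy the spec.
    class ReportMailer
      def initialize(subscribers:)
        @subscribers = subscribers
      end

      def deliver_weekly_summary
        # Real delivery would go here; the sketch just reports the count.
        @subscribers.size
      end
    end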
I don't write as many tests as you'd think I do, actually, because I heavily favor integration tests over unit-style tests, and so we're probably much closer in that regard than you think. ;)