I think the difficulties I've personally had with embracing TDD come down to a slightly different set of experiences:
TDD: Spend an hour writing the test (sometimes more if I have to figure out how to properly work with a third-party library). Spend 10 minutes writing the code. Decide a few days later that I need to rework the API a bit on the feature, so spend another hour rewriting tests and 5 minutes writing new production code.
No testing: Write the feature, 20 minutes. "Test" while writing it (refresh a web page or run something at the command line). Realize a few days later I need to rework things, spend 10 minutes doing so. In the rare instances where something breaks, spend 20 or 30 minutes tracking it down and fixing it.
Don't read this as an argument against TDD. It's more that I have a very hard time actually realizing an increase in productivity or code quality when using it. TDD-backed code generally takes me longer to write and pretty much nobody on the business team I report to notices any difference whatsoever, except that feature X took a week longer than it probably would have otherwise. The few bugs that do creep into production code are usually dealt with promptly, and I'm not sure TDD would result in entirely bug-free code either way.
Anyhow, that's why this conversation has been quite valuable to me. I am convinced that if I can clear away real-world obstacles in TDD I can do a better job of embracing it and realizing the productivity benefits that everyone always crows about. For now, my personal experience is that it's a means to marginally improve code quality at a high cost of time.
I think you're probably not taking into account the amortized cost of TDD versus what you're doing now: how much testing is really happening when you "refresh a web page"? How many times do you do that? How confident are you when you make a change to your code base that you don't have to go back and manually perform all those tests you were already doing? All that refreshing takes a lot of time!
I write unit tests so I don't have to refresh web pages all day. If I do find myself going to the command line to test something, I figure out what I'm trying to test and write a unit test instead; it's something I missed when writing the upfront tests. Then I can run it again, and again, and again, and use continuous integration so it runs those tests for me over and over.
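To make the workflow concrete, here's a minimal sketch of turning a "check it by hand at the command line" habit into a unit test, using Ruby's Minitest. The `slugify` helper is hypothetical, invented purely for illustration:

```ruby
require "minitest/autorun"

# Hypothetical helper we'd otherwise eyeball from the command line.
def slugify(title)
  title.downcase.strip
       .gsub(/[^a-z0-9]+/, "-")  # collapse runs of non-alphanumerics into "-"
       .gsub(/\A-|-\z/, "")      # trim leading/trailing hyphens
end

class SlugifyTest < Minitest::Test
  def test_spaces_become_hyphens
    assert_equal "hello-world", slugify("Hello World")
  end

  def test_punctuation_is_dropped
    assert_equal "what-s-new", slugify("What's New?")
  end
end
```

Once this exists, re-checking the behavior after a refactor is one `ruby` invocation (or free, under CI) instead of another round of manual poking.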
If you do a lot of web dev, Selenium is a really good way of testing web page features.
I have the same problem as the parent and this is exactly why. I've always thoroughly tested everything but with ad hoc tests that I had to do over and over.
The main thing I miss with TDD is the interactive nature. If I'm testing in Smalltalk I can write code right inside the debugger and watch the effect it has, but TDD always moves me back to what Smalltalkers call "cult of the dead" programming where I have to stop, run the tests and wait for the output. I wish there was a way to make it more interactive. It would be easier to force myself to do it then.
There's an awesome Ruby library called 'autospec' that watches for files being saved, then automatically runs your test suite in the background and gives you a Growl notification of whether they passed or failed.
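The same save-and-rerun workflow can be wired up with the `guard` gem; a minimal sketch of a `Guardfile`, assuming the `guard` and `guard-minitest` gems are installed and the conventional `lib/` and `test/` layout (the paths here are assumptions, not anything from the thread):

```ruby
# Guardfile -- re-run the relevant tests whenever a file is saved.
# Notifications go through whatever notifier Guard finds on the system
# (Growl, libnotify, etc.).
guard :minitest do
  watch(%r{^test/(.*)_test\.rb$})                          # a test file changed: run it
  watch(%r{^lib/(.*)\.rb$}) { |m| "test/#{m[1]}_test.rb" } # source changed: run its test
end
```

Running `guard` in a terminal then keeps the red/green feedback loop going in the background while you edit.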
> nobody on the business team I report to notices any difference whatsoever, except that feature X took a week longer than it probably would have otherwise
This. Also, "refactoring" is almost a dirty word here, so we try to keep it to ourselves and sneak bits of refactoring into bigger features.
Testing is a skill like any other. When you started programming, I'm sure you were quite slow at it, as well. As you level up your testing skill, the time it takes to write tests drops down. At first, it's really slow going, though, I agree.
I also have the pleasure of writing 90% of my code in Ruby, which has top notch testing support basically everywhere.
I add code that does the "run something at the CL and print the result" type work to a file under t/. Then, when I'm happy with the output, I change the print into an assertion and move on.
This way manual testing becomes automated testing with very minimal additional effort.
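A sketch of that print-to-assertion conversion, again in Ruby with Minitest; the `parse_price` function and the file layout are hypothetical stand-ins for whatever you were poking at by hand:

```ruby
require "minitest/autorun"

# Hypothetical function we'd otherwise check from the command line.
def parse_price(str)
  str.delete("$,").to_f
end

# Step 1: while exploring, the throwaway script under t/ just prints:
#
#   puts parse_price("$1,234.50")   # eyeball the output
#
# Step 2: once the output looks right, the print becomes an assertion
# in the same file, and the manual check is now a permanent test:
class ParsePriceTest < Minitest::Test
  def test_strips_currency_formatting
    assert_equal 1234.5, parse_price("$1,234.50")
  end
end
```

The nice property is that the exploratory work isn't thrown away; the value you just verified by eye gets frozen into the expected value of the assertion.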