I have seen TDD done successfully at a few companies. I think it's one of those things you really have to learn by doing it with people who already know it. I can well believe that there are people who read a blog post, started doing it, and got terrible results.
It also requires quite a bit of discipline, or commitment, or conscientiousness. It's worth it, but you have to be on the ball. It helps a lot if everyone on the team is experienced in TDD and positive about it, because you can all support each other.
It requires ongoing, if intermittent, investment in building test infrastructure, updating older tests, and so on. Often not a lot, and you can do it as you go. But sometimes, particularly for integration tests, you have to sit down and bash out some sort of framework to make testing tractable at all.
And it's possible it doesn't work everywhere. In my current job, I write a lot of applied maths code, and I haven't worked out how to test-drive that, because I generally don't know what the result should be before I start (I wouldn't need to write it if I did!). Sometimes I can make relative assertions ("the antimatter consumption at warp six should be eight times the antimatter consumption at warp three"), and sometimes I can calculate rough bounds by hand ("the antimatter consumption at warp one should be within 20% of the inverse relativistic mass"). But mostly, I implement something, then run it on a lot of data, plot the output, and decide if it looks roughly right.
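For what it's worth, those relative assertions and hand-calculated bounds translate pretty directly into tests. A minimal sketch with pytest; `antimatter_consumption` and `inverse_relativistic_mass` are made-up names standing in for whatever the real code exposes, and the tolerances are illustrative:

```python
import pytest

# Hypothetical functions under test; names and physics are for illustration only.
from warpcore import antimatter_consumption, inverse_relativistic_mass


def test_consumption_scales_with_warp_factor():
    # Relative assertion: warp six should burn eight times what warp three does,
    # within 1% to allow for numerical noise.
    assert antimatter_consumption(warp=6) == pytest.approx(
        8 * antimatter_consumption(warp=3), rel=0.01
    )


def test_consumption_near_hand_calculated_bound():
    # Rough bound worked out by hand: within 20% of the inverse relativistic mass.
    assert antimatter_consumption(warp=1) == pytest.approx(
        inverse_relativistic_mass(warp=1), rel=0.20
    )
```

The point is that even when you can't predict exact outputs, invariants like scaling relationships and tolerance bounds are still checkable up front.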
You're right, I didn't 'get it' until I worked at a place that was all TDD and I was surrounded by people who took it seriously. It wasn't about 100% coverage or any forced mechanics. It was simply a group of people who were all on the same page, working with the same methodology. Almost a 'cult' of programmers.
> i haven't worked out how to test-drive that
That's exactly it. I don't always see tests as being necessary for greenfield code; you don't have to test-drive it. But you should write tests. Once you've figured it out and have things in a place where you're comfortable, write a test, so that when you go back and make a change later, anything you break will fail the tests, and you can be confident about change over time.
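One common way to capture that "change over time" confidence is a characterization (or golden) test: once the output has been eyeballed and accepted, pin it down so future changes break loudly. A rough sketch, assuming a hypothetical `simulate` function and a golden file generated once by hand:

```python
import json
import pathlib

import pytest

from model import simulate  # hypothetical module under test

# Golden file holding output we previously inspected (plotted, sanity-checked)
# and decided to accept.
GOLDEN = pathlib.Path(__file__).parent / "golden" / "simulate_output.json"


def test_simulate_matches_accepted_output():
    result = simulate(steps=100, seed=42)
    expected = json.loads(GOLDEN.read_text())
    # Compare with a small relative tolerance so harmless floating-point
    # drift doesn't fail the test, but real behavioural changes do.
    assert result == pytest.approx(expected, rel=1e-9)
```

The test doesn't claim the output is correct, only that it's the output you already decided was acceptable; any change to it forces a conscious re-acceptance.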