I don't want to be dismissive of your point of view, but I take issue with your mode of argument. You've made a lot of assertions without evidence and done a lot of hand-waving. These may well be valid concerns, but I think we need to get away from this mode of arguing and get into the details.
TDD makes pretty specific claims. Most of those are testable (at least to some extent). To assert that it's all relative is to dismiss the possibility that TDD advocates are onto something. As I outlined before, outright dismissal is arrogant and unprofessional. Please, back your assertions with evidence so they can be debated.
Edit: Removed a paragraph that diluted the point.
You talked about unit testing, which is not TDD. In fact, TDD specifically predicts that unit tests will not be as effective if they're not written before the code.
I would really like it if you expanded on your personal experience, because there are probably valid objections contained there.
I really don't want to come off as a religious defender of TDD. It seems to me like you've made a lot of arguments against claims I never made (that TDD is a silver bullet; that adherence to a "methodology" trumps economics and other practical concerns). All I'm saying is: We have specific claims backed by detailed data that might allow us to be better at what we do; Let's look at those claims critically and figure out how to test their accuracy (by setting up similar experiments to produce data of similar rigor). I think that's a pretty conservative position. I'm not defending anything, even though I might be leaning a certain way in my evaluation of a hypothesis.
> If we're ignoring a practice that guarantees an improvement in the quality of our code and the speed of our delivery without a very good reason, then we are being unprofessional.
Looks like silver bullet reasoning there. It slays every problem, at least a little, and never makes anything worse.
I've done red-green TDD before on 10-35k LOC systems in a few languages (C, C++, Python), using different automated testing frameworks along the way. I've even taught it as a TA in college (although that was a decade ago, when XP was all the rage). I was doing test-first unit testing before many of the existing frameworks existed.
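(To be concrete about what I mean by red-green, here's a minimal sketch in Python using the standard unittest module; the slugify function is an invented example, not taken from any of the systems above.)

```python
import unittest

# Red: write the test first and watch it fail (slugify doesn't exist yet).
class TestSlugify(unittest.TestCase):
    def test_lowercases_and_hyphenates(self):
        self.assertEqual(slugify("Hello World"), "hello-world")

    def test_strips_surrounding_whitespace(self):
        self.assertEqual(slugify("  Hello  "), "hello")

# Green: write just enough code to make the tests pass.
def slugify(text):
    return "-".join(text.strip().lower().split())

# Refactor: clean up with the tests as a safety net, then repeat the cycle.
if __name__ == "__main__":
    unittest.main()
```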
You have supporting data, in the sense that you have instances where it supports the conditional in your statement. But the studies weren't done under conditions rigorous enough to be scientific; they're only suggestive. So my reaction is to the idea that "we're unprofessional for not doing X", which implies a certainty far beyond what the idea is yet due.
I find the very assertion "a practice that guarantees an improvement in the quality of our code and the speed of our delivery" insulting and ironically very unprofessional, as it prioritizes advocacy of a technique over results. The tone of the OP is "TDD is like washing your hands before eating; You're paleolithic if you don't do it". He's calling those of us who don't always do it unprofessional.
I find the argument that everyone should always do TDD very unpluralistic, another unprofessional tendency in my mind. There should be a heterogeneity of ideas and methods: not every situation is the same, and a better-developed toolbox lets us accomplish more by picking the right tools than by rigidly applying the same few tools to every situation. There are things that should almost always be used (source control), but they are very few in number. Even there, before DVCS, source control use didn't always make 100% sense (remote development without network connectivity was quite hard to factor in with, say, CVS or RCS).
TDD is horrid for some applications, especially UI-heavy ones. TDD has real costs. The idea that any technique doesn't have real costs is somewhat out there. Lots of TDD advocates espouse "it saves you more time than you spend". That's a pretty hard claim to prove, and suggestive studies don't prove it to a degree that justifies making institutional adoption mandatory. I mean, even source control has real costs, and people rarely argue against its use anymore.
My issue with this discussion: the strength of adoption you (or at least the OP) are advocating requires almost absurd levels of proof. It would literally require some hiring and firing to implement in organizations, as some people can't handle that amount of testing. I've personally been on projects with different teams doing TDD, and I don't find it to have met the central claim of this "should always use it" position, which is that it saves more time than it costs. I find it a mixed bag, and therefore not a requirement for professional conduct.
The amusing thing is: I USE TDD (for appropriate projects). I just find the idea that it's the second coming a bit absurd, and the way this is being asserted somewhat insulting. Most people think it has value, but the idea that it should be canon law is silly.
I'm sorry, I must not have been clear before. I'm not trying to assert or to prove that TDD "guarantees an improvement in the quality of our code and the speed of our delivery", and I regret that it's come off that way. I'm saying that it's worth investigating with rigor, and I'm trying to steer this discussion towards specifics. As I implied in my first post, I disagree with the OP's opinions specifically because they stomp out the discussion I want to have here.
As you've implied, there are probably situations in which TDD is useful. There are situations where TDD is detrimental. The discussion I want to have is about defining and supporting, as precisely as possible, when it's good to use TDD and when it isn't. This isn't a call to bring up objections that I can shoot down - I'm genuinely interested in knowing, because not knowing (or at least not having a very good idea) is unprofessional (in my opinion, obviously).
Anyone who disagrees that TDD is "a mixed bag" is being unprofessional (as you pointed out - "No silver bullet!"). What I asserted originally was about what's in that bag. I asserted that not knowing is unprofessional, because then you're advocating or dismissing a practice without evidence.
TDD shines when:
You're doing things that have to work under changing (non-GUI) conditions without crashing
You're building embedded/system-level controllers which need to function under all inputs
You have a moderately data-intensive app where only moderately complex GUI actions are required to set up edge cases
You have an automation framework at your disposal to push buttons and capture window contents
You can structure your project as a backend library plus a front-end GUI that uses the library (a sketch of this follows below)
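As a rough illustration of that last point (the names here are invented for the example, not taken from any real project): keep the logic in a plain backend module that tests can drive directly, and let the GUI be a thin shell over it.

```python
# backend.py - pure logic, no GUI imports, easy to drive test-first.
def total_with_tax(subtotal, tax_rate):
    """Return an order total rounded to cents; reject nonsensical input."""
    if subtotal < 0 or tax_rate < 0:
        raise ValueError("subtotal and tax_rate must be non-negative")
    return round(subtotal * (1 + tax_rate), 2)


# test_backend.py - edge cases get exercised here, with no GUI automation needed.
import unittest

class TestTotalWithTax(unittest.TestCase):
    def test_zero_tax(self):
        self.assertEqual(total_with_tax(10.00, 0), 10.00)

    def test_rounds_to_cents(self):
        self.assertEqual(total_with_tax(19.99, 0.0825), 21.64)

    def test_rejects_negative_subtotal(self):
        with self.assertRaises(ValueError):
            total_with_tax(-1, 0.0825)

# gui.py would only wire widgets to total_with_tax(), keeping the
# hard-to-automate surface as small as possible.

if __name__ == "__main__":
    unittest.main()
```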
TDD lacks luster when:
You have lots of GUIs and no automation framework
You have a poor definition of system behavior (proof-of-concept apps sometimes have this)
You have an embedded target with limited debugger support
You have physical switches in your application
You have apps with lots of heterogeneously structured multi-threading (the more conditionally compiled/run code, the worse the heisenbugs get), especially with a strong GUI component - see the sketch after this list.
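To make the heisenbug point concrete, here's a contrived Python sketch (invented for illustration, not from any real project) of why a test over racy threaded code can pass on one run and fail on the next - which undercuts the fast red/green feedback TDD relies on:

```python
import threading
import time
import unittest

class Counter:
    """Deliberately racy: unsynchronized read-modify-write."""
    def __init__(self):
        self.value = 0

    def increment(self):
        current = self.value
        time.sleep(0.0001)   # widen the race window so the bug actually shows up
        self.value = current + 1

class TestCounter(unittest.TestCase):
    def test_concurrent_increments(self):
        counter = Counter()
        threads = [threading.Thread(target=counter.increment) for _ in range(50)]
        for t in threads:
            t.start()
        for t in threads:
            t.join()
        # Sometimes this sees 50, sometimes fewer: the red/green signal
        # now depends on thread scheduling, not on whether the code is correct.
        self.assertEqual(counter.value, 50)

if __name__ == "__main__":
    unittest.main()
```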