Hacker News

The main reason TDD hasn't caught on is there's no evidence it makes a big difference in the grand scheme of things. You can't operationalize it at scale either. There is no metric or objective test that you can run code through that will give you a number in [0, 1] that tells you the TDDness of the code. So if you decide to use TDD in your business, you can't tell the degree of compliance with the initiative or correlation with any business metrics you care about. The customers can't tell if the product was developed with TDD.

Short of looking over every developer's shoulder, how do you actually know the extent to which TDD is being practiced as prescribed (red, green, refactor)? Code review? How do you validate your code reviewer's ability to identify TDD code? What if someone submits working, tested code, but you smell it's not TDD? Tell them to pretend they didn't write it and start over with the correct process? At what point in the development process do you start to practice it? Do you make the R&D people do it? Do you make the prototypers do it? What if the prototype got shipped into production?

Because of all this, even if the programmers really do write good TDD code, the business people still can't trust you; they still have to QA test all your stuff. Because they can't measure TDD, they have no idea when you are doing it. Maybe you did TDD for the last release but are starting to slip? Who knows. Just QA the product anyway.

I like his characterization of TDD as a technique. That's exactly what it is, a tool you use when the situation calls for it. It's a fantastic technique when you need it.



You make a good point about not being able to enforce that TDD is actually followed. The best we could do is check that unit tests exist at all.

In theory, if TDD really reduces the number of bugs and speeds up development, you would see it reflected in those higher level metrics that impact the customer.


> In theory, if TDD really reduces the number of bugs and speeds up development, you would see it reflected in those higher level metrics that impact the customer.

The issue is that many TDD diehards believe that bugs and delays come from coders who did not properly specify their code with tests before they wrote it.

In reality, bugs and delays are a product of an organization. Bad coders can write bad tests that pass bad code just fine. Overly short deadlines will produce poor tests. Furthermore, many coders report that they struggle with the task-switching nature of TDD. To write a complex function, I will probably break it out into a bunch of smaller pure functions. In TDD that may require you to either: 1. Write a larger function that passes the test, then break it down. 2. Write a test that validates that the larger function calls other functions, then write tests that define each smaller function.

The problem with these flows is that 1 causes rework, and 2 ends up being like reading a book out of order: you may get to function 3 and realize that function 2 needed additional data, and now you have to rewrite your test for 2. Once again, rework. I'm sure there are some gains in some spaces, but overall it seems the rework burns those gains off.


You shouldn't test those smaller functions. They're internal details. They should be private.


You also shouldn't test business logic. Your test code is more likely to be a liability than an asset when it isn't testing your codebase's core infrastructure.


I totally agree with this. In practice, I see much more value in tests that fully exercise your dependencies. The hard part is tying all the shit together and not getting weird stuff on the boundaries between systems. We have the tools to make such testing reproducible, but they're underutilized.

I want my tests to give me confidence. Unit tests don’t do nearly as good of a job as something that fully utilizes infra.


Exactly. If it made a big difference to profitability, it would be evident in the marketplace: TDD shops would outcompete the ones that don't use it. This doesn't seem to happen. Which means that if TDD is a benefit, it is such a small benefit that other factors in the business eclipse its impact.


One can enforce the use of TDD through pair programming with rotation, as Pivotal does.

I don't know that Pivotal (in particular) does pair programming so that TDD is followed; I do know that they (did) follow TDD and do everything via pair programming. I'm agnostic as to whether it's a good idea generally. It's not how I want to live, but I've had a few associates who really liked it.


Wow that sounds absolutely awful. A lot of the work I do is thinking long and hard about what I want my API to look like. It’s an iterative process and I want to be able to throw shit out a lot.


Isn't that what test coverage is about?


Did you mean test coverage? Test coverage tells you the code was tested, but it doesn’t tell you if the programmer used TDD to write the tests.
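As a toy illustration of that point (the `add` example is made up; stdlib `trace` stands in for a real coverage tool): coverage only records which lines executed, so code written first and tested afterwards looks identical to code produced test-first.

```python
import trace

def add(a, b):
    # Implementation written first: the opposite of TDD.
    return a + b

def test_add():
    # Test written after the fact; a coverage report can't tell.
    assert add(2, 3) == 5

# Count which lines run. The resulting report would show add()
# fully covered, with no record of whether the test or the code
# came first.
tracer = trace.Trace(count=True, trace=False)
tracer.runfunc(test_add)
```

Coverage is a lower bound on testing, not evidence of the process that produced the tests.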



