
I could write an entire blog post on my opinions on this topic. I continue to be extremely skeptical of TDD. There is the somewhat infamous incident where a TDD proponent tries and repeatedly fails to develop a sudoku solver [1].

This kind of situation matches my experience. My skepticism was cemented when I worked with a guy who was a zealot about TDD and the whole Clean Code cabal around Uncle Bob. He was also one of the worst programmers I have worked with.

I don't mean to say that whole mindset is necessarily bad. I just found that becoming obsessed with it isn't sufficient. I've worked with guys who have never written a single test yet ship code that does the job, meets performance specs, and runs in production environments with no issues. And I've worked with guys who get on their high horse about TDD but can't ship code on time, or their code is too slow and has constant issues in production.

No amount of rationalizing about the theoretical benefits can match my experience. I do not believe you can take a bad programmer and make them good by forcing them to adhere to TDD.

1. https://news.ycombinator.com/item?id=3033446



> tries and repeatedly fails to develop a sudoku solver

But that's because he deliberately does it in a stupid way to make TDD look bad, just like the linked article does with its "quicksort test". But that's beside the point: of course a stupid person would write a stupid test, but that same stupid person would write a stupid implementation too... and at least there would be a test for it.


Huh? Ron Jeffries is a champion of TDD (see for instance https://ronjeffries.com/articles/019-01ff/tdd-one-word/). He most certainly wasn't deliberately implementing Sudoku in a stupid way to make TDD look bad!


The top-most comment on the link you provided pretty much explains the situation. TDD is a software development method, not a generic problem-solving method. If one doesn't know how a Sudoku solver works, applying TDD or any other software development method won't help.


One of the theses of TDD is that the tests guide the design and implementation of an underspecified (or even unknown) problem, given the requirements regarding the outcomes and a complete enough set of test cases. "Theoretically" one should be able to develop a correct solver without knowing how it works, by iterative improvement driven by the tests. It might not be of good quality, but it should work.
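To make that claim concrete, a first iteration might look roughly like the sketch below. This is entirely hypothetical: the solve() name and the flat-string grid representation are invented for illustration, not taken from the thread or from Jeffries' articles. The red step is the failing test, the green step is the smallest implementation that passes it, and the open question is whether further iterations ever lead you toward a real solving algorithm.

    import unittest

    # Hypothetical first "red/green" cycle for a sudoku solver. The grid is a
    # flat 81-character string of digits; solve() is an invented name.
    class TestSolver(unittest.TestCase):
        def test_already_solved_grid_is_returned_unchanged(self):
            grid = ("534678912" "672195348" "198342567"
                    "859761423" "426853791" "713924856"
                    "961537284" "287419635" "345286179")
            self.assertEqual(solve(grid), grid)

    # Green step: the smallest implementation that makes the test above pass.
    def solve(grid: str) -> str:
        return grid

    if __name__ == "__main__":
        unittest.main()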

Note: I am quite skeptical of TDD in general.


I don't really use TDD, but I've never heard that TDD would help guide the implementation. I always understood it was about designing a clean interface to the code under test. That follows from the fact that you design the interface based on actual use cases first, since the test needs to call into the code under test. It helps avoid theoretical what-ifs and keeps the focus on concrete, simple design.
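A small, hypothetical sketch of that idea (parse_duration and its input format are invented here, not taken from any comment): writing the assertion first forces a decision about the call signature and the units of the return value before any implementation exists.

    import re
    import unittest

    # The test is written first; its call site fixes the interface:
    # parse_duration takes a string like "1h30m" and returns seconds.
    class TestParseDuration(unittest.TestCase):
        def test_hours_and_minutes_in_seconds(self):
            self.assertEqual(parse_duration("1h30m"), 5400)

    # Implementation written afterwards, to satisfy the interface the test fixed.
    def parse_duration(spec: str) -> int:
        hours = re.search(r"(\d+)h", spec)
        minutes = re.search(r"(\d+)m", spec)
        return ((int(hours.group(1)) * 3600 if hours else 0)
                + (int(minutes.group(1)) * 60 if minutes else 0))

    if __name__ == "__main__":
        unittest.main()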

Personally I think that one can learn this design methodology without TDD. I find learning functional programming and, say, Haskell/OCaml/SML/etc. far more beneficial to good design than TDD.


It’s both.

In theory TDD drives the implementation by ensuring the units under test do what they're intended to do, and the interface by ensuring each and every unit is "testable".

TDD doesn’t really care about “clean” interfaces, only that units of work (functions, methods) are “testable”.

I'd argue this actually creates friction for designing clean interfaces, because in order to satisfy the "testability" requirement one is often forced into design choices that are poor in terms of readability, maintainability, and efficiency.
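A hypothetical example of that friction (the function and the injected clock are invented for this sketch): the extra `now` parameter exists purely so a unit test can control time, which is exactly the kind of indirection a production caller never needs.

    from datetime import datetime, timezone
    from typing import Callable

    # The `now` parameter is a seam added only for testability; production
    # callers never pass it.
    def is_expired(deadline: datetime,
                   now: Callable[[], datetime] = lambda: datetime.now(timezone.utc)) -> bool:
        return now() > deadline

    # A test exploits the seam by injecting a fixed clock.
    fixed_clock = lambda: datetime(2024, 1, 2, tzinfo=timezone.utc)
    assert is_expired(datetime(2024, 1, 1, tzinfo=timezone.utc), now=fixed_clock)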


> I've worked with guys who have never written a single test yet ship code that does the job, meets performance specs, and runs in production environments with no issues.

I'd like to unpack this a bit. I'm curious what other tools people use besides programmatic testing; programmatic testing seems to be the most efficient, especially for a programmer. I'm also maybe a bit stuck on the binary nature of your statement. You know developers who've never let a bug or performance issue enter production (with or without testing)?


When I started out in the gaming industry in the early 2000s, there were close to zero code tests written by developers at the studios I worked for. However, there were large departments of QA, probably at a ratio of 3 testers per developer. There was also an experimental Test Engineer group at one of the companies that did automated testing, but it was closer to automating QA (e.g. test rigs to simulate user input for fuzzing).

The most careful programmers I worked with were obsessive about running their code step by step. One guy I recall put a breakpoint after every single curly brace (C++ code) and ensured he exercised every single path in his debugger, line by line, for a range of expected inputs. At each step he examined the relevant contents of memory and often the generated assembly. It is a slow and methodical approach that I could never muster the patience for. When I asked him about automating this (unit testing, I suppose), he told me that understanding the code by manually inspecting it was the benefit to him. Rather than assuming what the code would (or should) do, he manually verified all of his assumptions.

One apocryphal story was from the PS1 days, before technical documentation for the device was available. Legend had it that an intrepid young man brought in an oscilloscope to debug and fix an issue.

I did not say that I know any developers who've never let a bug or performance issue enter production. I'm contrasting two extremes among the developers I have worked with for effect. Well-written programs and well unit-tested programs are orthogonal concepts. You can have one, the other, both, or neither. Some people, often in my experience TDD zealots, confuse well unit-tested programs with well-written programs. If I could have both, I would, but if I could only have one then I'll take the well-written one.

Also, since it probably isn't clear, I am not against unit testing. I am a huge proponent of it, advocating for its introduction alongside code coverage metrics and appropriate PR checks to ensure compliance. I also strongly push for integration testing and load testing when appropriate. But I do not recommend strict TDD, the kind where you do not write a line of code until you first write a failing test, nor do I recommend using that process to drive technical design decisions.


> You know developers who've never let a bug or performance issue enter production (with or without testing)?

One of the first jobs I ever had was working in the engineering department of a mobile radio company. They made the kind of equipment you’d install in delivery trucks and taxis, so fleet drivers could stay in touch with their base in the days before modern mobile phone technology existed.

Before being deployed on the production network, every new software release for each level in the hierarchy of Big Equipment was tested in a lab environment with its own very expensive installation of Big Equipment, exactly like the stations deployed across the country. Members of the engineering team would make literally every type of call possible, using literally every combination of sending and receiving radio authorised for use on the network, and if necessary manually examine all kinds of diagnostics and logs at each stage in the hardware chain to verify that the call was proceeding as expected.

It took months to approve a single software release. If any critical faults were found during testing, game over, and round we go again after those faults were fixed.

Failures in that software were, as you can imagine, rather rare. Nothing endears you to a whole engineering team like telling them they need to repeat the last three weeks of tedious manual testing because you screwed up and let a bug through. Nothing endears you to customers like deploying a software update to their local base station that renders every radio within an N mile radius useless. And nothing endears you to an operations team like paging many of them at 2am to come into the office, collect the new software, and drive halfway across the country in a 1990s-era 4x4 in the middle of the night to install that software by hand on every base station in a county.

Automated software testing of the kind we often use today was unheard of in those days, but even if it had been widely used, it still wouldn’t have been an acceptable substitute for the comprehensive manual testing prior to going into production. As for how the developers managed to have so few bugs that even reached the comprehensive testing phase, the answer I was given at the time was very simple: the code was extremely systematic in design, extremely heavily instrumented, and subject to frequent peer reviews and walkthroughs/simulations throughout development so that any deviations were caught quickly. Development was of course much slower than it would be with today’s methods, but it was so much more reliable in my experience that the two alternatives are barely on the same scale.


I think this whole failed-puzzle episode indicates that there are some problems that cannot be solved incrementally.

Peter Norvig's solution has one central precept that is not something you would arrive at by an incremental approach.

But I wonder if this incrementalism is essential for TDD.
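For context, the central precept referred to above (as I understand Norvig's essay) is to represent the puzzle as sets of candidate digits per cell and to combine constraint elimination with depth-first search. The sketch below is a simplified, illustrative reimplementation of that idea under those assumptions, not Norvig's code.

    # Simplified sketch: candidate elimination plus depth-first search.
    # grid maps (row, col) -> digit, with 0 for an empty cell.

    def peers(r, c):
        """All cells sharing a row, column, or 3x3 box with (r, c)."""
        box = [(r // 3 * 3 + i, c // 3 * 3 + j) for i in range(3) for j in range(3)]
        same = [(r, j) for j in range(9)] + [(i, c) for i in range(9)] + box
        return {p for p in same if p != (r, c)}

    def solve(grid):
        """Return a completed grid dict, or None if no solution exists."""
        empties = [cell for cell, d in grid.items() if d == 0]
        if not empties:
            return grid
        # Eliminate candidates: an empty cell may only take digits not already
        # used by any of its peers.
        candidates = {
            cell: {1, 2, 3, 4, 5, 6, 7, 8, 9} - {grid[p] for p in peers(*cell)}
            for cell in empties
        }
        # Search: branch on the most constrained cell; an empty candidate set
        # means a contradiction, so the loop body never runs and we backtrack.
        cell = min(candidates, key=lambda c: len(candidates[c]))
        for digit in sorted(candidates[cell]):
            attempt = dict(grid)
            attempt[cell] = digit
            result = solve(attempt)
            if result is not None:
                return result
        return None

Whether a sequence of small test-driven steps would ever lead you to that representation is, I think, exactly the question being raised here.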



