I think this comment, at the end of the readme, makes too strong a claim: "Write a failing test and see it fail so we know we have written a relevant test for our requirements..." While a test that passes at this point is clearly not a valid test, relevance cannot be assumed from its failure. Tests are not immune from simple coding errors like using the wrong comparison operator, and beyond that, it is possible that the programmer has misunderstood precisely what it means for the intended purpose of the code to be satisfied. I find that a few percent of my tests and assertions are actually incorrect as first written, for both of the above reasons.
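A minimal sketch of that first failure mode (the function and test names are hypothetical, not from the tutorial): a test whose author typed the wrong comparison operator does fail at first, so it looks "relevant" under the red-green rule, yet code written to make it pass gets the boundary wrong.

```python
# Hypothetical requirement: "adults are 18 or older".
# The test author typed > where >= was intended, so the code below
# was written to satisfy an incorrect assertion.
def is_adult(age):
    return age > 18  # bug: 18 should count as adult

def test_is_adult():
    assert is_adult(19)        # passes
    assert not is_adult(18)    # wrong: the requirement says 18 IS adult

test_is_adult()  # the suite is green, yet the boundary is off by one
```

The test did fail before `is_adult` was written, which is exactly why its initial failure can't certify that it tests the right thing.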
If you are thinking that I am misunderstanding the purpose of unit tests in TDD, that it is only to check that you have written the code that you intended to write, then that would raise the question of how you address the issue of fitness for purpose - these are two distinct issues, even though the goal is for the former to match the latter.
Also, while a test may provide an easy-to-understand description of the failure, there is no guarantee that all possible failures are so described.
Teaching testing as a fundamental part of programming is important, and I like this approach, but I think this particular claim goes a bit too far.
You chose to take their statement to a far enough extreme so that you could point out that it's not absolute, like pointing out that tests aren't infallible just because someone said it's good to write them. You could always take the tautological side that they aren't good if they are bad tests, but I'm not sure that needs pointing out.
But all you've done is attacked a straw man that you built yourself and, worse, punished someone for not enumerating all the possible exceptions to a general statement they made.
Do you think it would add much value if the top HN comment was always "well, there are exceptions" lest someone forget?
I was in the debate club in high school and precisely the thing you do in competition when you have no response is to take one of their points and attack it as if they meant it absolutely. It's sheepish and it doesn't win, but it fills the silence. And it's even more insufferable outside of the debate hall.
My intent was not to attack the tutorial, and much less the author, but to take a different position on one point raised by it. As I said in the last paragraph of my post, I like this tutorial and the way it teaches testing as an essential part of programming. Let me add that I think increased and automated testing, performed concurrently with coding, is the best single thing every organization I have worked for could do to improve the reliability of its software.
Nor do I think you are justified in claiming a straw man, as I am not simply saying bad tests are not good, just as the author is not simply saying that, in general, tests are good. The point is that when you are mistaken in exactly what needs to be done by your code, or whether your intended solution fully achieves that, then your passing tests may not show that your code is satisfying its intended purpose, because your tests are written under the same misapprehensions as your code. It may not be possible to show plausible short examples, but the bigger your system gets, the more internal interfaces it has, and the more cross-cutting consistency issues it accrues, the more likely this is to be a problem. This, IMHO, has been one of the hard problems of software development, and one of the reasons why there has not yet been found a silver bullet, TDD notwithstanding.
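To sketch what I mean by a shared misapprehension (a hypothetical example, not from the tutorial): suppose the spec calls for rounding half to even (as Python's built-in `round()` does), but the developer believes halves always round up, and writes both the code and its test under that same misunderstanding.

```python
import math

# Hypothetical: the spec requires round-half-to-even ("banker's
# rounding"), but the developer assumes halves always round up and
# encodes that assumption in BOTH the implementation and the test.
def round_half_up(x):
    return math.floor(x + 0.5)

def test_round():
    # The test repeats the same misreading of the spec as the code.
    assert round_half_up(2.5) == 3

test_round()  # green, yet the spec wanted round(2.5) == 2
```

The red-green cycle works perfectly here, and the suite still certifies the wrong behavior, because the test is not an independent check on the developer's understanding.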
Nor do I think it is a pedantic point, if you take into account current opinions about how to develop software. The views that I disagree with are not uncommonly seen in articles, books, tutorials and on Stack Overflow, in support of claims for the efficacy of, and necessity for, a strict interpretation of TDD. Again IMHO, this significantly understates the difficulties in writing correct code and avoids considering the problems that contribute most to this difficulty, and a dissenting view has to be raised from time to time.
WRT high-school debate, I think it has been ruined by rules that score points regardless of relevance, coherence or even accuracy, so I hope I am not doing the same thing.
So... you can write a bad test? Yup. That happens. And you can misunderstand what you're trying to do? Well, there's no magic test that will make you understand what you're meant to be doing.