
> This ultimately means, what most programmers intuitively know, that it's impossible to write adequate test coverage up front

Nobody out there is writing all their tests up front.

TDD is an iterative process, RED GREEN REFACTOR (a short sketch follows the list below):

- You write one test.

- Write JUST enough code to make it pass.

- Refactor while maintaining green.

- Write a new test.

- Repeat.
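
Roughly, one pass through the loop looks like this (a minimal Python sketch with a made-up slugify example; the GREEN and REFACTOR versions are successive revisions of the same function):

    # RED: write one failing test first.
    def test_slugify_replaces_spaces_with_dashes():
        assert slugify("hello world") == "hello-world"

    # GREEN: write JUST enough code to make it pass.
    def slugify(title):
        return title.replace(" ", "-")

    # REFACTOR: tidy up while the test stays green, e.g. pull out the
    # separator, without changing any behaviour the test observes.
    SEPARATOR = "-"

    def slugify(title):
        return title.replace(" ", SEPARATOR)

    # Then write the next failing test and go around again.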

I don't want this to come off the wrong way, but what you're describing shows you are severely misinformed about what TDD actually is, or you're just making assumptions about something based on its name and nothing else.



Writing N tests, or one test N times, depending on how many times I have to rewrite the "unit" for some soft idea of completeness. After the red/green of the first case, it necessarily has to expand to N cases as the unit is rewritten to handle the additional cases imagined (boundary conditions, incorrect inputs, exceptions, etc.). Then I notice optimizations I could have made in the method, so I rewrite it again and leverage the existing red/green.
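
Concretely, the growth from the single red/green case to the N imagined cases tends to look something like this (illustrative, pytest-style, with a hypothetical parse_port function):

    import pytest

    # The single red/green case the loop started with:
    def test_parse_port_accepts_a_valid_number():
        assert parse_port("8080") == 8080

    # ...and the N cases the unit grows to cover (boundary values,
    # incorrect inputs, exceptions) as it gets rewritten:
    @pytest.mark.parametrize("raw", ["0", "65536", "-1", "abc", ""])
    def test_parse_port_rejects_out_of_range_or_junk(raw):
        with pytest.raises(ValueError):
            parse_port(raw)

    def parse_port(raw):
        value = int(raw)  # raises ValueError for non-numeric input
        if not 1 <= value <= 65535:
            raise ValueError(f"port out of range: {value}")
        return value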

Everyone understands the idea, it's just a massive time sink for no more benefit than a test-after methodology provides.


See my other comment below. I don't recommend doing it all the time, specifically because with experience you can often skip a lot of the RGR loop.

> Everyone understands the idea, it's just a massive time sink for no more benefit than a test-after methodology provides.

This is not something I agree with. In my experience, when TDD is used you come up with solutions to problems that are better than what you'd come up with otherwise and it generally takes much less time overall.

Writing tests after ensures your code is testable. Writing your tests first ensures you only have to write your code once to get it under test.

Again, you don't always need TDD and applying it when you don't need it will likely be a net time sink with little benefit.


> - You write one test.

> - Write JUST enough code to make it pass.

Those two steps aren't really trivial. Even just writing the single test might require making a lot of design decisions that you can't really make up-front without the code.


This acts as a forcing function for the software design. That TDD requires you to think about properly separating concerns via decomposition is a feature, not a bug. In my experience the architectural consequences are of greater value than the test coverage.
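
A small, hand-wavy illustration of that forcing function (the names are made up): wanting the first test to control time pushes the clock out of the unit and into a parameter, and that design survives even if the test is later thrown away.

    from datetime import datetime

    # Hard to test-drive: calling datetime.utcnow() directly inside
    # is_expired buries the clock in the unit. Writing the test first
    # forces the clock to become an explicit collaborator instead.
    def is_expired(expires_at, now):
        return expires_at < now

    def test_token_that_expired_yesterday_is_expired():
        assert is_expired(datetime(2024, 1, 1), now=datetime(2024, 1, 2))

    def test_token_expiring_tomorrow_is_not_expired():
        assert not is_expired(datetime(2024, 1, 2), now=datetime(2024, 1, 1))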

Sadly TDD is right up there with REST in being almost universally misunderstood.


> Sadly TDD is right up there with REST in being almost universally misunderstood.

That's a flaw in TDD and REST, not in the universe.


The first test could be as simple as a method signature check. Yes, you still have to make a design decision here, but you have to make it either way.
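
Something like this, as a deliberately tiny first step (the normalize function is purely illustrative):

    # The only design decision captured so far is the signature:
    # normalize takes a list of samples and returns something.
    def test_normalize_signature():
        assert normalize([1.0]) is not None

    def normalize(samples):
        return samples  # just enough to go green; real behaviour comes later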


Then you need to keep the test and signature in lock step. Your method signature is likely to change as the code evolves. I'm not arguing against tests but requiring them too early generates a lot of extra work.


Interesting. The method signature is usually the last thing I create.


The first test is never the problem. The problem as OP pointed out is after iterating a few times you realize you went down the wrong track or the requirements have changed / been clarified. Now a lot of the tests you iterated through aren't relevant anymore.


Is it possible that it was those tests and code in the TDD cycle that helped you realise you’d gone down the wrong path?

And if not, perhaps there was a preconceived idea of the code block and what it was going to do, rather than specifying the behaviour wanted via the RGR cycle. With a preconceived idea, with or without the tests, if that idea is wrong, you'll hit a dead end and have to backtrack. Fortunately, even though I do sometimes find myself in this situation, quite often those tests can be repurposed fairly quickly rather than being chucked away; after all, the tests are still software, not hardware.


In my admittedly not-vast experience, when a pattern goes bad because the implementer doesn't understand it, that is actually the implementer's fault only a minority of the time and the pattern's fault the majority of the time. A pattern making sense to an implementer requires work from both sides, and which side is slacking can vary. Sometimes the people who get it and like it tend to purposefully overlook this pragmatic issue because "you're doing it wrong" seems like a silver bullet against critiques.


Reiterating the same argument in screaming case doesn't bolster it. It feels like the internet equivalent of a real-life debate where a debater thinks saying the same thing LOUDER makes a better argument.

> - You write one test

Easier said than done. Say your task is to create a low-level audio mixer, which is something you've never done before. Where do you even begin? That's the hard part.

Some other commenters here have pointed out that exploratory code is different from TDD code, which is a much better argument than what you made here, imo.

> I don't want this to come off the wrong way, but what you're describing shows you are severely misinformed about what TDD actually is, or you're just making assumptions about something based on its name and nothing else.

Instead of questioning the OP's qualifications, perhaps you should hold a slightly less dogmatic opinion. Perhaps OP is familiar with this style of development, and they've run into this problem firsthand when they've tried to write tests for an unknown problem domain.


> Some other commenters here have pointed out that exploratory code is different from TDD code, which is a much better argument than what you made here, imo.

I find that iterating on tests in exploratory code makes for an excellent driver to exercise the exploration. I don’t see the conflict between the two, except I am not writing test cases to show correctness, I am writing them to learn. To play with the inputs and outputs quickly.
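
For the audio mixer example above, such an exploratory test is really just a quick harness for poking at inputs and outputs; a rough sketch (mix_samples and its clamping behaviour are guesses, not a real API):

    def mix_samples(a, b):
        # First guess at mixing: sum the samples and clamp to [-1.0, 1.0].
        return [max(-1.0, min(1.0, x + y)) for x, y in zip(a, b)]

    # Not a correctness proof, just a fast way to see what mixing does
    # to quiet and clipping inputs while the design is still in flux.
    def test_two_quiet_signals_sum():
        assert mix_samples([0.25, -0.25], [0.25, 0.25]) == [0.5, 0.0]

    def test_two_loud_signals_clip_instead_of_overflowing():
        assert mix_samples([0.9], [0.9]) == [1.0]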


I don't think GP was questioning their qualifications. It's exceedingly clear from OP's remarks that they don't know what TDD is and haven't even read the article, because it covers all of this. In detail.


In my experience the write-a-new-test bit is where it all falls down. It's too easy to skimp on that when there are deadlines to hit or you are short-staffed.

I've seen loads of examples where the tests haven't been updated in years to take account of new functionality. When that happens you aren't really doing TDD anymore.


That's an issue of bad engineering culture, not TDD.


That also means they weren’t being run. So you aren’t even doing tests, let alone TDD.


How do you write that one test without the iterative design process? That's the part that's always missing from the TDD guides.


TDD is not a testing process. It is a design process. The tests are a secondary and beneficial artifact of the well designed software that comes from writing a test first.


> TDD is not a testing process. It is a design process.

The article actually discusses whether this is accurate or not. TDD started out as a testing process but got adopted for its design consequences, which is why there is a lot of confusion.

Naming it test-driven design would have gone a long way to help things and also resulted in less cargo culting ("you have to TDD all day or you're not doing TDD").



