
There are many ways to improve code quality. Using an automated test suite is only one of them, and while it's widely useful, it is of very limited value in some circumstances, and I think for some developers it instils a false sense of security. Not having an automated test suite that covers a particular part of your code does not imply that the code is "untested" or of no value; it just means some other approach is needed in that case.


Not having automated tests covering a piece of code does not imply that it's untested at the time it's written, but it sure as hell implies that it won't be tested when seemingly unrelated feature X gets refactored and unknowingly breaks it.

Tests are only marginally important at the time you're writing the code they test. The real value comes months later when something else causes the test to fail, and now you (a) know the code is broken, and (b) have a clear specification of what that code was supposed to do.
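
For example (my own contrived sketch, not anything from a real codebase), a small regression test does both jobs: if a later refactor silently changes the behaviour, the failure flags the break, and the assertions record what the code was supposed to do:

    # hypothetical example: a test written alongside a discount calculation
    import unittest

    def discounted_price(price, is_member):
        # assumed business rule: members get 10% off, capped at 50
        discount = min(price * 0.10, 50) if is_member else 0
        return price - discount

    class TestDiscountedPrice(unittest.TestCase):
        def test_member_discount_is_capped(self):
            # if an "unrelated" pricing refactor drops the cap, this failure
            # both flags the bug and states the intended rule
            self.assertEqual(discounted_price(1000, is_member=True), 950)

        def test_non_members_pay_full_price(self):
            self.assertEqual(discounted_price(100, is_member=False), 100)

    if __name__ == "__main__":
        unittest.main()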


Sorry, but I simply can't agree with most of that. I do agree that automated tests are more valuable during maintenance than during initial development, though I think they help then too. It's the other details of your comment that I'm disputing below.

Firstly, even if automated testing isn't appropriate for a particular part of the code, there should still be other forms of quality checking going on that would pick up a broken feature before the code is accepted, and certainly before the product ships. If this doesn't happen, you're relying on a limited set of automated tests as a substitute for things like proper code reviews and pre-release QA, in which case IMNSHO you're already doomed to ship junk on bad days.

Secondly, if you can break one piece of code by changing a completely unrelated bit of functionality elsewhere, you have other fundamental problems: your code isn't clearly organised with an effective modular design, and your developers demonstrably don't understand how the code works or the implications of the changes they are going to make before they dive in and start editing (or even afterwards). Again, you're already doomed: no amount of unit testing is going to save you from bugs creeping in under such circumstances.

Finally, unit tests are not a clear specification of anything, ever, other than the specific behaviour each test exercises.
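
To illustrate (a contrived example of my own), a passing unit test pins down only the cases it enumerates, not the general contract:

    # this test "specifies" the sort for exactly one input
    def test_sort_handles_duplicates():
        assert sorted([3, 1, 3, 2]) == [1, 2, 3, 3]

    # it says nothing about empty lists, None, mixed types or stability;
    # the general specification still has to be written down somewhere else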

Basically, if you consider automated unit testing a substitute for any of

(a) maintaining a clean design

(b) doing an impact analysis before making changes to existing code

(c) writing and updating proper documentation, including clear specifications, or

(d) proper peer review and QA processes

then I think you're suffering from precisely the false sense of security I mentioned earlier. In many contexts, unit tests can be great for sounding alarm bells early and giving some basic confidence, but even in the most ideal circumstances they can never replace those other parts of the development process.


QA itself is a process failure. If the testers have ever repeated an action more than twice, that action should be automated, and you're back to automated testing.

The only QA team I've ever worked with that was worthwhile spent their time writing automated tests - they were programmers who specialised in test. Otherwise, you're literally saying 'it would be cheaper to pay this room full of people to do what a machine can do than to pay 1/10th their number to write the same thing as a test', which is essentially never true.
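
As a rough illustration (hypothetical staging URL, credentials and page content), this is the sort of check a tester would otherwise repeat by hand before every release:

    # hypothetical smoke test replacing a manual "log in and check the
    # dashboard loads" step
    import requests

    BASE_URL = "https://staging.example.com"  # assumed staging environment

    def test_login_and_dashboard_load():
        session = requests.Session()
        resp = session.post(BASE_URL + "/login",
                            data={"user": "qa-bot", "password": "not-a-real-secret"},
                            timeout=10)
        assert resp.status_code == 200
        resp = session.get(BASE_URL + "/dashboard", timeout=10)
        assert resp.status_code == 200
        assert "Welcome" in resp.text  # assumed page content

Write it once, run it on every build, and the manual testers are free to do the exploratory work a machine can't.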



