Unlike, it seems, everybody else, I find unit tests to be of very little use, but hey, if you want them, go ahead.
TDD, on the other hand, is both an unprofessional waste of time (write code that you know isn't going to work, just to make a test pass that you know isn't comprehensive enough? Really?) and an unprofessional way to structure your design: instead of a bunch of well-behaved classes that abstract and neatly encapsulate their behavior, you have to insert hooks so that you can inject your mocks.
At the end of the day, you end up testing the software and finding some, but far from all, of your bugs.
If you are concerned about software quality, great - write better code, do root cause analysis, but most of all: use a language with a good static type system.
I really agree with this. The main guideline I keep in mind when I write code is the old adage from Abelson & Sussman: "Programs must be written for people to read, and only incidentally for machines to execute". Readability is the main reason my code ends up very maintainable, with very few bugs. As for the bugs that still get in (they always do), they're very easy to find thanks to the good structure and readability of the code.
Unit-testable code becomes ugly and unreadable very fast. The main goal becomes making it testable instead of making it readable. It pained me every time I had to do this in the past, and I did it because some company wanted it no matter what. I don't do this in my personal projects, and they are almost bug free anyway (no unit testing does not mean no testing at all).
In TDD you spend time making your code ugly, probably introducing bugs in the process, and then you spend more time finding and fixing those bugs. And when you need to go through that code again to fix or refactor it, it's harder to understand because of all those unit testing hooks.
For very big systems, or for applications with a certain structure, it probably makes sense to have unit tests. In functional languages they work a lot better, because you usually don't have to change the code to accommodate them. But for a lot of applications they make no sense at all.
So please, stop selling them as a panacea for everything. Even better, please stop selling anything as a magic medicine for every illness.
If you see no added value in unit tests, you're presumably already doing something to ensure that your programs run as expected when you refactor, try different platforms, try different input data, etc. It would be interesting to hear more about how you test your software. Either we can learn from you, or you can learn from us (we/us being people who use formalized tests).
I'm not sure any type system can even notice that I returned True instead of False, had a 'not' where I shouldn't, forgot about operator precedence in some spot, that the string I return actually has the format required by the specification, that I validated all input correctly, etc.
Unit tests are useful for all these ugly things that come from developers not paying attention to something. I've found them extremely useful as a sanity check when I implement a particular specification (be it a design document, an API or a specific RFC).
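For example, here's the kind of minimal sanity check I mean, sketched in Haskell with HUnit (the function under test is made up for illustration, not from any real spec):

    import Test.HUnit

    -- A made-up function under test. This is exactly the kind of code
    -- where a flipped 'not' or a precedence slip still type-checks:
    -- every wrong variant is still Int -> Bool.
    isLeapYear :: Int -> Bool
    isLeapYear y = (y `mod` 4 == 0 && y `mod` 100 /= 0) || y `mod` 400 == 0

    tests :: Test
    tests = TestList
      [ TestCase (assertEqual "1900 is not a leap year" False (isLeapYear 1900))
      , TestCase (assertEqual "2000 is a leap year" True (isLeapYear 2000))
      ]

    main :: IO ()
    main = do
      _ <- runTestTT tests
      return ()

The type system happily accepts the buggy variants; only a test pins down which Bool comes out for which input.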
Then again, the testing I usually found most useful was integration testing. I would just sit with the testers, see what they expected the program to do, and basically translate the test cases they would run (or that I would do in a REPL) into a module and run it. Lots of time saved for everyone in the process.
I think there's a "baby and bathwater" issue here, on both sides.
IMO, the basic principle of TDD is solid: write the tests, then write the code. Like most ideologies, this takes an acorn of wisdom and grows it into an oak tree of folly (or some other really pompous metaphor -- that sentence rather got away from me).
The problem comes when people think the tests are something valuable in and of themselves. They aren't. They are only valuable insofar as they show a developer (a) if something is broken and (b) what it is.
I generally agree that mocks are more trouble than they are worth -- they are an entire mini-codebase that doesn't contribute business value and must be maintained. But usually you can get most of the benefit without spending the effort: e.g. instead of a Mock::DB, have a separate instance of your prod DB running a snapshot; instead of a Mock::HTTPConnection, just do a real HTTP request.
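As a rough sketch of the second idea in Haskell, using http-conduit's Network.HTTP.Simple (the local URL and the /health endpoint are assumptions for illustration):

    import Test.HUnit
    import Network.HTTP.Simple (getResponseStatusCode, httpLBS, parseRequest)

    -- Rather than maintaining a Mock::HTTPConnection, hit a real,
    -- locally running instance of the service. The URL and expected
    -- status code here are made up for the example.
    testHealthEndpoint :: Test
    testHealthEndpoint = TestCase $ do
      request  <- parseRequest "http://localhost:8080/health"
      response <- httpLBS request
      assertEqual "health endpoint returns 200" 200 (getResponseStatusCode response)

    main :: IO ()
    main = do
      _ <- runTestTT testHealthEndpoint
      return ()

You trade a little test-suite speed and isolation for not having to maintain the mock at all.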
In short, I relate to your philosophy and share it to a certain extent, but I encourage you not to let the TDD zealots drive you away from what can be a useful tool.
>IMO, the basic principle of TDD is solid: write the tests, then write the code.
This is where we disagree: the value of the tests is secondary to the code (this shouldn't be that big a point of disagreement: would you rather have no tests and the code of a working program, or great tests and no program?), but if you design the tests first, you are going to design the program so that it is easy to write tests for it.
Write the program first, then write tests for it in the places where it is reasonably worth your time to do so.
Ok. I'm not actually that wedded to the order you do the two operations in. I think there is some value in writing tests first, so that you can verify they fail (before the code is written) and then work (once it is complete). But I think it's less important to do them first than to do them.
IMO, the "killer app" for unit tests is regression testing. If I make a change and the test suite still passes, I can be reasonably sure that I didn't break anything. All the other benefits are "nice to haves" in my mind.
The common complaints he lists and acknowledges at the start all boil down to the fact that unit tests do not always make work more efficient. They're a tool in the toolbox. Sometimes they are great for your project; sometimes they are a waste of time. It's still not magic pixie dust.
In my current project we have a test suite for our public webservice APIs and the various clients that interface against those. Those tests are invaluable. For other parts of the project unit tests don't make sense, so we don't have them, since making them would be a colossal waste of time.
So yes, you still need to defend unit tests, just like you need to defend every other choice you make during development. No specific tech gets carte blanche.
From a somewhat philosophical point of view, a program is a specification of what the computer should do. Unit tests and type annotations are both a kind of checksum.
Generally I am for brevity: the program should be short. So I prefer that the 'checksum' also be short. Unit testing is a really elegant and good tool if you can test quite a big chunk of functionality with a short test. If the test is, for example, longer than the tested functionality, then its value is more controversial. OK, for extremely important code, like the control software of an airplane, put as much redundancy in the checksum as you think is needed. But otherwise, it can be questionable.
The issue is quite similar with static typing. I am a static typing fan, but dynamic language guys always complain that you have to type too much in static languages. And they are right to some extent, although better static languages have good type inference, so it is not that big of an issue.
In fact I don't really understand the dynamic language guys. They always state that they like terse code, but with modern statically typed languages like Scala you really don't have to type much more than in dynamic languages. And that difference is negligible next to the fact that those same guys are usually unit testing fans, and their unit tests sometimes make the code base significantly bigger.
TL;DR: I value terseness, and I prefer to use static typing and unit tests only where they do not increase the code size significantly.
When I'm working in a static language such as Haskell I very rarely write traditional unit tests, instead preferring to use QuickCheck (http://en.wikipedia.org/wiki/QuickCheck). I like the adversarial nature - I specify invariants and I challenge the computer to prove me wrong! This seems more likely to find bugs than me writing my own tests.
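For the curious, a property test is tiny; a minimal sketch (a toy invariant, not from any real project):

    import Test.QuickCheck

    -- Invariant: reversing a list twice gives back the original list.
    -- QuickCheck generates 100 random inputs by default and reports a
    -- shrunk counterexample if the property ever fails.
    prop_reverseTwice :: [Int] -> Bool
    prop_reverseTwice xs = reverse (reverse xs) == xs

    main :: IO ()
    main = quickCheck prop_reverseTwice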
Wanting to find bugs seems to be an important factor in writing quality tests. It's easy to write soft tests when the same person writes the code and does the test. Maybe that's why pair programming and TDD seem to go well together, as the other person is more motivated to try and break your code?
I like TDD, but it has its place. I think you'll find the rhetoric here is a reaction to the inflammatory tone in the article more than anything else.
It's perfectly possible to develop and maintain large, complex systems without 100% code coverage. Clearly some level of testing coverage between 0% and 100% is optimal. The case for it being 100% is, to my mind, extremely weak.
A bit off topic, but what is it that makes the article have an "inflammatory tone"? I would love to hear some feedback about this too, to improve my writing skills.
It effectively says that anyone who doesn't practice TDD isn't professional. Telling people they're unprofessional tends to piss them off :)
edit: In response to the child post - that was my interpretation of this: "It will make our work so much more efficient, motivating and a bit more professional."
I have to admit that I have a bad habit of writing things in a somewhat extreme way to start a conversation. However, where did I claim that someone who does not do TDD is not professional?
It seems that I definitely need to improve my English. :)
1. None.
2. Yes.
3. Confident? - Yes. Guaranteed? - No, nor would it be with 100% coverage.
4. Absolutely (Java + Eclipse + proper use of Exceptions).
Some things need/should be unit tested, but most "glue" code benefits far more from investing time in robust error handling than in testing.
The difficulty I have here is that you associate unit testing with a large time cost. With good TDD knowledge, the time it takes to write a test is only slightly longer than it would take to think through exactly what your code should be doing. In Java I find unit testing is actually a huge time saver, as I don't need to keep starting/stopping a server to find out if some code works correctly. With dynamic languages the feedback loop is much shorter, so it's not as relevant. Admittedly I've never used JavaRebel, which would render this a moot point.
I use a unit test framework to write tests. These tests take a logical component and test the interface it provides to the rest of the application. I do this for every logical component, certainly not every individual class, and I certainly don't write a bunch of fake code to pretend the rest of the app works. I don't care if my code works when provided with a bunch of mocks and stubs; I care whether or not my code works with the rest of the system.
The only way to call that unit testing is if you make "unit" mean something completely different from what most people mean by it. Yet it is automated testing that covers real functionality.
TDD does not refer only to unit testing; it also includes integration tests. Indeed, as you say, the value of unit tests is greatly diminished without integration tests. If you're working outside-in, your workflow is:
- Create integration tests/stories
- For each step in the integration test:
  - Add unit tests to make it pass, starting at the view layer and working backwards.
The level of unit testing required for each step depends on the complexity of the code. So what does unit testing get you over just performing integration tests? 2 things:
1) Documents and enforces expected behaviour for edge cases that you haven't written integration tests for - e.g. unusual error conditions (see the sketch after this list)
2) Isolates points of inflexion in your system - your stories/use cases will often rely on several components that have multiple logic paths. When you combine the branching logic from several components, the combinatorial range of behaviour can be vast. By isolating units at the points of inflexion you are able to test a far smaller range of inputs/outputs, so you can achieve effective test coverage. You're quite correct that the units you test are not necessarily aligned one-to-one with your classes. However, given that a well-designed class will have a single responsibility, this is often the case.
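To make point 1 concrete, here's a minimal sketch in Haskell with HUnit (the validation function and its error message are hypothetical):

    import Test.HUnit

    -- A hypothetical validation rule: an unusual error condition that
    -- no story/integration test exercises, documented and enforced by
    -- a unit test.
    parseQuantity :: Int -> Either String Int
    parseQuantity n
      | n < 0     = Left "quantity must be non-negative"
      | otherwise = Right n

    testNegativeQuantity :: Test
    testNegativeQuantity = TestCase $
      assertEqual "negative quantities are rejected"
                  (Left "quantity must be non-negative")
                  (parseQuantity (-1))

    main :: IO ()
    main = do
      _ <- runTestTT testNegativeQuantity
      return ()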
I take a pragmatic approach to unit tests, so I find some use for them, but I don't feel the 100% gold standard of coverage is realistic or even sensible to aspire to. But in response to Q3: never. Then again, I'd return the question: when you deploy an app WITH 100% coverage, how can you be confident that there are no regressions? I'm pretty sure the answer is "never" too, as 100% test coverage doesn't prevent 100% of bugs.
I should clarify what I mean by 100% test driven. By this I mean that no line of code was written without first having a test written for it. While you can't practically achieve 100% coverage of all code paths you can at least exercise all of the expected code paths.
Kent Beck - "Before [market fit] you need the absolute lowest possible latency from idea through implementation to validation. ... Sometimes this means no automated tests."
I think this is a great point and perhaps explains the low support for TDD on HN.
In my experience the properties of a TDD project are: consistent flow rate of business value, low regression rate, low QA cost (assuming integration tests). It's the consistent flow rate which is perhaps most interesting.
At the start of a non-TDD project it's easy to achieve a high velocity without tests. As the length of time from the start of the project increases, the velocity will inevitably decline due to the increased complexity of the project and the greater level of QA required. Over a longer period, the project without tests is going to tend towards a slower velocity until eventually it hits a wall and it's big-rewrite time.
For a TDD project the velocity curve is going to be different. You're going to see a slower start, but once a velocity has been established it tends to stay fairly flat or increase. The XP guys talk about hyper-productive teams; there's no guarantee, but you're far more likely to hit this point with a TDD project.
Increasing the number of features a product has and the number of team members will only accentuate the difference between these two velocity curves.
There's another factor here as well, the knowledge of TDD within a team. The greater the TDD experience the smaller the difference will be in the starting velocity. With enough experience the velocity hit becomes small enough that the two curves can cross over very quickly.
The choice of strategy then becomes: are the velocity curves likely to cross over before or after you achieve market fit? Not an easy question to answer and one which is easy to ignore when everyone is excited about just getting something built.
I also cannot believe how hard it is to convince people of the advantages of testing.
I often see stories that are not accepted multiple times simply because the product owner has changed the intent of the original story. Tests really help to define what it is that the product owner wants before a line of code is even written.
In the bigger picture, how much "more time" does it really take?
Dumb story, but my dad taught me how to do woodworking: making European cabinets. So many times he would say, "Measure twice, cut once." This, in some convoluted way, could apply to software engineering.
There's a reason they say that though. Unless you happen to have a board stretcher in the truck, cutting something too short ranges from "expensive" to "irrecoverably fails the entire project."
Compare that to writing code. Fixing something you did wrong is so trivial that I'll often find myself writing 3 variants of a bit of code and testing rather than pulling out a sheet of paper and doing a bit of math to make sure I code it correctly.
"Should this be cos(theta) or -cos(theta)?"... code, compile, check, whoops, delete, compile, check, done.
That's waaay better than "134 inches, I think", cut, fit, whoops, phone Jerry: Hey, could you pick me up a new piece of #4 crown moulding on the way in?
I'm intrigued to know whether your "code, compile, check" anecdote was meant to argue for or against TDD as it could go either way.
I guess the point you're responding to meant that "eyeballing" whether something actually works is the equivalent of "estimating" a length, a couple of easy tests is the equivalent of "measuring once" and "measuring twice" is taking the extra effort to build a small suite of tests with corner cases, a small sign of having learned from bitter experience.
Of course with automated tests you are literally measuring twice (indeed, measuring many times) even as the tools and materials change out from under you.
>I also cannot believe how hard it is to convince people of the advantages of testing.
It's about as hard as convincing people of the advantages of static typing. Not to pick on Ruby particularly, but those guys go crazy for writing their own tests for things a compiler could tell them "for free".
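To be fair to the compiler crowd, here's a trivial sketch of the "for free" part (the function is made up):

    -- The signature alone guarantees callers pass a number; in a
    -- dynamic language that guarantee costs you a test.
    addVat :: Double -> Double
    addVat price = price * 1.25

    main :: IO ()
    main = do
      print (addVat 19.99)
      -- print (addVat "19.99")  -- rejected at compile time, no test needed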
>Software is free, so if you make a wrong cut, it doesn't matter.
Untrue; tell that to the person who's paying the QA teams who have to validate your code. Or the patch teams that work 48 hours straight to get a hotfix out because something broke after deployment.
I wonder if the split between pro/anti unit testing comes from folks in traditional developer positions vs startups (since there is a decent mix of both here).
As a pure developer working on an established codebase, the benefits of automated testing are quite clear, especially if you have paying customers and the cost of failure is high.
But as a startup, every extra line of test code you write slows you down from launching, and the most important thing at this point is launching and figuring out whether you actually have a market for your product.
After all, what's the point of having 100% test coverage for a product that no one wants to use?
It's also entirely likely that you're going to end up throwing large sections of your codebase out as you pivot towards something else.
Here's the way I see it:
It's rather pointless to have 100% test coverage of a product that no one is using, but it's also dangerous to have 0% coverage of a product with live, paying customers.
Intelligent developers will have to find the right balance between the two (as it shifts over time).
If you're ever intending to pivot, chances are that there's going to be a refactoring involved, just as much as you're likely to throw things away. I have difficulty seeing how that's going to be quick and easy without either a rock-solid type system, good test coverage, awesome IDE support, or all three.
While it is true that a startup needs to get a product or prototype up and running as quickly as possible, once they have something working it would be devastating for the program to stop working soon afterwards. So even though the startup will not be spending time on testing until they finish a prototype, it is in their best interest to generate some automated unit tests as soon as the prototype is working, so that the current state can be captured and future regressions detected. Check this out: http://www.parasoft.com/jsp/technologies/unit_testing.jsp?it...
I'm the author of the blog post. I'm very glad that it has already gathered so many opinions and so much feedback.
Like user ollysb, I'm quite surprised to notice that there seems to be more against the post than for it. And that was one of the reasons I actually wrote it: you are smart guys, please prove me wrong about unit tests. I can only find good things in unit testing, so it seems that I have not fully thought it through.
ps. Did the post have an "inflammatory tone"? That was definitely not my intention. Sorry if it did. I have been programming for a living for only about 10 years, and I am definitely not a professional at it yet. Which parts made it sound cocky? I would love to hear feedback on that too, to improve my writing.
Then you are simply not very experienced using it. You're in the love-phase where everything about some new technology seems wonderful, and since it's solved so many previous problems for you, you are blind to the warts and problems that come with it.
We understand your enthusiasm for it, unit testing and TDD are good tools, but they're just that. Tools. That you can choose to use if they fit the project.
Your blog post (and the previous one) both said that unit tests and TDD are always awesome for all projects. This is not true, and this is what people here are reacting to.
Yes, most likely the truth is that I'm not experienced enough with it, and the types of systems I build limit my view.
I also understand that unit testing is a tool with its limitations.
However, for me, unit testing is a good tool that helps me prove that the code I wrote works. And I hope to see more unit tests in the field.
But as you said, probably the way to find the dark side of unit testing is just doing it and running into the problems myself. So I'll keep using the tool until I do.
I think I also really need to learn to write better. :) Where did I claim that TDD and unit tests are always awesome for all projects? I think I even said the opposite.
The importance of unit tests varies with the projected lifetime of the software you're writing. If you're writing libraries or system services, then unit testing is smart, as those are bound to live on for many years. Simple end-user applications might be a different case.
On the systems side (in this case, a PHP framework), I've found that once test coverage passes 70% you can make optimizations and refactor the code without having to actually run it - the tests will tell you whether the changes you made still maintain API stability.
There's a bit of a solution to this out there: if you are a developer who really cares about unit tests, make sure to ask any prospective employers how they feel about them during the interview process.