If you mean that unit tests have accumulated a lot of dogma over the past few years, and you are saying they are "overrated" because you still need to think about how, what, and why you are testing, I agree.
If you are using your post as an excuse for not using automated testing at all, I completely disagree. That's the bad kind of developer laziness.
On the other hand, I do have to concede that when competing on the open market against people who don't use unit testing, I come off looking like a wizard in terms of what I can accomplish in a reasonable period of time and the sorts of things I can do (successful major changes to large existing code bases you wouldn't even dream of starting), so maybe I shouldn't try so hard to encourage others to use them sensibly. So, I mean, yeah, totally overrated. Have I also mentioned how overrated syntax highlighting is? You should totally just shut it off. Also, fixing compiler warnings is for wusses, and what moron keeps putting -Werror in the compiler flags?
How much of that complexity is self-inflicted? Most of the unit testing advocates I know are also the worst architecture astronauts.
Every line in a codebase has a cost, including tests. I'd rather deal with a code base that's as trim as possible.
I've done unit tests before, but I don't find that they help that much, because they don't solve the most common source of actual production issues: things you didn't think of.
I find they do help there. Having unit tests makes me trust my code more. Confronted with an "it does not behave as I would expect" issue, that trust helps me focus my attention away from the implementation of those functions.
The problem with that is that, to get that trust, I need to know that the unit tests exist and, preferably, to have spent time writing or reading them. The question then is whether that time would not be better spent reading the existing code. I think that, often, the answer is "no", but I cannot really argue it.
Perhaps it is because writing unit tests puts you explicitly in "break this code (that may not have been written yet)" mode. Writing a unit test that calls a function with some invalid arguments and verifies that it throws is often simpler than reading the code to verify that. Also, unit tests may help in the presence of bug fixes and/or changing requirements. Bug report/requirements change => code change => unit tests break => free reminder: "oops, if we change that, feature X will break".
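To make the "call it with invalid arguments and verify it throws" point concrete, here is a minimal pytest-style sketch; parse_port and its error behavior are hypothetical stand-ins, not anything from this thread:

    import pytest

    def parse_port(value):
        # Hypothetical function under test: accepts 1-65535, rejects everything else.
        port = int(value)
        if not 1 <= port <= 65535:
            raise ValueError("port out of range: %d" % port)
        return port

    def test_parse_port_accepts_valid_values():
        assert parse_port("8080") == 8080

    @pytest.mark.parametrize("bad", ["0", "70000", "-1", "not-a-number"])
    def test_parse_port_rejects_invalid_values(bad):
        # States the "throws on bad input" contract directly, instead of
        # re-reading the implementation to convince yourself it holds.
        with pytest.raises(ValueError):
            parse_port(bad)

The parametrized test pins the contract down once; after that, the "free reminder" on later changes comes for free.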
How do you then know that everything works fine when you do large scale refactoring? Test everything manually? (genuine question, not trying to be snarky).
He doesn't. And I'm not being snarky either. People will say they do, but they don't have any assurance of it. And furthermore, over time they'll learn to stop making these sorts of changes because they don't work, become very cynical about what can be done, and internalize the limitations of not using testing as the limitations of programming itself.
And then these people will be very surprised when I pull off a fairly large-scale invasive refactoring successfully, and deliver product no engineer thought possible.
I'm not hypothesizing; this has been my career path over the past five years, and I have names and faces for the cynical people I'm referring to. You cannot do the things I do without testing support. I know you can't, because multiple people with more raw intelligence than I have tried and failed.
It is equally true you can't be blind about dogma, 100% coverage being a particularly common bugaboo, but I completely reject the idea that the correct amount of automated testing is zero for any non-trivial project.
I get the impression that the code base on which you pulled off the "large-scale invasive refactoring" was not initially under test, else why would the cynical engineers think it could not be done. So did you have to bring the legacy code under test first?
I'm curious as to what exactly you mean. Can you give some examples? If you're frequently making large-scale changes, I'd spend more time worrying about why you're having such a hard time nailing the requirements down.
If you've only worked on projects with nailed-down requirements, you're probably not working on the sorts of projects most HN people face. The requirements change because the world changes, or our understanding of it does. That's the nature of a startup. Stable codebases serving stable needs don't need as much refactoring, that's true, and in those cases unit tests might be a waste of time. But for those of us (the majority, I'd wager, at least around here) who work on fast-moving, highly speculative projects, they are an absolute godsend.
A framework previously designed to work on a single device was ripped apart and several key elements were made to run over a network remotely instead. (That may sound trivial in a sentence, but if anyone ever asks you to do this, you should be very concerned.) The framework was never designed to do this (in fact I'm being generous in dignifying it with the term "framework"), and tight coupling and global variables were used throughout. This was not a multi-10-million-line behemoth, but it was the result of at least a good man-century of work.
As mentioned in my other post, first I had to bring it under test as-is, then de-globalize a lot of things, then run the various bits across the network, and also test the networked application itself. Also, by the way, releases were still being made, and many (though not all) of the intermediate stages needed to still be functional as single devices; we also wanted the system to be as backward-compatible as possible across versions now spanning over a year of releases. (You do not want to be manually testing that your network server is still compatible with ~15 previous versions of the client.) And there are still many other cases I'm not even going into here where testing was critical.
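As a hedged illustration of the "don't manually re-test ~15 old clients" point: one common shape for this is a golden-sample compatibility test parameterized over released versions. Everything here (the message format, handle_request, the version list) is invented for the sketch, not taken from the project described above:

    import json
    import pytest

    # Hypothetical golden samples: one request captured from each released
    # client version.  In a real suite these would live in fixture files.
    GOLDEN_REQUESTS = {
        "1.0": '{"cmd": "status"}',
        "1.1": '{"cmd": "status", "verbose": false}',
        "2.0": '{"cmd": "status", "verbose": false, "timeout_ms": 500}',
    }

    def handle_request(raw):
        # Hypothetical current server-side parser: newer fields are optional,
        # so requests from older clients keep working.
        msg = json.loads(raw)
        return {
            "cmd": msg["cmd"],
            "verbose": msg.get("verbose", False),
            "timeout_ms": msg.get("timeout_ms", 1000),
        }

    @pytest.mark.parametrize("version,raw", sorted(GOLDEN_REQUESTS.items()))
    def test_current_server_accepts_every_released_client(version, raw):
        # One assertion per released client version; the manual equivalent
        # would be re-testing every old build by hand on every change.
        result = handle_request(raw)
        assert result["cmd"] == "status"

Adding a version then means adding one captured sample, not another round of manual cross-version testing.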
The task I'm currently working on is taking a configuration API that has ~20,000 existing references to it and is currently effectively in "immediate mode" (changes occur instantly), and turning it into something that can be managed transactionally (along with a set of other features) without having to individually audit each of those 20,000 references. Again, I had to start by putting the original code under a microscope, testing it (including bug-for-bug compatibility), then incrementally working out the features I needed and testing them as I go. The new code needs to be as behavior-similar as possible, because history has shown that small deviations cause a fine spray of subtle bugs that are really difficult to catch in QA.
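For readers who haven't run into the distinction, here is a deliberately tiny sketch of "immediate mode" vs. transactional configuration, plus the kind of behavior-level test involved. ConfigStore and Transaction are hypothetical names standing in for whatever the real API looks like:

    class ConfigStore:
        """Hypothetical immediate-mode API: every set() takes effect at once."""

        def __init__(self):
            self._values = {}

        def set(self, key, value):
            self._values[key] = value  # visible to all readers immediately

        def get(self, key):
            return self._values[key]

    class Transaction:
        """Hypothetical transactional wrapper: changes are staged, then applied
        atomically on commit(), so existing callers of ConfigStore need not change."""

        def __init__(self, store):
            self._store = store
            self._staged = {}

        def set(self, key, value):
            self._staged[key] = value  # nothing visible yet

        def commit(self):
            for key, value in self._staged.items():
                self._store.set(key, value)
            self._staged.clear()

    # Behavior-level test: a staged change must be invisible until commit.
    def test_transaction_is_not_visible_until_commit():
        store = ConfigStore()
        store.set("mtu", 1500)
        txn = Transaction(store)
        txn.set("mtu", 9000)
        assert store.get("mtu") == 1500   # old immediate-mode behavior preserved
        txn.commit()
        assert store.get("mtu") == 9000

Because the test only pins what callers of the API can observe, it survives a change of implementation strategy, which matters when the first approach turns out wrong.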
I could not do this without automated testing. (Perhaps somebody else could who is way smarter, but I have my doubts.) The tests have already caught so many things. Also, my first approach turned out wrong so I had to take another, but was fortunately able to carry the tests over, because the tests were testing behavior and not implementation. (Also it was the act of writing those tests that revealed the performance issues before the code shipped.)
This isn't a matter of large-scale requirement changes on a given project. This is a matter of wanting to take an existing code base and add new features that nobody thought of when the foundation of the code was being laid down 5-7 years ago. (In fact, had they tried to put this stuff in at the time it would have all turned out to be a YAGNI violation and would have been wrong anyhow.) Also, per your comment in another close-by thread, the foundation was all laid down prior to my employment... not that that would have changed anything.
The assumption that large-scale changes could only come from changing requirements is sort of what I was getting at when I was talking about how the limitations of not-using-testing can end up internalized as the limitations of programming itself.
Might I also just say one more time that testing can indeed be used very stupidly, and tests with net negative value can be very easily written. I understand where some opposition can come from, and I mean that perfectly straight. It is a skill that must be learned, and I am still learning. (For example: Code duplication in tests is just as evil as it is in real code. One recurring pattern I have for testing is a huge pile of data at the top, and a smaller loop at the bottom that drives the test. For instance, testing your user permissions system this way is great; you lay out what your users are, what the queries are, and what the result should be in a big data structure, then just loop through and assert they are equal. Do not type the entire thing out manually.) But it is so worth it.
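A minimal sketch of that "big pile of data at the top, small loop at the bottom" pattern, assuming a made-up permission model and pytest's parametrize as the driving loop:

    import pytest

    # Hypothetical permission model, purely to show the shape of the pattern.
    ROLES = {
        "admin":  {"read", "write", "delete"},
        "editor": {"read", "write"},
        "viewer": {"read"},
    }

    def can_access(role, action):
        return action in ROLES.get(role, set())

    # The "huge pile of data at the top": (role, action, expected result).
    CASES = [
        ("admin",  "delete", True),
        ("editor", "write",  True),
        ("editor", "delete", False),
        ("viewer", "read",   True),
        ("viewer", "write",  False),
        ("guest",  "read",   False),
    ]

    # The "smaller loop at the bottom that drives the test".
    @pytest.mark.parametrize("role,action,expected", CASES)
    def test_permissions(role, action, expected):
        assert can_access(role, action) == expected

Adding a case is one line of data rather than another copy-pasted test function, which keeps the duplication out.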
1) How many lines of code is in that man-century project?
Is the number of lines of code ~proportional to the number of man-hours, or is the lines(man-hours) function ~logarithmic?
2) What does your typical project look like (or what did that project look like) in terms of testing vs. coding?
Do you spend a few months covering the old code with tests and only then start making changes?
Or you do "add tests - add features - add tests - add features - ..." cycle?
What's the proportion between time spent on writing tests and writing code?
3) What's the proportion of time you spend directly working (analyzing requirements/testing/writing code) and generally learning (books, HN, etc.)?
4) Do you do most of the work yourself, or are you mostly leading your team?
5) How do you pick your projects, and once you've picked them, what is your relationship with the client: fixed contract? Hourly contract? Employment?
So, at the end of the day, you never actually did design-from-scratch work, and instead used tests to verify incremental design improvements (key part: verify, not create)?
Starting from scratch does not take into account all the growing pains the previous software had, the ones that turned it into the quagmire you have learned to hate.
The sum total of the improvements were not incremental. Testing helped give me a more incremental path, but from the outside you would not have perceived them as such.
I don't do large-scale refactoring. Seriously. Small pieces? Sure.
But I've never, in 15 years of development, had to rewrite half of an application I've already written.
Spending a large amount of extra time and energy, things I don't have an excess of to begin with, for a "might" or a "maybe" seems like a rather poor choice to me.