The very first thing I do when I take over a codebase is to write tests. Without tests, it's impossible to do maintenance work or add functionality in any sort of rigorous fashion--how can you know that your assumptions about how the code works are correct? How can you know that your trivial change didn't break something?
Of course, tests don't actually tell you these things. But they can tell you that your assumptions were wrong, or that your trivial change broke feature xxx, and that's crucial information to have.
Do you always have the time/bandwidth to write these tests? I'm curious what you might do if an old codebase lands in your lap and someone says "here, fix these bugs by the impossible_length_of_time."
I appreciate the idea here, and I've done the same in certain circumstances, but typically that means writing tests for bits of functionality that I need to touch.
If it's impossible, then the diligent engineer says so. Projects are often doomed by people with "can-do" attitudes attempting to achieve the impossible.
Does an ER surgeon always have the time/bandwidth to scrub hands before surgery?
Does it potentially take the surgeon several days to scrub before an emergency surgery?
Edited to add: I appreciate the analogy, but it's flawed. If someone comes to me and says "here, developer A is on holiday, and we have this bug that is causing massive disruption in the field," is it appropriate for me to say "well, I can do that, but it will likely take me five days so I can understand the codebase and write the appropriate unit test suite"?
This is the circumstance I'm thinking about, not necessarily inheriting a codebase and having to add features to it. In that case, certainly, I'm going to take my time, read the code, and write tests.
In this scenario, there should be tests already present covering developer A's portion of the codebase, together with documentation on how to run them (though tests should be as self-explanatory as possible).
In fairness, I recognize that this isn't always the case in the real world. Sometimes you really do need to just blindly attempt to fix something, and there's nothing to be done about it. But it should never become a regular occurrence, and you should never get comfortable doing it. First thing I would do is tell my manager exactly why I'm uncomfortable, and what a conservative assessment of the risk is. If we decide to go ahead with the change anyway, I would create two new entries in the bug tracking system, which should be developer A's top priorities as soon as she returns: thoroughly vet my changes, and DEVELOP A SET OF TESTS.
I see exactly where you're coming from, and I'm there all the time.
It just troubles me that people are so often willing (and eager!) to waste a lot of time doing half-assed manual testing when they claim not to have any time to write tests. Especially when the state of the art in test automation is better than it has ever been.
This has me thinking that the importance of test automation is related to the proposed frequency of changes. If someone wants a one-off change for something this very second I'll just change it. If someone wants me to inhabit a codebase for any length of time, I'll always set up tests for it. The problem is where you can't tell the difference between those two scenarios until it's too late.
Let's say you don't have the time to give it the complete understand-and-write-unit-test-suite approach though.
How are you verifying you fixed the bug otherwise? By changing some code, building the app, and running it to verify the bad behavior doesn't happen anymore? I don't really see how not writing a unit test (assuming the code is unit-testable in the first place) saves you any time. You are doing testing anyhow.
And if it was a critical bug, personally I'd want to feel as confident as possible that I fixed all permutations of it.
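For what it's worth, the test you'd write in that situation is usually tiny. A rough sketch in Python/pytest (the parse_quantity function, the inventory module, and the failing inputs are all made up for illustration):

    # test_parse_quantity.py -- pins the reported bug and its nearby permutations
    # before touching the fix. Names and inputs below are hypothetical.
    import pytest
    from inventory import parse_quantity

    @pytest.mark.parametrize("raw, expected", [
        ("0", 0),        # the reported failing case
        ("-1", None),    # nearby permutations you'd want covered too
        (" 12 ", 12),
    ])
    def test_parse_quantity_handles_edge_cases(raw, expected):
        assert parse_quantity(raw) == expected

That's maybe ten minutes of work on top of the manual build-and-poke cycle you were going to do anyway, and it stays behind as a regression check.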
frobozz nailed it, really. If something can't be done, it is the engineer's responsibility to make that known to his manager, who is responsible for communicating that to whoever is asking for the work.
Of course working on a code base without a majority test coverage is dodgy (and intellectually frustrating), but it's a necessary skill.
I feel that it is unreasonable to expect that you will be able to pick up any code base and immediately write sufficient tests to get coverage on a majority of the code base. Speaking from my experience picking up old code bases, just being able to write isolated unit tests would require refactoring most of the code base, which is typically not something you will have time to do before you're expected to do other work.
I can't think of a single manager that I've worked for who would accept me saying, "it's going to take me 3-6 months of refactoring & building tests before I can start fixing bugs and providing enhancements."
> I can't think of a single manager that I've worked for who would accept me saying, "it's going to take me 3-6 months of refactoring & building tests before I can start fixing bugs and providing enhancements."
I can't think of a single developer I've worked with who would try that approach.
When a bug is identified in a project with few-or-no tests, the approach that I usually see taken is to write some sort of large, slow integration test that exercises the bug, then fix it. That allows you to prove that the bug exists and prove that the fix fixes it, at least for the documented case(s).
There's no reason to cover an entire legacy code base with tests if you're only changing a small portion of it.
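To make that concrete, here's roughly what such a test looks like, sketched in Python; the CLI name, the fixture file, and the expected output are placeholders for whatever your legacy system actually exposes:

    # test_report_regression.py -- a slow, end-to-end test that reproduces the
    # reported bug by driving the real program. Everything here (binary name,
    # fixture, expected total) is a placeholder, not a real project's layout.
    import subprocess

    def test_report_totals_include_zero_quantity_lines():
        result = subprocess.run(
            ["./legacy_report", "--input", "fixtures/order_with_zero_qty.csv"],
            capture_output=True, text=True, check=True,
        )
        # Before the fix this reported the wrong total; 100 is the expected
        # value for this fixture.
        assert "total: 100" in result.stdout

It's slow and coarse, but it needs no refactoring of the code under test, which is exactly why it works on a legacy codebase.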