> Individual unit tests should be on the order of a millisecond or less so you can rip through them very quickly.
So now we have to write tests for all our code, and tests that run fast. We have to refactor our code around the tests. Something seems a bit back to front here. Or is the assumption that if we have 100% test coverage that runs fast, we must have written the best code possible?
I think I am siding with the author of the article on this one.
I find that when I have to refactor code around the tests, it means the code wasn't very good in the first place.
The author complains about refactoring code into smaller, testable functions. I completely disagree. Code structured as small, easily understood functions that do one thing and have obvious inputs and outputs is good code, and it is much easier to extend and modify.
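For example (a made-up sketch, not anything from the article), here's the same logic as one do-everything function and as small functions with obvious inputs and outputs:

```python
# One big function: no single step can be tested or reused on its own.
def process_order(order):
    total = sum(item["price"] * item["qty"] for item in order["items"])
    if order.get("coupon") == "SAVE10":
        total *= 0.9
    return round(total + total * 0.08, 2)  # tax rate buried inline

# The same logic as small functions, each doing one thing,
# each with obvious inputs and outputs.
def subtotal(items):
    return sum(item["price"] * item["qty"] for item in items)

def apply_coupon(amount, coupon):
    return amount * 0.9 if coupon == "SAVE10" else amount

def with_tax(amount, rate=0.08):
    return round(amount * (1 + rate), 2)

def process_order_v2(order):
    return with_tax(apply_coupon(subtotal(order["items"]), order.get("coupon")))
```

The second version happens to be trivially testable, but that's a side effect of the structure, not the reason for it.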
Yeah, that's one of the article's weaker spots. But as a rule, I'd tend to interpret imprecise statements like that charitably. He's not saying that small, clear functions are bad, but that splitting functions for the purposes of testing is counterproductive. He's not saying anything about splitting for clarity and focus.
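To make the distinction concrete (again a hypothetical sketch, not the article's example), splitting purely for testing looks like extracting an internal step just so the test suite can reach it:

```python
# Helper extracted only so a test can call it directly; it adds nothing
# to the module's clarity and pins the tests to an internal detail.
def _normalize_for_test(text):
    return text.strip().lower()

def greet(name):
    return f"hello, {_normalize_for_test(name)}"

# test_greet.py
def test_normalize():
    # Renaming or inlining _normalize_for_test breaks this test
    # even though greet()'s behaviour is unchanged.
    assert _normalize_for_test("  Bob ") == "bob"
```

That's the counterproductive kind of split, as opposed to splitting because the function genuinely does more than one thing.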