In my experience, situations where the bug count goes up by 30% and sprint velocity goes down by 40% don't just happen out of nowhere. In every situation I've been in where that's happened, management had been warned repeatedly and well in advance that engineering was implementing short-term hacks that would come back to bite us later on. To respond to something you were warned about with NVC baby-talk (as opposed to a clear and forthright, "Yeah, we screwed up, how do we go forward from here?") seems like it would just make the problem worse.
In my experience, developers always warn management repeatedly and well in advance that engineering is implementing short-term hacks. And if management believes them and gives them a free hand, then sprint velocity goes down by 80% and bug counts go up by 90% as stuff randomly breaks in the refactor, which seldom works out in the end as well as development thinks it will.
(Hint: Refactoring is usually an inefficient way to write a second system, with all of the downsides of the second system effect.)
The moral of the story is that "...management has been warned repeatedly..." DOES NOT mean that the warnings are a correct diagnosis. Here are alternate diagnoses that may also cause the same symptom:
- Confusing product stories.
- Onboarding new developers without proper integration in the team.
- Poor communication patterns between developers.
- General company morale problems.
- Improved QA. (This one is actually an improvement. Reported bugs are up because they are being caught, and developer throughput is down because they have to fix the newly reported bugs. In dysfunctional organizations, though, developers can wind up upset at QA.)
These are all good points, though I think they may miss a deeper point about employing empathetic, curious, positive communication tools (if you prefer a term besides NVC, please use it; the politics of taxonomy would only distract us from the potential benefit of the ideas).
Here's a theorem: for common personal or professional conversation types, I maximize the expected outcome of "difficult conversations" (both for me and for the group) by assuming the best during the conversation and focusing on open-mindedness and collaboration.
Proof: enumerate the expected outcomes of using that mindset versus not doing so over all possible values of {am I in the wrong?, is the other person in the wrong?, the other person's disposition}. (For full rigor, multiply through by your priors on each of these, but I hope the idea is clear enough.)
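To make that enumeration concrete, here's a toy sketch in Python (every payoff and prior below is an invented, illustrative number, not data): score each combination of who is actually in the wrong under a curious approach versus a blaming one, weight by your priors, and compare the expectations.

```python
# Toy model only: the payoffs and priors below are made-up illustrative numbers.

# Subjective value of the conversation's outcome (arbitrary scale), indexed by
# (am_I_wrong, is_other_wrong) and the approach I bring to the conversation.
payoff = {
    (True,  True):  {"curious": 0.6, "blaming": 0.1},
    (True,  False): {"curious": 0.7, "blaming": 0.0},  # blame backfires hardest when I'm the one at fault
    (False, True):  {"curious": 0.8, "blaming": 0.4},
    (False, False): {"curious": 0.5, "blaming": 0.2},
}

# Priors on who is actually in the wrong (also invented for the sketch).
prior = {
    (True,  True):  0.2,
    (True,  False): 0.2,
    (False, True):  0.3,
    (False, False): 0.3,
}

def expected_outcome(approach):
    # Weight each scenario's payoff by its prior and sum.
    return sum(prior[s] * payoff[s][approach] for s in prior)

for approach in ("curious", "blaming"):
    print(approach, round(expected_outcome(approach), 2))

# With these numbers the curious approach is at least as good in every single
# scenario, so it wins in expectation no matter what priors you plug in.
```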
In the example you gave, if I'm management and I'm in the wrong, being curious when trying to debug the problem will make it way easier for me to be open to my mistakes, since I've introduced no specter of guilt, shame, or blame for the fact that there is a mistake.
The deeper point is true, but it hits at an even deeper point, which is what lay behind the attitude in the comment that I was responding to.
Are you familiar with https://www.triballeadership.net/? Based on language patterns, people and organizations can be classified into 5 levels. The linguistic patterns are (1) Life Sucks, (2) My Life Sucks, (3) I'm Great (but you suck), (4) We're Great (but some other organization sucks), and (5) Life Is Great.
There are formal tools behind that classification, and formal ways to measure it, but in general the opinions of the people you interact with will be fairly consistent. That notwithstanding, most people judge themselves as operating at a level or two higher than they actually do.
Here is where that becomes a problem. Most successful executives operate at level 3, but think that they operate at level 4 or 5. They can adopt any linguistic pattern that they think will make them more successful, but the tools just make them more effective at being level 3, with the attendant problems for others.
This is not a slight on the communication tools. Emotional people tend to escalate until the emotion is noticed and acknowledged, so doing that up front really does calm the other person down, and really does facilitate communication. Likewise, approaching challenges with openness, curiosity, and open-mindedness really does make problems easier to solve.
However, if you give a person the tools without changing their underlying attitudes, what looks and sounds wonderful will, over time, create the exact same type of problems. It just creates them more effectively.
> if management believes them and gives them a free hand, then sprint velocity goes down by 80% and bug counts go up by 90% as stuff randomly breaks in the refactor.
Velocity cut to a fifth and bug counts nearly doubled is not the result of any sane refactor. Anything close to this would indicate that either the devs massively misjudged the complexity of the job or their own competence, or something else in the process is broken, such as letting the senior estimate and the junior implement the change. The vast majority of refactorings are, in my experience, non-events, simply because I'm by now used to working with thoroughly tested systems.
Sounds to me like you are saying that refactoring is usually done incompetently.
Not at all intended on my part. Nor would I say it is usually what went wrong.
The primary root cause is an underestimation of the true complexity of the problem, the details of which only start to become apparent after you've begun the rewrite.
An inevitable secondary cause is organizational pressure to add features to the rewrite. This happens after frustration builds because other departments have been unable to get features into the legacy code. I can't fault developers for lack of skill in managing these organizational pressures. That skill is not a core competency for developers, but it generally is for the people who are trying to get their agendas onto the development roadmap.
If there is any additional challenge from the lack of skill of the developers, then you had much bigger problems from the start.
Do you mean rewrite? You're acting as though refactoring is a process that involves destroying bad code and rewriting it from scratch. It's not.
Refactoring is usually the modification of existing code while maintaining existing functionality in order to make the code easier to maintain and reason about for future developers.
Furthermore, if you're given points for refactoring (as you should be), your velocity is very unlikely to go down, in my opinion. I will say that many teams and managers do not do this, but that's because they're poor managers.
If you have tests in place, you will cause fewer bugs when refactoring. If you don't, you will still create a few bugs as you refactor. But imo, fixing the bugs created by refactoring is just... part of refactoring. If that means the task requires an even higher number of points, so be it.
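For what it's worth, here's a minimal sketch of what "tests in place" buys you (hypothetical names, Python with a pytest-style test assumed): the test predates the refactor, pins the existing behaviour, and is what catches it if the restructuring goes subtly wrong.

```python
# Hypothetical example: the threshold check was extracted into a helper during
# a refactor; the test below existed before the refactor and pins behaviour.

def shipping_cost(order_total):
    return 0.0 if qualifies_for_free_shipping(order_total) else 4.99

def qualifies_for_free_shipping(order_total):
    # The original inline code used ">= 50". Accidentally typing "> 50" while
    # extracting this helper is exactly the subtle change the test would catch.
    return order_total >= 50

def test_free_shipping_boundary():
    assert shipping_cost(50) == 0.0      # exactly at the threshold ships free
    assert shipping_cost(49.99) == 4.99  # just below it does not
```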
As a manager, there's no point in forcing the team to accept a reality they know isn't possible. It's not beneficial for anyone.
I mean that when developers are given leeway to devote a sprint or two to a large refactor "to take care of technical debt", it naturally turns into much more of a rewrite than expected.
I'm not a professional, but you seemed to be using refactor and rewrite interchangeably. I had the impression they were different, and you seem to be applying all the downsides of rewrites to refactors.
> sprint velocity goes down by 80% and bug counts go up by 90% as stuff randomly breaks in the refactor.
Er... that's not a "refactor". I know the term nowadays gets used liberally for any kind of rework, but refactoring involves making the tiniest of steps, often automatic or automatable, that improve the structure of the code without affecting functionality, all under the umbrella of a comprehensive test suite that catches violations.
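As a sketch of what those tiny steps look like (hypothetical code; Python assumed), here are two micro-steps an IDE could apply mechanically, with the test suite run after each one:

```python
# Step 1: replace the magic number 3 with a named constant (no behaviour change).
MAX_LOGIN_ATTEMPTS = 3

def remaining_attempts(failed_attempts):
    return MAX_LOGIN_ATTEMPTS - failed_attempts

def is_locked_out(failed_attempts):
    # Step 2: reuse remaining_attempts() instead of repeating the arithmetic.
    # Each step is mechanical, and the tests are run after every one.
    return remaining_attempts(failed_attempts) <= 0

def test_lockout_behaviour_unchanged():
    # Pinned behaviour from before either step.
    assert not is_locked_out(2)
    assert is_locked_out(3)
```

If one of those steps breaks a test, you back it out immediately, which is why nothing "randomly breaks".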
But technical debt is a real thing. Making sure that refactoring and clearing debt are part of every sprint (one or two tasks per sprint, probably) is important.
However, if the benefits of refactoring are abstract, and not attached to immediate or short-to-medium-term features or performance, there is a very high probability that it is not tech debt. "Code looks better this way" is an overrated rationale for refactoring.