It's great to see deeper pieces about developer productivity - in this case starting from first principles and the actual experience of developers. The notion of feedback loops has also been explored in e.g. https://martinfowler.com/articles/developer-effectiveness.ht...
The bad reputation of developer productivity metrics comes from the misguided assumption that developers should be measured. The better approach is to treat developers as customers of the management team / engineering enablement team /etc. In that sense, developer productivity is actually a measurement of management effectiveness / organizational health.
Once you build your developer productivity approach on this basis, what needs to be measured becomes much clearer - for example: interruptions caused by meetings, performance of local tooling like local builds, latency of CI, number of on-call pages triggering outside business hours, etc.
The right set of metrics depends on your team and can be sourced from surveys and just talking to people about what's painful on a daily basis. As a quick and dirty solution, I'd even recommend piping Github webhooks and other events to a product analytics tool like Amplitude or Mixpanel. You'd be surprised how fast you can understand things like build or CI latency by using a classic funnel visualization.
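To make that concrete, here's a rough sketch of the quick-and-dirty version (Python; it assumes Flask and requests, a GitHub `workflow_run` webhook, and Amplitude's HTTP V2 ingestion endpoint, so treat the URL, payload shape, and field names as things to verify against the current docs rather than a definitive implementation):

```python
# Sketch: receive GitHub workflow_run webhooks and forward a
# "ci_run_completed" event (with duration) to a product analytics tool.
# Assumes Flask + requests are installed; endpoint and payload shape follow
# Amplitude's HTTP V2 API as I understand it -- verify before relying on it.
from datetime import datetime
import os

import requests
from flask import Flask, request

app = Flask(__name__)
AMPLITUDE_URL = "https://api2.amplitude.com/2/httpapi"  # assumed endpoint
AMPLITUDE_API_KEY = os.environ["AMPLITUDE_API_KEY"]


@app.post("/github/webhook")
def github_webhook():
    payload = request.get_json(force=True)

    # Only care about completed CI runs; ignore every other event type.
    run = payload.get("workflow_run")
    if not run or run.get("status") != "completed":
        return "", 204

    # Field names per the GitHub workflow_run webhook payload.
    started = datetime.fromisoformat(run["run_started_at"].replace("Z", "+00:00"))
    finished = datetime.fromisoformat(run["updated_at"].replace("Z", "+00:00"))

    event = {
        "user_id": run["actor"]["login"],       # developer who triggered CI
        "event_type": "ci_run_completed",
        "time": int(finished.timestamp() * 1000),
        "event_properties": {
            "repo": payload["repository"]["full_name"],
            "workflow": run["name"],
            "conclusion": run["conclusion"],     # success / failure / cancelled
            "duration_seconds": (finished - started).total_seconds(),
        },
    }
    requests.post(AMPLITUDE_URL, json={"api_key": AMPLITUDE_API_KEY, "events": [event]})
    return "", 204
```

Once events like that are flowing, a simple funnel or duration chart over ci_run_completed gives you the CI latency picture without building anything custom; pointing the same handler at Mixpanel's ingestion API instead is the same shape of code.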
A lot of great engineering teams are migrating to this approach, especially when they have a dedicated platform / enablement / productivity engineering team.
I think it's a little bit more complicated than that.
What you are describing is "treat developer productivity as a supply-side problem": developers continually demand resources to improve their productivity, and the organization supplies them.
However, there are two issues:
#1: Developers don't necessarily know how to be productive developers.
#2: Developers might not be motivated to improve their productivity.
Hence, it's not necessarily an efficient market.
I find that you need to control for market inefficiencies by:
- Control for #2 by having a "tech lead" or senior engineer be directly responsible for their developers' performance. Whether that's a two-parent leadership team (people manager + tech leader) or otherwise, developers must have direct oversight of their personal productivity.
- Have appropriate incentives in place for developers to improve. A couple of places (IBM, for example) actually have excellent infrastructure for enabling productivity, but no incentive to use it.
> Control for #2 by having a "tech lead" or senior engineer be directly responsible for their developers' performance. Whether that's a two-parent leadership team (people manager + tech leader) or otherwise, developers must have direct oversight of their personal productivity.
Tech leads in most engineering firms don't have that kind of control. That's why OP mentioned "manager". You can't make someone responsible for stuff they don't control. Even if a manager "tasks" a lead with this, the lead usually doesn't in fact own it. The paradigm you're asking for requires a people manager to be leveled the same as their tech lead so that they share responsibility for outcomes. Most places don't work that way.
> Have appropriate incentives in place for developers to improve. A couple of places (IBM, for example) actually have excellent infrastructure for enabling productivity, but no incentive to use it.
OP's point is that developers are a reflection of their environment more than they are of their own knowledge. I think there's a DevOps study from DORA that covers this as well.
> Tech leads in most engineering firms don't have that kind of control
That's a mistake by most engineering firms.
Direct technical leadership is extremely important: weekly direct feedback and performance evaluation on technical tasks, by a technical leader who is directly responsible for helping that person improve.
I think the vast majority of the status quo re: "managers" in the software industry is absurd rubbish.
I use the army/medicine analogy.
In the military/medicine, a junior doctor/2nd lieutenant has every aspect of their work overseen and audited by an expert who can do their job better than they can.
In the software industry we often skip this step.
Most software is cut-price rubbish. Engineers who enjoy leadership and mentoring other engineers cost $$$. It's not cost effective, so bad management ensues.
> OPs point is that developers are a reflection of their environment
Definitely agree. I'd say my points are additive, they're not an alternative to OP's.
Thank you! I've read through every comment here, and this:
"#1: Developers don't necessarily know how to be productive developers.
#2: Developers might not be motivated to improve their productivity.
Hence, it's not necessarily an efficient market."
has been my experience lately in thinking about this problem. Those who are productive generally know how to be productive and _want_ to improve their productivity. Many have the motivation but don't know how, and many others are hampered by external factors that limit motivation, or simply don't care.
> Once you build your developer productivity approach on this basis, what needs to be measured becomes much clearer - for example: interruptions caused by meetings, performance of local tooling like local builds, latency of CI, number of on-call pages triggering outside business hours, etc.
But those metrics don't really help management decide how many hours developers must be required to sit at their desks per day, how much unpaid overtime to demand from them, or how many years to wait before giving them a tiny raise! /s