The Framing of the Developer (svese.de)
175 points by KingOfCoders on May 6, 2020 | 65 comments



One of the most well-known companies using what the author calls “impact frame” is Facebook.

There, engineers' promotions and perf used to be heavily dependent on their impact. Developers needed to be aware of what the business impact of their work was, and thus they needed to measure it (using the excellent tooling available).

This model worked very well while the company was growing. Lots of features core to Facebook today were created by engineers, not product managers. Newsfeed. GraphQL. A/B testing frameworks.

The difficulty comes both when growth slows - and it's hard to have impact without cannibalising another team's impact - and when having more impact actually causes harm to the company in a different direction. While engineers were optimising for engagement, tweaking the news feed algorithm to show items you were likely to click on, sensationalist content (sometimes: fake news) was over-promoted. While the ads team optimised for ad engagement by exposing various metrics, they exposed metrics that later turned into public uproar.

The idea of only optimising for impact is a great short-term strategy. Long-term it is not nearly as black-and-white.

My take is that the optimum is in-between. Engineers should have an understanding of, and input on, the business impact. But the strategy and prioritisation framework does need to come top-down, aligned with the long-term goals of the business.


> My take is that the optimum is in-between. Engineers should have an understanding of, and input on, the business impact. But the strategy and prioritisation framework does need to come top-down, aligned with the long-term goals of the business.

I used to work for a small-ish Dutch energy company, and they went to great effort to implement this kind of framework; I thought it was great. We basically had huge quarterly roadmaps pasted onto one big wall of the office, and you would frequently see the product, marketing, sales and C-level people standing there discussing and prioritising. The transparency meant anyone could wander up and see what the strategy was for the quarter and therefore what would be prioritised. It was sometimes gloriously messy, but you really felt included and incentivised to work towards the company's common goals. Wish everyone did this.


Transparent, inclusive, low-ego. Seems like a very Dutch way to organise things. Like you, I also wish this was more common.


Incentivizing your staff to optimize a measure is always dangerous. You'd better be darn sure that that measure is a very good approximation of your utility function.


I always think of the Business Dudes episode of Adventure Time: https://www.youtube.com/watch?v=_mxB_OlSsKI

It's so blunt that it's great; the business dudes are so adept at optimization while ignoring externalities.

"These poor souls are lost without jobs, we can't ignore their plight!"


One thing I learned during my time at Microsoft is that when a measurement becomes a metric, it ceases to be a good measurement. So much effort goes into gaming metrics which roll up to impact people's salaries that you can no longer really trust the results.



If you head to YouTube and look up a channel called something like JomaTech, you will find a guy who just recently quit Facebook. He was a Data Scientist Analyst, as they call it there (according to him).

It seemed he liked working at FB, but he mentioned several times how your impact is your tracked metric, and if you didn't have impact then who cares what you're doing.

I think it can be flawed: what if your team doesn't like you and you're a Data Scientist? Well, now they aren't going to listen to your compelling data and your impact is a 0. Or they might have some other motive; it could even be one person with power on your team!

Obviously I'm describing the worst, but it seems a flawed system.


There is also the bus factor.

If critical business functions depend on a piece of software maintained by one person, the organization has a problem.

If you decide to become dependent on a piece of software, that software had better be well documented, tested, audited and maintained by a group of people with a certain level of redundancy, in case one of them is not available or leaves the organization.


Preventing the destruction of the company in the case of low probability risks is its own impact.


Let's say the probability of someone leaving their job on a given day is 1/100.

If you have 100 employees, each of whom is working on some critical piece of software that nobody else fully understands, eventually you get a death spiral of orphaned projects.
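
To put those back-of-envelope numbers into a quick sketch (assuming departures are independent and taking the 1/100-per-day figure above at face value - the numbers are purely illustrative):

  # Illustrative only: the parent's assumed numbers, not real attrition data.
  p_leave = 1 / 100      # assumed chance that any one employee leaves on a given day
  employees = 100        # each owns a critical project nobody else fully understands

  # Chance that at least one critical project is orphaned on any given day:
  p_any = 1 - (1 - p_leave) ** employees
  print(f"P(at least one departure today) = {p_any:.0%}")   # ~63%

  # Expected orphaned projects over a quarter (~63 working days), if nothing changes:
  print(f"Expected orphaned projects per quarter = {employees * p_leave * 63:.0f}")   # ~63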


Interesting article, however I have a different take on it.

Agile/XP was originally envisioned to reduce the impact of middle management/business processes on software development and put the customer and developer together to get things done.

During the early days of XP/Agile this worked in certain areas where it was feasible to do so and there were not a lot of already established business processes and business people who had a vested interest in preventing it.

At some point middle/product/project managers realized that this was just another business process and adopted and morphed it as their own. Although agile was invented by and for developers, today only non-engineers are involved in 'agile'. As a result we have craziness like scrum master as a full-time non-development job that requires certification from a business process body. Additionally, the process people have added a ton of process cruft (burn-down charts, playing poker with card decks, etc.), most of which is unnecessary business process.

So we're back to where we were before and developers and customers continue to be separated from each other.

In the new world, engineering leadership is being replaced by non-technical business people with certifications. This has led to less skilled teams of 'developers and testers' that are no longer guided by highly technical engineers who have either actual or perceived power in the organization.

The net result of all of this is that developers are struggling in many orgs.


I like the diagram in the middle of the article and think it has a lot of value, but I want to comment on the "XP failed" aspect of the article. I've very rarely seen any team actually implement XP. The rare times I've been able to do so, it has worked spectacularly well. However, the constraints are difficult to deal with, and XP in its original form only works with small, co-located teams. If you have more than 3-4 pairs of programmers, if they are not changing pairing partners regularly, if the developers are not co-located and, importantly, coding at core hours, if you are not TDDing your code, if you are not ruthlessly refactoring, if you are not integrating all of your code several times a day (I like several times an hour!), etc, etc, etc, then you are not doing XP. I think it's very fair to say that XP as a movement has failed, although I think it has contributed mightily to the tool chest of developers. I do also think it's fair to call out XP on its lack of program management. Any successful XP team needs to sort that out, and leaving it to the programmers is rarely the correct solution.

Just my usual rant on this topic :-). I'm sure it is surprising to many people that XP can be wildly successful, so I try to speak up when I see this repeated.


This is the "no true scotsman" thing that plagues SCRUM too. The "you're not doing it right, so obviously it's not working for you" argument.

If a team attempts to implement XP, and fails, then XP failed. Any discipline that doesn't work in real situations, using real people with real politics and management issues, doesn't work.


I can understand what you are saying, but there are some very subtle and difficult things in the world. The fact that only a very few people in the world can be successful with the things that make a concert pianist successful does not mean that concert pianists have failed. I will absolutely agree that XP is not a discipline for the masses, but it is incredibly effective for those who are able to use it well (to the extent that it covers -- there are lots of areas of software development that XP does not cover at all and where you can fail easily while still using XP). I think one of the problems with XP as a movement was that it actually became more vague over time. I actually think that the original 12 rules are better than the current 12 rules.

I have implemented XP in real situations using real people with real politics and management issues. It has worked incredibly well for me. It does not work in all situations and all political environments with all management issues. You need to be very careful when selecting XP to make sure that the situation you are using it in is appropriate.


I'd agree with that. It's one approach (of many possible), and it only works in certain circumstances.

I think part of the problem is that it's difficult to tell beforehand if XP will work with any given team. And that's the "no true scotsman" part: if you can't tell beforehand whether it'll succeed or fail, and all failures are down to "not doing it right", then it's broken ;)


I may be being overconfident, but I feel I can tell you if it's going to be successful (I need to spend some time with the team, first though). That used to be my gig ;-) I'm usually pretty straightforward when I go onto a project and tell management: XP is not going to work for this team, let's try something else. Where I've had difficulty is when management wanted a hybrid approach and usually that doesn't work well.


nice gig :) what was the biggest signal that the team would fail trying to implement XP?


Hmm... Perhaps the best way to put it is maybe the opposite. There are lots of ways to fail, and less ways to succeed :-) Of all of the practices in XP, I think the one that gives you the most bang for buck is actually the planning game. However, there are a couple of things that are super important (as far as I can tell). First, trying to plan all your stories and then working until you hit the functionality you want almost always fails. And in fact, it almost always fails hard. It's much better to set a deadline and then say, "We'll release the best thing we can manage in that time frame". It's really counter-intuitive, but my impression is that this kicks the organisation out of the mindset of "plan once and then stick your head in the sand". One of the most important parts of XP is the exploratory nature of it. One of my colleagues put it best (and may have got it from somewhere else): the time you know the least about the project is at the beginning.

So one of the things I do straight off the bat is to take a list of stories to the stake holder. Usually they say something like, "When will this be released" and I say, "Never. We aren't going to release this. We're going to release something better than this. I don't know what it will be yet. It's related to this, but the actual details will be different". The extent that the stakeholders can grasp this concept will pretty much determine whether or not you can use XP (or, in fact, any iterative approach).

This is actually pretty fundamental, because there are advantages and disadvantages to XP's famous "No Big Design Up Front". It's valuing the ability to change the design frequently and easily over straight throughput. So if your stakeholders are not OK with the requirements changing frequently, then you are probably actually better off doing the big design up front -- you'll get the wrong system, but at least you'll do it quickly ;-)

You may think that it's a hard sell to say that the requirements must change, but in my experience probably about half of the organisations embrace that idea and half reject it. Ironically, I've found that sales focussed organisations actually have an easier time understanding this than internal tools organisations. Stakeholders that need to do frequent demos to clients often want to change things frequently. Stakeholders that are trying to checkmark "we did the thing on the strategic list" often don't want the extra complexity.

Within that same framework, I've found that 1 week sprints are literally the holy grail of XP. That is, you really, really want it, but it's actually impossible. I'm sure that there are some situations where it is possible, but the activities surrounding planning are numerous and complex and those organisations which insist on faster sprints often skip the important activities, leading to failure. So it's pretty important that the stakeholders understand the rhythm of planning: you have an idea for a change, you explore that idea in a meeting, someone breaks down that idea into realistic work pieces, the team estimates the pieces (I like one-size-fits-all these days... so all stories have to be the same size and you send back any stories that are too big... it's just a little easier for everyone to comprehend), pieces that are small enough go into the backlog, pieces that are not small enough are broken down again, then another meeting to prioritise all of the stories, then another meeting to build a sprint. It's a lot of work and sometimes it's hard to get the commitment to doing it (BTW, developers only go to the estimation meeting... this is incredibly important for reasons I won't get into).

One of the signals that you're going to run into trouble is that there is nobody with good XP knowledge at a high enough level to get the stakeholders to actually show up to the meetings. Once you get a good rhythm going, I find it's pretty easy to sustain, but breaking that barrier is a bit difficult and you need that champion in management. The way it's worked best for me was when I was working as an XP coach and I reported directly to the director, rather than through the management chain. Even though I don't do management consultancy and don't need to hobnob with the guys in the stratosphere, at the beginning I often do need a stick.

Just putting those two things in place will go a long way towards making you successful. It's the first thing I do on a team and if I can't manage it relatively quickly, then I know it's not going to work out. There may be other ways of being successful, but I don't know what they are. I'm always very candid about that when I join a team.

Apart from that stuff, from a more technical perspective I'll tell you one surprising thing. I've never been successful doing continuous deployment using XP. Never. I just don't think it mixes very well. This is probably one of the most innocuous ways to shoot yourself in the foot. Partly it's due to the above: it breaks the rhythm of development because now you have stakeholders who are not content to wait for the end of the sprint to get what they want. So if you must have continuous deployment, then I think XP is not for you (caveat: as always, perhaps there is a way to make it work, but I haven't found it).

Further to that (and I've thought about it a lot), I think one of the big advantages of XP is that once you have your rhythm in place, you have at least 3 weeks from proposing some work to delivering it. This will seem awfully slow to a lot of groups these days, but in the context of the 90's this was pretty darn quick. But that 3 weeks allows a lot of dust to settle. Stories, despite how they sometimes look, are never in isolation. The thing you are building needs to have an internal logic. If you rush the work into the build, you don't have enough time to see it properly within the context of what you are building. It took me a long time to really understand what problems really fast development brings and I always felt like we were missing analysis, even though we would work hard at analysis. I now know that it's simply time and context.

Another thing that's kind of crucial on an XP team (IMHO) is the concept of ownership. It's kind of strange because one of the values is no code ownership (and I really believe in that). However, for requirements and other things, you need a single source of truth. I call the person playing the customer role the "Customer Proxy". It's actually really important that this person owns the requirements. If there is a question about requirements, then that person answers it. This is an incredibly difficult job and it's a full time job. So one of the things I insist on an XP team is that you have a full time customer proxy and that they only service the single team of 6-8 developers. Of all the crucial roles on a team, this is the one that is hardest to sell to management in my experience. Management has a very difficult time understanding why they have to spend a full time salary for someone who doesn't produce any code, or do any selling, or go to any trade shows, or whatever.... But you need that person who knows everything there is to know about the thing you are building and who can tease out the logical connections. It's also critical (critical) that they can answer questions from developers immediately. Oh how much time is wasted because developers don't know what they are building? Interestingly, on my most successful XP team, the PGMs all declined to take that role and foisted an intern on me (who was studying to be an architect). He was amazing. Subsequently, I've stolen people from documentation and other areas to fulfill this role. It's important to understand that this is a creative role, not a managerial role, so I've found it best to look outside of the normal stakeholders for this work (and TBH, I've rarely had a PGM who could do it and never had a manager who could do it -- so grab that intern and thank your lucky stars ;-) ).

I haven't really gotten into the technical side of XP at all and I've been typing for a while, so I think I won't go there. If you are interested, I'd be happy to type some more another day :-), but I'll leave you with one last management issue that I think is really important to resolve very early on: reporting.

XP never had standup meetings. If you have co-located team members working during core hours (required IMHO, and also a divisive issue, but I can talk about it another time), there is absolutely no need to have a standup. However, sometimes some of the developers want to have it (hard to believe, but it happens!). I do not allow any PGMs, the customer proxy or any manager to attend a standup meeting! That breaks every dynamic of the meeting. You can imagine I get a lot of push back for this, but I never cave. Management of any sort breaks stand up meetings. If you have a coach (and I recommend it), then they can facilitate the meeting, but other than that you want the developers to be able to talk about whatever they want to talk about. You explicitly want to avoid any discussion about status. Status discussions cause your developers (either consciously or unconsciously) to change their behaviour as developers -- virtually always for the worse.

All status must be visible through artefacts produced by the normal working of the team. I'm really super serious about this and I will even go and chastise managers who wander over and say, "How is X going?" If you can't see how X is going by looking at the sprint chart, then we need to fix the sprint chart.

I had more to say, but bumped into the "That message is too long" error. Sorry!


>The fact that only a very few people in the world can be successful with the things that make a concert pianist successful does not mean that concert pianists have failed.

Or does it mean they have failed at making a guide?

If you have a guide on 'how to be a professional concert pianist' and people who faithfully follow it only have a 1 in 1000 chance of succeeding, perhaps it is a bad guide. Now, maybe everything in the guide is necessary, but it isn't sufficient. It may also be the best guide around, as every other has a success rate of 1 in 10,000 or worse. However, that doesn't mean it is a successful guide. It is still failing to capture something that is needed to be a successful guide that can be spread to the masses.

If it is a guide for how a successful pianist can become even better, then perhaps it is a problem of too many people thinking themselves successful pianists when they aren't. So maybe what we need is to better focus on identifying which organizations aren't mature enough to adopt something like XP, so they stop trying out a path that will fail because they aren't ready for it.


Exactly! I just tried to cook some scrambled eggs. One of the eggs went on the floor, and the eggs in the pan cooked too quickly and turned into an omelette. Scrambled eggs has failed!


hehe nice analogy. But yes, if this was the experience of most people who made scrambled eggs, then I'd be comfortable saying that recipe had failed ;)


Let's switch scrambled eggs for a two-Michelin-star restaurant kind of recipe (which is where I would put XP - at the very top for developers).


> If a team attempts to implement XP, and fail, then XP failed

I've never done XP, but it's probably more reasonable to say: "If a team attempts to implement XP, and fails, then..."

(Choose one:)

- Maybe XP requires more discipline than the team is willing to give.

- Maybe XP is too rigid for that team.

- Maybe XP didn't fit the team's context or domain well.

Silver bullets don't generally exist. Maybe OP is right ("true" XP has real benefits), but the effort required to do "true" XP isn't worth it for many teams.

This might sound like "no true Scotsman", but it also applies to a lot of hard things in life.


The framing he speaks of in the beginning is the crappy framing developers, by default, apply to themselves. I remember when I was an engineer I had a similar mentality and felt like I was at the kids' table. Maybe that's why I got sick of it and became a PM.

Good PMs certainly do not think primarily in terms of backlogs and stuff to ship; developers who are not curious about impact and fetishize implementation details think this way. Nor do they judge engineers purely as a function of their ability to crank through the backlog (rather, it's their responsibility to set correct expectations with leadership and reconcile the problems to be solved with the resources available to execute against them).

Impact starts with prioritizing problems relative to the state of the world, not prioritizing solutions or features like we normally think of as on a backlog. Once you do that, the backlog comes pretty naturally.


Maybe you work in a sane company. I work in a company where every freaking VP, SVP, EVP, P and various C wants something different and they battle among themselves; all the while we are trying to develop the last thing anyone said, right up until it all changes. We spent 18 months on a single project in which change accelerated over time, every single feature changed in radical ways, and then in the last 3-4 months they insisted that since we hadn't shipped anything we would need to work extra hard on the latest set of things or there would be hell to pay. And then, when we got it done anyway, they cancelled the whole project the day before the CEO announcement, so they could start on yet another one of these that someone had convinced them was better. Sigh.


... was this a software company? That’s horrifying.


Our business uses a lot of in house (and out house) software, but we don't sell software or hardware.


I've never been in a place that pretended there is no person with the authority to tell you what to do; whatever that person is called, theirs is the ass that gets kicked in case things don't work.

What this article describes is alien to me: an organization where devs are apparently autonomous, so they're responsible for failures, but still need to go in the direction set by someone else. There are multiple problems with that system. From the outside, you can ask for anything without concerning yourself with priorities or tradeoffs. From the inside, different people can push in different directions.


> an organization where devs are apparently autonomous, so they're responsible for failures, but still need to go in the direction set by someone else.

I've just left a place like this, largely for this reason. Someone with impressive credentials was hired by the senior leadership and given project ownership. The problem is that neither this person's education nor their work experience had anything to do with software; they'd never worked on a software project before.

Relatively quickly, I started noticing exactly the dynamic you've described. If the backlog was growing (which it always is), engineering would get reprimanded and have the whip cracked on us. But when we'd reach objectives, we'd get no recognition or even acknowledgement, while the "product owner" would get showered in praise. After passing our first major feature milestone, the "product owner" was literally greeted with a bottle of champagne on their desk.

It took me a few weeks (read: too long) to trust my perception that most of the "delays" came from how terribly the requirements were written, and how often they changed for arbitrary reasons that created no value. Choice example: does a time period "from January 2019 to January 2020" include two Januaries? The answer to this question would change, sometimes twice a week, when asked of the "product owner".

In retrospect, I was in the exact no-win scenario you described, in which my team owned every failure but no successes.


> Choice example: does a time period "from January 2019 to January 2020" include two Januaries? The answer to this question would change, sometimes twice a week, when asked of the "product owner".

Unfortunately, the only way you can protect yourself is to have a paper trail. If the PO says something, write it down in your tracker (Jira, Trello, etc.), or at least in an email. If the PO is unclear, write down the answer as you understand it and ask directly ("Is my understanding correct?").

If the PO changes their mind, document that as well.

It shouldn't be necessary, and in many companies it is not, but some environments are toxic.


Though even in a non toxic environment you should document any product decisions and why they were made. Your PM now could be the most amazing and helpful person in the world, but in 3 years it may be important to know if there is an important reason a time period from January to January includes both Januaries (and thus something important will stop working if you change it) or whether that was an arbitrary decision because the question had to be answered one way or the other, so they flipped a coin (and thus there is no special reason to believe changing it is dangerous).

Always document product decisions. "Documenting" doesn't have to be a big process, and you don't really need to make polished documentation, but at the very least dump your notes from when decisions are made somewhere permanent and searchable (github comments / jira comments / even slack messages in a non-dm channel count). Your future self will thank you.


This is good advice, but in my case the "product owner" was too invested in seeming competent (by, e.g., over-confidently and over-emphatically making decisions that turned out to be coin-flips). Finally, there was no accountability for product ownership; other than the engineers, no one at the company was positioned, or willing, to notice/address the delays in the project that were caused by the product owner changing their mind arbitrarily.

So, while I did spend a few months documenting all these issues, the company itself had not made room for engineering-led process improvements. Noticing that, I started spending political capital on trying to fix it, which is incredibly stressful, slow, and unrewarding work. After a few months of this, I finally came to my senses and realized that, this not being my company, I would not accrue any of the benefits of improving their processes. Time to leave.

The lesson, for me, has been to avoid working at companies that are not product-focused (in which case engineering is just a cost centre, with extremely low status within the organization), and to avoid stubbornly non-technical founders (i.e. any founder/owner who refuses to care about their engineering department beyond asking "is it done yet?").


This seems like such an obvious thing to me. When an issue comes up for prioritization during review, we simply ask "What value does this bring to the business?"

Every single meandering bullshit conversational path we happen to find ourselves upon can be immediately curtailed by stating these words. Technical folks may not like hearing it all the time, and it doesn't necessarily have to be the dominating factor 100% of the time (i.e. side-projects, 20%-time code activities, etc). But this is the ultimate tool for ending arguments and separating the bullshit from the reason people are receiving paychecks.

Very scary time-crunch episodes can in many cases be defused by simply going to the customer and asking them, "Out of these 10 things, if you could have 3 by the end of the month, which would you pick?" I think you would instantly get a picture of how you should prioritize your efforts to drive value for them. I strongly suspect that if you can meet incremental targets in descending order of value (as perceived by the customer), you can keep virtually anyone happy. Hitting customer targets successfully is also very good for morale and can further compound productivity on all ends.


> simply going to the customer and asking them, "Out of these 10 things, if you could have 3 by the end of the month, which would you pick?"

This is only simple for a limited number of businesses. If you're building a product and not selling a service, it's much harder to nail down what is actually important to potential customers. Even talking to existing customers about new features is never a fool-proof way of getting the right prioritization.

Even when you do have a customer you can go and ask, they are rarely a single entity. The person in charge of the buying decisions may think some features are important, but the people who will end up using the product may have other priorities. Even then, prioritization may be hard on their part. If I were building a word editor for you and I came and asked whether word counts are more important than page numbers for you, would you be able to give a confident answer? Would your colleagues give the same answer? How about prioritizing both features over some stability work (e.g. you get both page numbers and word counts, but the product is still a little crashy)?


"We need to add unit tests" "That brings no value to the business!"

"We need continuous integration" "That brings no value to the business!"

"We need version control" "That brings no value to the business!"

"We need to upgrade from JDK 1.4" "That brings no value to the business!"

"We need to safeguard user's data" "That brings no value to the business!"

And on, and on, and on.


That's because "bringing value to the business" is confused with "increasing quarterly profits."

The problem is poor valuation. All of the above can be process lubrication. The actual value comes from the process that goes on top of them. But if they're not in place, process implementation is going to have a lot of unnecessary friction, which reduces the eventual value of the core business process.

Some businesses get this, others don't.

For those that don't, it helps to explain the effects with real financial metrics. If your product is six months late and customers hate it because it's full of bugs, your process has failed to "bring value."


Exactly right. Building cowboy software under intense schedule pressure, cutting corners on best practices, is likely to give you both late releases and a bug backlog long enough to circle the Earth three times.

There will be more bugs, and bugs will be more time-consuming to fix.


And the people who could rectify the situation get the message that their opinions/judgement aren’t valued and give up or go to work someplace else.

I think perhaps people don’t realize that “you get what you measure” has deeper layers to it. You get actions, you get people who are comfortable with those actions, and then you get friction to change this new status quo.


Yes, that's how that question is normally used. But it's a lie; those things often bring value to the business.

If you are trying to answer it honestly, you will certainly get better results. But I am still not sure it won't bias you in a harmful direction.


> "We need to add unit tests" "That brings no value to the business!"

"Yes it does, it allows to make needed changes quickly and confident that they won't break other functionality."

etc.


This kind of thinking is how you run a good business into the ground chasing local maxima.

Ask Kodak


As other people mentioned, this is shortsighted. What you want to target is a balance between exploitation, 'bring value to the business', and process improvement, 'how do we simplify our processes to make it simpler to bring value to the business in the future'.

Both can be argued with concrete scenarios and data.


I don't understand this combination: 1) developers are promoted based on impact, 2) developers are given tasks by PMs, and 3) PMs choose tasks based on impact. This makes no sense to me. Either get rid of the PM role and put engineers in control of choosing impactful tasks, or change the promotion criteria.


There is always a backlog and a product manager. Roles! Not necessarily job titles. The question is who is (or are) managing the backlog and what criteria they use.

If you claim to have no backlog then you basically come to work each day fresh and choose something to work on - but don’t plan on continuing that tomorrow because to do so would mean you actually do have a backlog!

It might not be a Jira backlog but it exists in the minds of the devs at least.

So if we have a backlog then who should control it?

Good question and it depends on the company and the skills of the staff including the developers.

It also depends on how much you want developers doing business strategy, and to what degree they actually do it.

Leaving it all to a product owner can be OK if they do their job properly, understand the technical trade-offs, and really listen to both the technical and business sides. It can't just be chucking shit into the backlog.

So, like nosql vs. vim or tabs vs. microservices, it’s going to be another horses for courses, best tool for the job kind of decision. Each company needs to carefully consider their strategy for basically: deciding what to work on.


I liked the idea of "impact" prioritization of "work-items". (In fact I don't think this is something new, as I would assume many already prioritized their "backlog" based on how "important" that item was, although perhaps they didn't name that metric.)

However the author fails to answer one key question: what is "impact"?

I understand that a definitive answer can't be given easily, as it could range from increasing users/sales, to taking on another competitor head-on, to putting out a completely new product/feature, etc.; but I would have expected at least some hints about this.

Because without even a fuzzy definition of "impact", we just rename "metric" (or "KPI", or whatever the latest trend is) as "impact".

Based on his example of Apple and the introduction of the iPhone, I would assume that by "impact" he means: a revolutionary "product" (or in other cases a feature) that follows a road not taken by other competitors.


I'm eagerly awaiting the next project management fad to make writing software less enjoyable.


> I always wondered why it isn’t called “product debt” because product took the credit to get a feature faster and must pay back by investing the time to clean up. Technology is the bank that gave credit.

I would expect massive pushback trying to push this framing...


Really like the diagram. At first I read "frame" as in architecture, like a picture frame. Then it was "frame" as in being framed - blamed for why something was not delivered. But then ... if read just ... right ... it is both.


That is why you need engineering leadership that can keep the PM in check, and can veto PM decisions if they endanger the product in a way only engineers can understand, or want to understand.

And that engineering leadership needs to be independent enough from the PM, so that the PM cannot appoint redundant/agreeable scapegoats with no engineering weight.

Maybe you can still allow the PM to overrule engineering, but that should be a procedure involving a written and signed release form, so that the PM is directly responsible for their own decisions, without scapegoats.


We used to have a QA manager to tag team with, but they got rid of QA and we happily obliged. Once in a while you could rely on the operations/IT people for this, but we are trying to get rid of them as well.

So far this is really working out well for managers.


> if they endanger the product in a way only engineers can understand

If you have critical risks that only engineers can understand, then you have extremely poor engineering leadership. I can't think of any examples of critical risks I've encountered that couldn't be communicated concisely to non-technical stakeholders.


> If you have critical risks that only engineers can understand, then you have extremely poor engineering leadership.

Or extremely incompetent PMs. I've had one PM be confronted with the documentation of something, where it clearly said "don't do this, it's wrong and will ruin everything" (which, totally by coincidence, is what he had been told by someone on the team before). He still demanded it be done, because he had heard a talk at some conference where that was recommended. It took getting the CEO into the issue via back channels to clear that up.

Imho the percentage of PMs, POs etc that have no clue what they are doing is much, much higher than in any technical role. If they're good at bullshitting, they can usually talk their way out of being blamed for their failure and switch to a different company later.


But you’re not talking about a risk that only an engineer can understand. You’re talking about a risk that somebody else in the organisation is trying to hide from their own boss. Which is a completely different thing.


For every position, there is a chance of a bad hire. PMs are no exception. Every productive employee has a chance of losing motivation and starting to underperform. Again, PMs are no exception.

Your organization needs ways to detect and "fix" situations caused by a person not living up to their role.

The balance of power between product and engineering is one such circuit breaker.


A balance of power between engineering and product ownership is a terrible idea. The role of engineer doesn't necessarily require any understanding of the customer's needs. If you have an engineering team that understands them well, it's either because of competent product ownership, or it's a complete coincidence. Engineering should not have control over what is built, or have the power to override the product owner's decisions. Product owners should not have control over how things are built, or the ability to override engineering leadership's decisions. They're simply different areas of responsibility; there is no need at all for a balance of power between the two, and creating one for the purpose of avoiding bad decisions is just as likely to be used for avoiding good decisions, or decisions that conflict with somebody's personal taste or political objectives.


Deciding what to build, and for how long, indirectly dictates how things are built.

Deciding which engineers get career advancement also influences how the product is built.


It sounds like you’re describing your own experience of office politics more than anything relating to the roles of product and technical leadership.


Developers are not the audience of this article but its subject; it's written for C-level executives, the company's target market.


Actually, I don't think of an elephant when you tell me not to think of an elephant; I think of the word "elephant". Specifically: "huh, they said don't think of an elephant, now they are going to say 'see, you can't stop thinking of elephants', but I'm only thinking of the word and not of the object. Why is that?"


If a prioritized list of not-yet-implemented features and issues is unhelpfully called a “backlog”, what should it be called? An “impact queue”? An “improvement roadmap”? Is this just a matter of language?


I've managed a few projects, and IMO the state of the art in terms of wording is "ideas" for the long list of tasks that might be a good >idea< to maybe do sometime in the future, and "plans" for the shortlist of concrete activities that we >plan< to implement soon.

As for your somewhat sarcastic proposals [did I read that correctly?], I see the following problems:

- "queue" implies meaningful ordering, and you don't get to skip or totally rearrange a queue without a good reason in most situations that involve a queue

- "roadmap", according to the Cambridge dictionary, is a form of plan, which doesn't quite cut it for describing the can-be-postponed-without-consequences part of the backlog

Regarding the matter of language: I'm assuming [in this context] the presupposition that words are tools. And you can do a good job with crappy tools and vice versa. Yet there is a correlation between tool quality and the outcome. Example: the WIP limit in kanban - sometimes referred to as the "work in progress limit". Why the hell would you want to limit progress? But when you change it to "work in process", the whole concept suddenly makes a lot more sense. If everything is in process, nothing will ever get done, so putting a limit on that ensures output.
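
To make that concrete, here is a minimal, hypothetical sketch of a board that enforces a work-in-process limit (the class and names are made up for illustration, not taken from any kanban tool):

  # Hypothetical sketch of a kanban-style "work in process" limit (names made up).
  class Board:
      def __init__(self, wip_limit: int):
          self.wip_limit = wip_limit
          self.in_process: list[str] = []
          self.done: list[str] = []

      def start(self, task: str) -> bool:
          # Refuse to pull new work once the in-process column is full,
          # which forces the team to finish something before starting more.
          if len(self.in_process) >= self.wip_limit:
              return False
          self.in_process.append(task)
          return True

      def finish(self, task: str) -> None:
          self.in_process.remove(task)
          self.done.append(task)

  board = Board(wip_limit=3)
  for task in ["a", "b", "c", "d"]:
      print(task, board.start(task))   # "d" is rejected until something is finished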


All this discussion gets things wrong. Bring back negative feedback and pain for the stakeholders. The creator must take the bullet. It's what happens when executive slaves want the paycheck without the responsibility.

Failure to test organizations is another main problem. Can one person make a small change quickly? No? Then delete. Try again. Burn it down.


It seems like a very specific set of experiences has set this developer up to make broad statements that do not fit with my reality.

> Feeling in control is one of the main drivers for happiness.

I'm not sure this person has ever developed a product on their own. Feeling like you have some control is not the same thing as being in total control.

> The word backlog makes you think you are always behind finishing things. The frame says: Finishing means success. If we work from the backlog, we’ll have success. If we complete all the things I as a product owner have in my vision, we will have success.

> If a product fails in this frame, it is because we have not implemented the whole vision of the product manager.

No. It's a plan of things that the team might do. The backlog (and, if you're lucky, the prioritization) is ever-changing. Many of the planned features aren't even controlled by the product manager, but by other dependencies (teams, markets, etc.). This is why product owner and product manager end up being the same solitary role. It's efficient. You can pursue building ANYTHING FOREVER, so it doesn't even make sense to claim the backlog is a goal/finish line.

> In the backlog frame success is tied to implementing the items of the backlog.

No, that's your frame. It's not the paradigm. I like product managers. It's not my business to work harder, because I have an eventual productivity cadence (from collected metrics) which is what they have to work with and communicate to interested parties. Other than some light competition between peers, I'm not responsible for pushing to meet arbitrary goals. Everything is a negotiation. As a resource, I am what I am and we're all working together.


It's PR/link bait for Startup Coaching, don't read too much into it.





