Humans are probably a lot more like plants than distributed computing systems. Everyone likes a different environment. We bear different kinds of fruit. You can't just uproot a person randomly and move them around and expect everything to be the same. We put roots out into every available nook and cranny, bending around each other to get what we need. Throwing a new person into the mix changes everything.
Reasoning about humans as cogs is inevitably going to be a really awful way to get people and companies to be productive - most of it isn't that. However, humans, like plants, are able to survive and sometimes thrive in really terrible conditions _despite_ those conditions. This is what you see at Amazon; if you throw enough money at anything then you're going to get some kind of result. But, despite Amazon, Microsoft, and Google having billions and billions of dollars to throw at this stuff we mostly still grab for Open Source tools (and so do they). That's largely because the workflows the Open Source community use actually work and that's mostly because they get the human stuff right.
The whole "man hours - overhead = man joules" thing doesn't math and using that as a basis for any kind of analysis is going to lead to weird places. There are elements of human workflows that fit into these boxes - the really mundane work. Reasoning about mundane processes through this lens is maybe a good idea to the extent that mundane work can't be swapped out for more interesting work. The problem is that what we really want is to excel at the stuff that isn't mundane (the force multiplier shit) and that needs an entirely different framework that reasons about humans as humans rather than machines.
I think you miss the point of the article a bit. I believe what he's getting at is this: bottlenecks tend to arise in any growing organization, prohibiting it from moving as fast as every individual operating at full capacity, whatever that individual's capacity may be.
Intuitively, in smaller organizations these bottlenecks are fewer, and in an organization of 1, your individual capacity is only limited by your physical and mental constraints.
So yes, it's important to keep in mind that everyone's different but there is definitely excellent insight into how people work together effectively in large groups.
Every place where money is involved long-term becomes an ecosystem, with predators, prey, parasites, and a balancing ecology, reducing the possible effectiveness.
A company's maximal performance is limited by this internal ecosystem and its external ecosystem. There can be short-term gains as the system spins itself apart, but there is a peak antelope even for the most well-adapted lion.
> That we know some of the boundaries of organizational performance and their dynamics doesn’t excuse us from using our empathy to build humane organizations. Companies are groups of people being compensated for having to spend some of their finite lifetimes not being with their partners, children, pets, or super weird hobbies. They deserve to be members of organizations which honor that time by ensuring that their work has value and meaning. There is no mathematical model to guide us to that goal.
To me this reads more like "Even though humans can be modeled as distributed computing systems, we have to remember that they are humans because ethics" which is not my point. My point is that the human element is what makes people productive; it's not some tangent that we have to account for. IMO, the boundaries outlined here don't define the upper bound of meaningful work (the author's thesis) but rather the upper bound for mundane work if it isn't made to be meaningful. In the author's model, we'd say that superlinear productivity is possible _for individuals in isolation_ which definitely breaks the analysis since it's all about trying to hit that linear improvement mark. This doesn't strictly mean that their argument is "wrong" because they're reasoning about hours worked as their unit of measure but it does mean that it's not meaningful because a person's individual outputs can vary so greatly for a fixed number of hours worked.
The way I read it, he is starting from the "cogs" premise (which may appeal to some people in some situations), and ends up defending rather humane sounding "local" principles.
One possible tl;dr is:
Objective analysis and queuing theory lead to _rejecting_ all kinds of Taylorist cogs-and-factory-lines style organizational models and hypotheses in software work.
Regardless of the content, it strikes me that this author is a bit stuck in his own head? He uses rare words when common ones will do ("dyad"? What's wrong with "pair"?), passingly mentions obscure terms for no reason, and in general writes like the less the reader understands, the smarter the material was.
Don't mean to pile on (still better to write than not, glad it was linked, etc!) but I agree. It reads as language-fetishizing to me, too in love with little turns of phrase and eddies in the current. Noticed the same thing in Bo Burnham's Inside (lots of attention placed on aesthetic details), maybe it's a consequence of doing it all solo.
The older I get the more I admire people who are straightforward and cut through the gloss. If something is worth saying, it's worth saying simply.
From a practical perspective, I found myself context switching throughout the piece, looking up various bits to make sure I understood their use, but also finding myself disappointed when I came to the conclusion that a given piece of jargon didn't seem warranted over a simpler version of the phrase. I liked the topic so I pushed through, but it made the piece tougher to read than it needed to be.
Or it’s just a style that they write in, plainly inspired by the stuff they like to read.
We can have a discussion about whether it’s good delivery of that style, or whether the style itself is accessible, but I’m not sure what we gain by trying to analyze someone’s personality through it.
I didn't mean it as an analysis of his personality (how can you do that from a blog post?), it's just shorthand for "I dislike this writing style in this specific way".
Disagree - writing advice has skewed too much towards pg-style "write for a 5th grade reading level", making everything sound plain and bland. While that's an appropriate style for clear communication, sometimes you just want to flex your literary muscles, and English offers so many wonderful ways of self-expression that are slowly being lost to boiled-down business writing.
Coda's technical chops are undisputed AND he's a well-known technical communicator; don't cramp his style, there are few enough of him as it is.
Eh, as the sibling comment says, it's about the ROI of the new word. If you're defining something with it that would otherwise take many words to explain, or if you want to introduce a new idea, or otherwise use it as shorthand, great. If you're using a new word only once, then why even use it?
Is ROI in this case a measure of comprehension? What about the enjoyment and playfulness of metaphor and expressive language - do these have a negative ROI if they aren't sufficiently terse?
How far do you take efficiency as a measure of communication quality?
Did Kevin have it right when he asks "why waste time say lot word when few word do trick?"[0]
I find there is a time and a place for this in your work, though. You don't have to use obscure words nonstop; use them where people can learn them or figure them out just from context. Sometimes that's not possible, but it usually is. If you think it isn't, maybe you've got a new writing skill to learn and improve on.
Writing for the reader is important if you want your work to be read.
Write to think all you want but don't expect other people to read your unedited drafts.
Like how all tech blogs nowadays write "leverage" instead of "use".
"I leveraged a hammer on these nails" sounds hilariously stupid, but it's quite literally using the word leverage for an action that has leverage, unlike tech blogs.
English is basically Latin + French + a few other sprinkles. These expressions are pretty trivial to pick up, especially if you have read much history, logic, etc.
For example, his phrase cum hoc, ergo propter hoc is a pun on post hoc, ergo propter hoc, which is a well-known logical fallacy.
Well, it's more German than either of those, especially wrt structure, conjugation, and helping verbs. But we do take a huge amount of vocabulary from Latin and the romance languages, especially in the realms of the arts and science.
English is a Germanic language that was injected with a large amount of French vocabulary. Latin and Greek are typically used in very specific jargons, such as philosophy or medicine, with some phrases escaping confinement and reaching a larger population.
English being a Germanic language is why the whole "no prepositions at the end of sentences" thing never made sense. That was a Latin rule. But Germanic languages have a property of augmenting verbs using prepositions. In German itself these are called "separable verbs".
Ich stehe auf. I stand up.
"Up" is not a preposition in this sentence. It's an integral part of the verb. That is, the verb is, in its entirety, "aufstehen" or "to stand up".
Meanwhile, this same phrase in Spanish uses reflexivity: "Me levanto."
Good thing you left Greek off the list, otherwise you'd never get anywhere in statistical reasoning.
ceteris paribus, technical terms are obscure to people outside the field. Using them well is a nice way to bring people into the field, and to signal that you mean the term in a very specific context.
And that paragraph is one of the worst uses of "ceteris paribus" I've seen. For one, I'm not even talking about economics. And only tangentially about holding some variables constant while changing others.
Yes. Jargons exist for a reason. They develop because people working together come to a common agreement on specific meanings for specific terms, regardless of common usage. And it typically leads to more precise communication. "Legalese" is a great example.
We actually had a course at our uni to stop us writing in this style. The professors had become sick of reading white papers that said something straightforward in this strange academic language.
I'd be willing to bet this person is very well educated or has maybe worked in academia.
I'm with you on "couple" potentially not being exactly two, though I tend to default to using "few" in such cases. But I can't sign up for "pair" not being exactly two.
It’s a term I wasn’t personally familiar with until a few months ago. But my introduction was via my 8-year-old son! They were using it to explain concepts in class at school. Now I’m wondering if there’s some larger trend/influence that has introduced it into a collective vocabulary that I’ve missed?
I know this has been discussed and rediscussed, but the notion of thinking about a human organization's black boxes trading work with each other the same way we think about a distributed system's black boxes trading work and communication is, I think, a huge insight for helping software engineers understand the complexity of human organizations.
You want to avoid single points of failure, optimize bottlenecks, build in redundancy, and so on, in similar ways. It's a great insight.
Increasing parallelism and optimising bottlenecks is the way to performance.
Put another way: be careful how much redundancy you add, as it is likely to increase complexity and reduce parallelism.
As an aside this is one of the major goals of Agile. Having smaller tasks increases the potential for parallelism. As well as the more obvious ability to change direction.
Agile's main innovation is organizing projects as a (mostly) always-shippable series of iterations (i.e. "what do you want next"). But the actual "how", with cards, points, sprints, boards, and workstreams, leaves a lot to be desired if parallelism is the goal. Communication costs are really high when everyone is micro-siloed; hand-offs, which are serial, are costly and involve a lot of relearning the same context; and there is more integration work, which is also serial.
I think it's possible to have good parallelism within Agile but I don't think it's the Agile that makes it happen.
This is the first time I'm seeing it. If resubmissions were forbidden I wouldn't have ever known of its existence. I'm not sure what the solution is, but I'd like to think removing it from my cognitive landscape for the sake of a forum host (a stranger) isn't it.
The past button, like you said, doesn't always work like that. And I know how to search. I'm wondering why there isn't a button when people always post comments to achieve the same thing a button could.
When something gets reposted, it's fairly common for somebody to comment with previous discussions. I hadn't seen this submission before either, but I'd also not seen the discussion OP linked.
"The work capacity of an organization scales, at most, linearly as new members are added. "
WTF? This is a bad way to think about the 'division of labour'.
One man cannot ever get over a 10 foot wall.
Two men together can absolutely do it.
'Work' is not the measurement, 'productivity' is.
Even the fleeting thinker, who works only 1 day a week, but shows up every day, may be absolutely critical to making things work. Paid for that input, when it matters, not 'daily wages'.
> One man cannot ever get over a 10 foot wall.
> Two men together can absolutely do it.
Sure. But the counter example (which is what I believe the article is arguing) is that one woman can make a baby in 9 months, but 9 women cannot together make a baby in one month.
Whether an organisation's "work capacity" is bottlenecked by things like gestation periods which cannot be circumvented, or things like 10 foot walls which can be cooperatively circumvented - is a great question though.
I think you bring up a great point: creativity, ideation, and making good decisions. More people means more ideas being bounced around, which could lead any one person to a solution that would not have happened otherwise.
But I think there's a cap to that as well: if you bring 10 people, do they get over the 10 foot wall faster than 2 people would?
No they probably don't.
So from 1 to 2 you've got great added productivity, but at 10 you're majorly wasting resources.
That's where the other advice would kick in, getting over that wall won't get any faster, you can't scale beyond a point, unless you use other strategies:
1. Build better tools/frameworks, like say have a team build a ladder, now 1 person can get over the wall as fast as 2 people could prior.
2. Diversify your offering, have the other 8 work on other stuff that don't depend on getting over this wall.
I think those are all the ones mentioned in the article, but it'd be interesting to see if there could be more.
Now, maybe it takes 10 people to think about the problem of how to get over the 10 foot wall repeatedly and efficiently, at lower cost, for example to invent a ladder, assuming ladders didn't exist. I'll give you that, this is an interesting take.
But did it really take 10 people, or is it more that only 1 out of 10 people will come up with this great solution? And if so, you could find yourself adding 100 people with no one thinking of the solution, while a competitor with only a single person might think of building a ladder. So what's the takeaway for scaling here?
I think the point is that these concerns are largely orthogonal. Work (equiv. labor), is the time/effort that your employees are devoting to your business processes. That work can be more or less productive as you say, or some part of the work could be devoted to making other kinds of work more productive. However it happens to be apportioned, there is some amount of time/effort that that work will take. The questions discussed in the essay are not about how to decide how to apportion the work. Instead, it's about issues that arise from coordinating that (predetermined amount of) work among N workers in a company. That's where you can have e.g. contention for shared resources or critical sequential segments.
What you're saying is like looking at Amdahl's law (mentioned in the essay) and saying "WTF? This is a bad way to think about parallelism, because what if I could just come up with a better algorithm?" Amdahl's law is not about choosing the right algorithm, it's about how much time you can save by parallelizing the algorithm you have. Clearly choosing the right algorithm is important, but it is a different kind of question.
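To put a rough number on it: Amdahl's law says the speedup from n workers is 1 / ((1 - p) + p/n), where p is the fraction of the work that can actually be done in parallel. A tiny sketch in Python (the 90% figure is made up purely for illustration):

```python
def amdahl_speedup(p, n):
    """Speedup from n workers when a fraction p of the work is parallelizable."""
    return 1.0 / ((1.0 - p) + p / n)

# Illustration only: even if 90% of the work parallelizes perfectly,
# the remaining serial 10% caps the speedup at 10x, no matter the headcount.
for n in (1, 2, 5, 10, 100, 1000):
    print(f"{n:>5} workers -> {amdahl_speedup(0.9, n):.2f}x")
# 1 -> 1.00x, 10 -> 5.26x, 100 -> 9.17x, 1000 -> 9.91x
```

Choosing a better algorithm changes p; adding workers only changes n, and the serial fraction decides how far that gets you.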
It's worse even than that. "At most linearly" is someone's idea of subtle irony. You only get linearity if you discover the existing team is less capable than the newly added members, or you're asleep and this is a dream.
If the value of new people diminishes logarithmically, or much more likely, at the square root of the number of people, you must tackle tasks that a smaller team could not. Otherwise you'll be out-competed by a handful of smaller competitors.
Yes, you can do things a smaller team couldn't do, but you don't really have a choice, if you want to survive.
This is completely the wrong way to think about division of labour.
We started to figure this out 300 years ago for god's sake.
It's not helpful to think about our tasks as a series of bits of work to be accomplished that we hire spigot-workers to do. White collar work is mostly not labour in that sense.
Thinking about your organization like a CPU doing calculations is utterly the wrong analogy.
More like a band playing a song - most tasks have some level of optimal participation you don't want to be below or over, and/or it might be incrementally harder to improve quality depending on variables. And 'talent' usually matters more than anything.
We started to figure this out 12,000+ years ago at the dawn of civilization, when specializations finally show up in the archeological records.
If you want to build a cart you need a woodworker and a couple of blacksmiths feeding your cartwright the proper materials. You’re right that five blacksmiths won’t get the cart built faster, but there’s a whole world of items, including in software, that are composites of anywhere from a handful to a hundred specialties.
I don’t feel like you’ve countered my theory about the need for higher order creation. Someone somewhere had to look at how many musical instruments there were in the world and think, “composer” and “conductor”, which not only gave you a job but suddenly put a bunch of musicians to work.
> You only get linearity if you discover the existing team is less capable than the newly added members
Could you explain your reasoning? This doesn't seem right to me. If you have linearity, this means that every new individual is exactly as productive as all other individuals.
If you get twice as much work done by 10 people as by 5, you need to think about what the new people brought to the table, or what the old ones don't bring.
If the team didn’t have a facilitator, for instance, some team members may not be coordinating, causing issues with finishing tasks.
Because despite having a combinatorial increase in communication paths, you still got twice as much done? Are you sure?
You have to ask these questions because at worst something bad is happening. At best someone new brought in new techniques that work better than your old ones. You need the answer in either case.
This doesn't make sense to me. Adding more people adds communication overhead, so you would get sub-linear returns. I don't see how you can even expect more than linear return.
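A toy model of why (the per-channel cost is invented, just to show the shape of the curve): if everyone coordinates with everyone, pairwise channels grow as n(n-1)/2, so the coordination tax grows quadratically while raw capacity only grows linearly.

```python
def channels(n):
    """Pairwise communication channels in a fully connected team of n people."""
    return n * (n - 1) // 2

def effective_capacity(n, tax_per_channel=0.04):
    """Toy model: each person adds 1 unit of capacity, each channel costs a
    coordination tax. The 0.04 figure is invented purely for illustration."""
    return max(0.0, n - tax_per_channel * channels(n))

for n in (2, 5, 10, 20, 50):
    print(f"{n:>3} people, {channels(n):>4} channels, "
          f"effective capacity ~{effective_capacity(n):.1f}")
# 2 people -> 1.96, 10 -> 8.2, 20 -> 12.4, 50 -> 1.0: the quadratic tax
# eventually eats the linear gain unless you restructure who talks to whom.
```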
Self reply: I re-read the article with a more generous frame of mind, and they say this:
"The ceaseless pursuit of force multipliers is the only possible route to superlinear productivity improvements as an organization grows."
If we interpret this as saying that 'as we add another person, they should do something different to the previous person', then my complaint is addressed.
The rest of the article assumes you don't do that.
An old woman cornered me and a friend outside a few years back and ranted about synergy. I remember her wrinkled face and the manic glint in her eyes. Never going to lose the association between her and that word.
which discusses sublinear and superlinear scaling of complex networks.
The first law, “sublinear scaling,” is for systems that deliver resources. It means a city with a large population needs only ~80% as many roads, power lines, and gas stations per person as a city half its size. The second, “SUPERLINEAR scaling,” applies to outputs of socioeconomic activity. It means a large city produces ~120% as much wealth, patents, crime, pollution, and disease per person as a city half its size.
“Remarkably, these two growth rates, 0.8 and 1.2, are showing up over and over again in literally dozens of city-related contexts and applications,” wrote Complexity Science Hub Vienna in a press release. “However, so far it is not really understood where these numbers come from.”
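If I'm reading the model right (quantity proportional to population^beta), the per-capita effect of doubling a city is 2^(beta - 1), which is a bit milder than the headline percentages. Quick arithmetic, nothing beyond the quoted exponents:

```python
# Urban scaling: quantity Y ~ N**beta, so per-capita Y/N ~ N**(beta - 1).
# Doubling the population N multiplies the per-capita quantity by 2**(beta - 1).
for label, beta in (("infrastructure (sublinear)", 0.8),
                    ("socioeconomic output (superlinear)", 1.2)):
    total = 2 ** beta
    per_capita = 2 ** (beta - 1)
    print(f"{label}: x{total:.2f} total, x{per_capita:.2f} per person on doubling")
# infrastructure:       ~1.74x total, ~0.87x per person (fewer roads per capita)
# socioeconomic output: ~2.30x total, ~1.15x per person (more patents, wealth,
#                       crime, and disease per capita)
```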
> This analysis assumes that a unit of work is fungible.
This is a fair assumption when we're talking about organizing work within homogeneous teams that have (and ultimately are defined by) a single shared work stream. Every individual person on the team may have things they're better at, more historical knowledge on, or a warm cache, but it's generally assumed that any team member should be able to pick up any task.
A good manager/scrum master/whatever will end up organizing work to take those things into account, but it's an optimization, not something fundamental.
That's true. But unlike your correct framing, the article goes out of its way to claim universality rather than describe the conditions where it's approximately true.
"... can we determine the supervenience of some set of factors on organizational performance, not just in a particular context but across all possible organizations? That is, are there necessary, a priori truths of organizational performance? [..] As it happens, there are."
I think they, and we, know that 'all possible organizations' does not mean what it says, and that's my objection. It's not a big deal, just inexact writing, or possibly inexact thinking if they mean it, which I doubt. They mean 'organizations sufficiently large to be organized into units with fungible roles'.
It also ignores ingenuity. One person can outdo a hundred with the right idea. In fact, I was in a 10-way meeting today, the first time in years, and was reminded how superficial the discussion has to be as the number of people goes up, simply because of coordination.
> Successful new products can be incrementally integrated with the existing products where it makes sense, and tooling, libraries, and frameworks can be developed by force multiplier teams to reduce both time-to-market of new products and the carrying costs of existing products
Yeah... This is why communication is often needed in a full mesh.
Look at major companies (ahem, one in particular) and see if you think their products make sense in their integration.
Like did chat app X team talk to chat app Y team? Likely not.
Larger orgs move slower because they're trying to build coherent larger wholes. And that's harder.
It's not hard to launch 10 chat apps a year if it doesn't have to fit into any coherent whole.
Like do you think DigitalOcean just needs to hire disconnected teams, and will end up with something that isn't a frankenmonster crying "kill me"?
Their VM service works fine. Adding another that works fine is not hard.
Building all services needed for a major customer to "go cloud"... you won't get there that way.
(Not saying DO is doing anything wrong. Quite the opposite. I'm saying they can't just hire 100 parallel teams and end up being AWS)
It is actually a fundamental insight. The same principle explains why the cortex has developed functional areas, and the advantages of myelination.
It also underlies Dunbar's rule on group sizes and how organizations scale.
The point is exactly as you say, everyone does NOT need to speak to everyone else-- but you then have to structure the communication channels. This is a different challenge in a 15 person org than a 3 person org. Let alone a 150 person org.
If you've ever been on a team which went from 5 to 8 people, you've seen this play out. God help you if the team crosses 15 people and you don't realize this triggers a phase shift.
> Therefore, our only hope for superlinear productivity lies in changing the task which is being executed.
This is the key insight for me. If you're adding people to a team but you're not putting them in a position to affect what work is being done, you will be stuck with linear scaling as the ceiling.
This helps me explain an intuitive belief that I've held for a long time: you should have a say over what you're doing. If I can't influence what work is being done and how it's being done, I don't feel like I can contribute much.
I was hoping the rest of the article would talk about how to build an organization geared towards empowering employees, but it seems to focus on strategies for dividing up work and suggesting that companies should just become more companies under one umbrella to try and reach the linear scale ceiling, blegh!
I periodically reread this post to remind myself of the communication overhead involved when scaling an org. People love to model headcount and work capacity linearly but it just never works out that way (and Coda explains why).
It's not just coherence and contention costs (though I hadn't considered them before, and they definitely exist).
The real problem is that brain-to-brain transfer of information is really hard. It is both slow and lossy. And the bigger the organization, the more times information has to transfer from one person to another.
This article is an unholy alliance of buzzwords and business speak with little non-trivial content. Human organizations don't function like distributed computers just because one assumes so; they don't simply do static work.
The most glaring problem is that there's no quantity/quality distinction in the piece. The core theory is:
"The work capacity of an organization scales, at most, linearly as new members are added"
This is trivially untrue because larger groups enable new complexity to emerge. 10 people can't build you CERN, even if you give them endless amounts of time. They don't hold enough information. A city isn't merely a village of 150 people multiplied by a thousand, and so forth. Quantity has a quality of its own and new behavior emerges from changes in size itself.
Factoring work into little modules as the article suggests is the same kind of error behind microservice advocacy. The interesting thing isn't the performance of parts of the system or the sum of its parts, but only the organization as a whole, including the dynamics between its parts, which in any complex system aren't linear or even predictable.
We can discuss whether we agree or disagree, but something about this post is non-debatable: any change in the way we work is an attempt to innovate on a 200,000-year-old "business"; we should take this more seriously.
Maybe remote work is a first step? We have been working side by side (at the same place) for a long time, and this change is the first one to impact one of the most important metrics: productivity.
Software is "nice" because I can flex muscles and achieve things independent of "work." Many endevours require groups of humans to do; it is tablestakes.
Writing is the same, and hence why I like it so much.
> Keep the work parallel, the groups small, and the resources local.
I think about this a lot in software organizations. Like most takes, I'm probably wrong.
Keeping work parallel is, well, hard, and context dependent. Not worth commenting on. Keeping groups small is something we have plenty of prior art on; as Bezos said, two pizza teams, whatever. I may disagree with his exact number, and he may disagree with how much pizza I can eat in one sitting, but the gist is there.
Keeping Resources Local is the most interesting one.
The biggest productivity sinks I observe in my own line of work, and in those around me, are almost always proximate to "dependency on this thing we don't control". Here's an example: everything has to be run in our one single Kubernetes cluster. That cluster has some rules about how things are deployed, to avoid tragedies of the commons, like isolated namespaces and process security levels and whathaveyou.
What's interesting, to me, is how this could so easily and correctly be viewed as either a productivity gain or sink. On the one hand: Teams don't have to create and manage their own kubernetes clusters. Big win, that sucks to manage. On the other hand: we do have to integrate and hold a dependency on the Kubernetes Team or the DevOps team for X, Y, Z of this feature development. That's a negative.
I think it turns out being related to that next Principle From Beyond Space And Time: you can develop force multipliers, but oftentimes the development and maintenance of those force multipliers becomes a non-local resource that every team suddenly gains a dependency on.
It's really easy to see the upside of a shared cluster. But let's say I'm on a team that just needs to get a static site hosted on the internet. Well, rules are rules, we use k8s; for my team and my work, this "asset" is actually a net productivity loss. We never would have hosted our own cluster in the first place; maybe we'd use fly.io or whatever. You can argue that the standardization or the single-pane-of-glass security controls or whatever ultimately make it a net organizational win, but I think that's diving too deep into the specifics of the example: the point is that these force multipliers can actually become force dividers over time, as their complexity increases to account for a wider variety of complex use-cases, and left behind are the (significant number of) teams still on square 1 or 2.
I don't want to understate how big a problem I think this is in modern software engineering; I think this is a big contributor for why software teams oftentimes trend toward stagnation. The buzzword is Complexity, but more specifically: every force multiplier is an abstraction layer which hides complexity, poorly, then you hire on new engineers or need to train up in a part of the system you've never worked in, and the abstraction disappears.
I'm not saying any of this to suddenly present a solution. I don't know. Some of the things I think about though:
I know this is oftentimes perceived as a sound bite, but: I really think the best engineers are those with a really strong allergy to complexity. And, unfortunately, in most organizations this isn't rewarded. We reward "woah, that's an awesome, complex system you designed that accounts for fifty-six different use-cases, exceeds expectations". We reward "woah, you were absolutely on top of that incident, you identified that fix super quick". We don't reward "woah, that comment you left on that other guy's RFC about how we shouldn't implement this was extremely well-reasoned, exceeds expectations". We don't reward "that thing you built hasn't had an incident in two years".
Every organization is different, because every engineer is different. Some organizations are better than others. But I think the average isn't allergic enough; and I think it's not even close.
Second; I think Bezos' model of "everything should be an API" is extremely good, and roughly zero companies take it to the extreme it should be. Teams should communicate with each other the same way they communicate with customers, in every meaningful way. Instead, what we oftentimes end up with is: oh, you've got an API that you want on api.mycompany.com, you'll have to use the API Gateway that the Service Platform team developed, then use this npm library we published to expose a standard API interface, and register your schemas with...
At a deep enough level, this guidance will start breaking down. For example, with apis: You probably do want your api on api.mycompany.com, or at least mycompany.com; so there's at least some coordination with the team that runs the domain name, and some kind of gateway may be necessary. For websites, ultimately there's one bundle to serve, and as much cohesion in design patterns between pages is a Good Thing. But, similar to the allergy to complexity, I think our industry-wide allergy to inter-team dependency isn't strong enough.
This can oftentimes surface in the very tactile details of how things are developed. Let's say your goal is a single centralized documentation page for the API, where routes are managed by twenty different teams depending on the product. This is a massive coordination problem. First, we could ask: does it need to be single and centralized? Or could each team, which corresponds well to a product, get their own product.api.mycompany.com domain name and run everything themselves? The prevailing wisdom is "nah, it needs to be centralized"; and there's that lack of an allergy surfacing again.
But ok; We The Company say every team needs to expose a GraphQL API or whatever, schemas on some HTTP-accessible path, register with the load balancer, register with our schema repository, and we've basically slid down the hill toward making this coordination problem as difficult as it possibly could be.
What if, instead, we did something like: the team which runs this documentation platform runs jobs which regularly scan every repository in our Github org, looking for a landmark file that says "I'm a service, register me", and anything that has one gets registered automatically? That would help reduce coordination a ton! Now, what if that landmark file was, say, just a GraphQL schema file? "As long as you put your GraphQL schema file anywhere in your repository, we'll find it and you'll get documentation instantly."
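To make the landmark-file idea concrete, here's roughly the kind of job I have in mind, sketched in Python against the GitHub REST API. Everything specific is an assumption for illustration: the "mycompany" org, the schema.graphql naming convention, checking only the repo root (a real job would walk the tree or use code search), and the print standing in for whatever "register me" actually does.

```python
import os
import requests

GITHUB_API = "https://api.github.com"
ORG = "mycompany"            # hypothetical org name
LANDMARK = "schema.graphql"  # hypothetical landmark-file convention

def org_repos(session):
    """Yield the full name of every repository in the org (paginated)."""
    page = 1
    while True:
        resp = session.get(f"{GITHUB_API}/orgs/{ORG}/repos",
                           params={"per_page": 100, "page": page})
        resp.raise_for_status()
        repos = resp.json()
        if not repos:
            return
        for repo in repos:
            yield repo["full_name"]
        page += 1

def has_landmark(session, full_name):
    """True if the repo has the landmark file at its root."""
    resp = session.get(f"{GITHUB_API}/repos/{full_name}/contents/{LANDMARK}")
    return resp.status_code == 200

if __name__ == "__main__":
    session = requests.Session()
    session.headers["Authorization"] = f"token {os.environ['GITHUB_TOKEN']}"
    for full_name in org_repos(session):
        if has_landmark(session, full_name):
            # Stand-in for pushing the repo and its schema into the docs portal.
            print(f"registering {full_name} with the documentation portal")
```

The point isn't the specific API; it's that discovery runs on the docs team's side, so the only thing a product team has to do is keep a schema file in their repo.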
I think one of the core things that every member of engineering leadership needs to do is: keep an inventory of the "typical dependencies" an average team in your company has on other teams. API gateways, kubernetes or AWS infrastructure, logging pipelines, metrics dashboards, alerting, downstream services, libraries; if a team generally has to interface in ANY capacity with something another team owns, write it down. Regularly revisit that list; query your engineers to see if it's grown or shrunk; then make an active effort to shrink the size of the list, or the size of every dependency on it. In large orgs, there is quite possibly no more impactful way to increase the productivity of your teams than right-sizing this list (usually down).
"informs how much shit we can expect to fit in a bag"
"the floor is lava"
"chattier than others, ... communication is essential"
"punt on coherence and just ride dirty"
"put their eggs in a single basket"
"blistering number of new products"
"super weird hobbies"
Scanning the essay, most of it is pretty tame. There are a few uncommon words in an essay long enough for statistically some of it to be uncommon. What exactly is the objection?
Everything about this article is just nonsense, from the (meaningless) title to the needless $40 words, to the made-up meta-meta-concepts that have zero practical use.
I could almost imagine the intolerable sniffles of Slavoj Zizek as I tried to read.
Having read the essay the title is a pun on two different meanings of the word. Work - what people do, is actually just work - the theoretical capacity of a distributed system. This is roughly the point of his essay - treat the organisation of people as you would to optimise a distributed system.
On an unrelated note I see that inflation is hitting $5 words particularly hard :)
In the very beginning, "emic" and "etic" are introduced (and defined in plain English inline). They are never used again. The author could have just used the plain English and got on with it.
Reading the definitions, those two words do capture exactly what is being described, while plain English would be much more verbose to do the same. I had not encountered them before, but they seem like useful words in this context.