The McNamara fallacy: Measurement is not understanding (mcnamarafallacy.com)
365 points by wenc on Feb 1, 2022 | 217 comments



Quick story: I was the CFO for a company that sold to a private equity group (PEG). I took over as the CEO as the founders retired, leaving me to deal with the PEG. It quickly became apparent that the PEG managers looked at everything through the lens of an Excel spreadsheet. These guys were brilliant attorneys and analysts but lacked experience building businesses and managing teams. Ultimately, they couldn’t add much value in terms of operations or strategy, but they were great at financial modeling/quantitative analysis and forcing us to justify expenses. That may sound good at first—eliminating wasteful spending—but it ultimately led to the gradual erosion of the company culture and employee loyalty. It’s easy to cut benefits and pay given that many workers lack the leverage to do anything about it, while it’s much harder to reduce hard costs like materials and equipment. That meant employees just kept getting squeezed, and it was surprisingly difficult to quantify the impact that terminating an employee or cutting benefits would have on morale/culture/performance.

The moral of the story is that people with analyst mindsets play an essential role in our economy, but sometimes giving those people power over large organizations can have disastrous consequences. There truly is a disconnect between measurement and understanding.


> The moral of the story is that people with analyst mindsets play an essential role in our economy, but sometimes giving those people power over large organizations can have disastrous consequences.

I'm gonna cosplay an "analyst mindset":

1. Need to measure costs and benefits of slashing benefits/pay.

2. A benefit-- slashing benefits/pay allows us to hit some obvious financial goal

3. A cost-- Uh oh, I don't yet know how to reliably measure any of the costs.

4. Good analysts don't take action without measuring.

5. I'm a good analyst.

Conclusion: I cannot take the action of slashing benefits/pay

The only way to make it work is to add a step "3b: cherry pick metrics for the costs of slashing benefits/pay such that the phony metrics justify the decision management already wants to make of slashing benefits/pay." But now we've shifted from "analyst mentality" to "the mindset of the little Beetle-like bureaucrats described by George Orwell in 1984."


Having dealt with 2 PE exits, rarely is the proposal something as upfront and silly as slashing everyone's pay. That probably does happen for a company being restructured in the red, but the more subtle actions tend to be things like:

* Comp bands are now targeting p50 averages rather than p75 or top of market. So you can't close new hires that are going to competitors. And you can't give raises to your top performers

* The health benefits are less generous when renegotiated for the following year

* T&E that would have been approved (granted, some of it maybe shouldn't have been, but importantly some for top salespeople that should have been) is no longer approvable. So your top salespeople leave. Or similarly the accelerators or other comp measures are changed in ways that might look good on paper but rub top salespeople the wrong way.

* Head-count isn't replaced, so teams have to take on more work

* Perks like conference attendance or hardware upgrades, which arguably aren't perks but investments in your team's productivity, are cut/limited


>> Having dealt with 2 PE exits, rarely is the proposal something as upfront and silly as slashing everyone's pay.

Agreed, but a gradual erosion can occur over the course of several years. PEGs and the operating company's management team have massive incentives to hit their growth metrics. If decreasing 401K contributions helps management hit their EBITDA target, many would argue that they should do exactly that. However, the impact of these decisions accumulates over time and can eventually derail a company's performance.


The reality is that this kind of stupidity is built into accounting systems. It's not even the people with the spreadsheets. They're doing what they've been taught to do. It just happens to be completely wrong-headed - if you're interested in tangible long term growth and not quarterly plunder which moves money around without seeding genuine real value.

Which is why companies can "grow" in accounting terms while actually being hollowed out as going concerns with a future.

Likewise with the planetary economy, which appears to be growing financially while in ecological terms it's almost literally eating its own seed corn.

A better approach needs a revolution in accounting. Current accounting practices are built on so many false assumptions they cannot possibly produce lasting long term opportunity, except for a tiny minority of exceptionally and unhealthily privileged people.


Just look at Intel to see a company that was managed by the accounting people. Great numbers for a few years, large dividend, etc, but now they have to spend billions and billions to put the company back on track and give it a future.


Disagree. Intel has had continuous dividends for at least 30 years.

Few giant American companies have been as consistently successful as Intel. I can think of only Microsoft, one or two energy conglomerates like ExxonMobil, a telecom or two like Verizon, and maybe a bank like Goldman Sachs.


Abbott Labs, CR Bard, Deere & Co.

Something about staying in your lane while increasing opportunities in that lane seems to be a good long-term formula.


Yep, Bard is part of BD, which is a large beast. Abbott/Abbvie always has something going for them. UnitedHealth has done well for years and years as it has grown. There might also be a stable industrial I am forgetting like a chemical company or 3M.

But overall, giant companies with 30 year runs of knockout growth and/or dividends like Intel are Cinderella stories. Google will be the next one to qualify in a few years I suppose.


Sorry, I should have been more precise, but I was also counting the very large stock buyback that came to an end in early 2021.


It can be more insidious than that. For example, as time goes on more and more of the engineering might be shifted from product development to contracting. In this way you see revenue growth without realising that there is a hard limit and you're throwing away the company's future.


Cash is still King. The more cash on hand, the easier it is for the business to survive in an economic downturn, and the more dividends and stock buybacks can happen for the investors.

Software engineers from a CFO perspective aren't really any different than plumbers or carpenters. It's just labor. Getting a cheaper rate on labor is far more beneficial to the company than, say, making sure its employees are happy.

If the company could cut wages 50% across the board and then give the executives 20% raises for saving 50% in labor, they would do it in a heartbeat.


This line of thinking is wrong though, because software developers are more like writers than plumbers. You can fire the writing staff at a sitcom and hire cheaper writers but good luck, it will probably fail.


The issue is that the bean counters count beans and not creativity.

Can we think of a way to connect an individual’s creativity to money flow?

As an engineer I am most productive when I solve a business need without writing code. How can I associate my contributions with measurable profit?


When you write an API, add "dollars_saved", for the amount of dollars your API is currently saving the company. CFOs love that quantitative shit.
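A tongue-in-cheek sketch of what that might look like (the field name, hours, and hourly rate are all made up for illustration):

```python
import json

def build_response(payload, hours_saved_per_month, hourly_rate=85.0):
    """Attach a back-of-the-envelope 'dollars_saved' figure to an API response.

    The figure is pure theater: hours saved times an assumed blended hourly
    rate, rounded to cents so it looks authoritative on a spreadsheet.
    """
    return {
        "data": payload,
        "dollars_saved": round(hours_saved_per_month * hourly_rate, 2),
    }

# A report endpoint that claims to save 40 person-hours a month.
resp = build_response({"report": "q3"}, hours_saved_per_month=40)
print(json.dumps(resp))
```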


Y'all are looking for quantitative Quality measures. Good luck. If you aren't sitting on top of Spend informed by actual company history, you're hosed, and most places are hush hush af about that.


Code and documentation are debts. Features divided by lines of code is a good metric.
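For illustration, the metric as stated, computed naively (a sketch; the function name is made up), along with the obvious way it gets gamed once it becomes a target:

```python
def features_per_loc(feature_count, lines_of_code):
    """The proposed metric: features shipped divided by lines of code."""
    if lines_of_code <= 0:
        raise ValueError("lines_of_code must be positive")
    return feature_count / lines_of_code

# The failure mode once this becomes a target: the same three features
# crammed into denser, less readable code scores ten times higher.
readable = features_per_loc(3, 300)  # 0.01
golfed = features_per_loc(3, 30)     # 0.1
print(golfed > readable)  # True
```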


Dear god.

Once that metric becomes anything close to a target, the monstrosities it would bring are horrific!


The brain drain starts when the turds start to smell. Those most able to go will be the first to go. Things decline. More leave, and so on.


If your primary output is software then maybe you're right. But in most companies software enables more efficient output, usually in the form of products or services. A bank uses software, but its profit is based upon banking... etc.

It's why most Fortune 500 companies run Java, so they can easily replace one person with another.


> Getting a cheaper rate on labor is far more beneficial to the company than say making sure it's employees are happy.

Whoa, citation needed. I’m fairly certain this isn’t true unless you are optimizing for just the next month.


Companies exist for the stockholders, not the employees or the public at large.


that's a myth. it's the opposite that is true.


It's hard to argue against it when I see companies becoming more and more consumer hostile (cheap products that break in a year or after warranty is up, slow buggy software, incomplete game releases, ads crammed into every open space, FANG cornering their markets, dark patterns, etc.)


> Software engineers from a CFO perspective aren't any really different than plumbers or carpenters

Plumbing systems are extensively documented and well-understood. The equivalent situation in plumbing would be that the entire layout of the city's sewage system (or similarly, the construction plans for the company's headquarters) is only kept inside the heads of two senior developers, likely because the Powers That Be have decided that keeping accurate documentation is not worth the cost. Preventative maintenance and quality control are impossible, because nobody knows which pipes have been laid first and are the most at-risk from leaking.

Point being: everything is just labour. That doesn't mean software "engineering" should be allowed to fall so far below the baseline we expect from other engineering professions.


What were the longer term changes you saw and their outcomes for the health of the teams and companies?


If you are a good analyst with a bonus tied to quarterly profit increases, can you still be a good analyst?

Do you sacrifice your own monetary benefit to save a company? But then what about your family?

I don’t think the problem is that analysts aren’t smart enough or don’t care enough. But institutionalized greed corrupts the integrity of everyone.


Incentives matter and PEGs want companies to grow significantly over the life of their investment. The biggest reward comes from the future sale after you’ve increased the enterprise value 10x. I’m sure that people have made poor decisions for short-term gains, but the real issue is giving immense power to someone who has never actually had to build teams, ship products, or go the extra mile to make customers happy. Being “smart” doesn’t mean you always make good decisions. Some of the most successful entrepreneurs I know aren’t particularly book smart. In fact, much of their success could be attributed to the fact that they just didn’t quit when any other reasonable/normal person would have thrown in the towel.


Ehh some PEGs do not care about long-term. Often they want to do anything that can get them some lump sum dividends and bonus payouts. Cost cutting combined with a recapitalization cookbook, selling assets like real-estate, and then using the proceeds and borrowed cash for themselves. Terminal price won't matter much due to either the initial windfall or because someone else will be holding enough of the bag after recap.


I definitely agree with you. I would further argue that the "analyst mentality" has become so entrenched in the foundation of capitalism (and always has been, really) that we see companies across many industries closing down due to the "worker shortage". When in reality the CEOs of the companies complaining the loudest that "no one wants to work" have decided that paying living wages in a deeply competitive job market is less doable than letting the company totally stagnate and dissolve due to a lack of people willing to work hard for pennies.


I've long joked that Excel will be the downfall of western civilization, but it's less funny every year.


You either like Paul Krugman or you hate him, but there's no denying that Excel bugs have influenced economic policy.

https://www.nytimes.com/2013/04/19/opinion/krugman-the-excel...


The CEOs whose own compensation has been skyrocketing because that's just the market rate for talent, right?


I think the key that leads to 3b is that costs are hard to measure. I.e., I think it's less that the PEG are coming into it with the idea that they want to cut and are cherry picking for that, it's that they will happily do whatever the easily available data tells them is right, ignoring or handwaving the harder to get data.


I worked at a company that started to go through the "you can't improve what you don't measure" phase. In general it was good for the org I was in, but I used to have to remind management that there's a corollary to that saying, which is: you necessarily improve things you measure at the expense of the things which are difficult or impossible to measure.

This seems to be a hard one for some types to truly grok. A common response is that we need to figure out how to measure it, thinking there was some single magic number that things could be distilled down to. But often even if you figured out how to measure some of them, there's always other intangibles you're not tracking. So you need to always be conscious of it.


It's fascinating to me how many smart people fall into this trap.

Can they quantify their love for their partner? No? Well, I guess they have to give up any efforts to improve the relationship.


"If you spend 10% less on birthday presents for your children, will that make them love you 10% less? How can you justify spending this much on presents?"


There's another issue with measuring too much: if you only derive value from what you can measure, you will constantly feel bad when doing things that have no apparent measurable value. So then you say to yourself, relaxing on the sofa has value because taking breaks makes you perform better. But do you really believe this? And in any case, you still can't help assigning measurable value to relaxing, because you are brainwashed into thinking everything must be quantified in order to be justified. So even relaxing or playing video games must be seen in such a context for you not to feel bad. It's becoming ridiculous.


I also like to point out in a few similar contexts: even if you could measure all the things, and you could create a perfect/optimal cost function, there would still be no guarantee that you'd be able to find the optimal global solution. So relax.


"the most important figures that one needs for management are unknown or unknowable" -Lloyd Nelson, quoted by Deming.


The thing I have noticed is when the anecdotes and the data disagree, the anecdotes are usually right. There's something wrong with the way you are measuring it. - Jeff Bezos

You can think many things of Bezos, but he does know how to make a business make money. Data is important, but the things that are hard to measure, which you only learn through anecdotes, are also important.


Amazon's sprinting decline into absurdly gamed reviews and scam copycat products really doesn't support that quote.


If they don't measure it, does it exist?

I'm not disagreeing here, but it could be that the fraud is a drop in the bucket of Amazon's revenue. And just trying to get metrics on it may cost more than what Amazon benefits from it.

Tonight I bought a 3D printer hotend from the source company, because I couldn't be sure the product I ordered from Amazon would be genuine.

It's pretty bad when the products being sold on Amazon are labeled "Genuine assembled E3D V6 Hotend", when if Amazon gave a shit, it would only need to say "Assembled E3D V6 hotend."


Heh, I wonder if he would apply that to science as well?


Science advances in strange ways sometimes. Plate tectonics started basically with the anecdote of "South America really looks a lot like it fits into east Africa", but wasn't believed at first due to no hard data.

Anecdotes are not data, but when you have some with no data, instead of ignoring them you should take it as a signal to go search for data to support them. Great scientists and mathematicians usually have good intuition about what to research, not because they have data, but because their mental models of the subject tell them that there should be interesting data there.


I believe you mean West Africa.


Your story is the story of most companies. I watched "my" company get sold to private equity, and leading up to our "soon" sale they have been slashing the one thing they can, employee benefits (it's been a quick 3 year decline). I've watched more than half our team leave, much to the chagrin of befuddled execs.

"Why are they leaving?" "Well, we used to have meaningful raises, bonuses, profit sharing, etc. but you have consistently cut those things." "Yes yes, but why are they leaving?" "...."


Are you still at this company? What keeps you there?


Before we were acquired I managed to get an offer from another company to leave at one point.

I knew the founders well, and they knew I was critical to success, so they made me quite the offer to stay. For where I'm at, my salary is far above CoL, so I don't have an incentive to leave yet. That and I got some shares that will vest when the company sells this year.

But if I hadn't gotten that bump before acquisition, I'd likely be gone. It's impossible to get a raise now. On the plus side, I now work like it's impossible to get a raise, if you catch my drift.

Once it sells, I'll move on to something I love again. Planning to do consulting with some friends who started a co-op.


Someone has to turn out the lights and record the fall for posterity. You've never Grim Reapered a company before?


I worked at a company that went through a private equity acquisition and I have a question you might be able to answer.

If you exclude layoffs and incentivized retirement, it seemed that a greater percentage of individual contributors left as compared to managers. Lots of managers stuck around for 12+ months, and it seemed like the percentage of managers of managers (e.g. directors, VPs) who stuck around was even greater. Almost the entire C-suite stayed for years after the acquisition.

Were there any financial or other incentives given to managers to stay? As an individual contributor, the morale was just terrible. I just couldn't understand why the managers and other people in leadership positions stuck around.


> I just couldn't understand why the managers and other people in leadership positions stuck around.

Money. Investors typically carve out equity to retain key personnel (i.e., management units/stock). The units are worthless unless the company appreciates in value, so management becomes laser-focused on doing whatever it takes to increase the company's valuation. Everything else becomes a secondary concern.


I had guessed something like this was going on, but reading this just really makes me wonder what's really going on in the minds of private equity folks. The C-suite and senior leadership were the ones who drove the company to the point that it ended up in the hands of private equity, but they were incentivized to stay.

I get that the exiting leadership knows the company and it's hard to find new people. But it just seems so crazy to me.

We had teams that had 50%+ of people gone between layoffs and attrition. It was comical to see how much stuff fell between the newly formed cracks. I still can't comprehend how much they must have been incentivized to stay when the earth was crumbling around them.


As a one-foot-in manager, I tend to stick around as long as someone is still on the receiving end of one of my policies. Bad practice to not observe the unpleasant consequences with future aspirations to leadership.


this is basically the micro example behind the deindustrialization of the United States. Mitt Romney type corporate raider MBAs outsourcing everything to China for cost savings to maximize short term profit

expand this over 50 years by 1000s of similar groups of people doing the same thing and you go from the "Arsenal of Democracy" to begging China for masks and ventilators


An oldie that may be an interesting read for those who have had similar thoughts: https://www.theatlantic.com/magazine/archive/1993/12/how-the...


What a fascinating article. Thank you for sharing!


That, and the industrial base lobbying to allow "free trade" and subsidize expansion into countries which have high levels of corruption, low environmental protections and worker's rights.


Not just America unfortunately.

People at the very top unfortunately now listen to the bean counters far too much.

This really does make social problems of an us-vs-them sort, when all of the money manages to drift up, with only the false promises of altruism at charity galas being the sign of any of the generated wealth making its way back down to the workers...


Great Steve Blank post about this called The Elves Leave Middle Earth.

https://steveblank.com/2009/12/21/the-elves-leave-middle-ear...


> private equity group

The goal of a private equity group can sometimes be to extract as much money from the company as possible in a given time frame, rather than to ensure the long-term success or survival of the company.


And the goal of a PEG's employees is to get bonuses by hitting the metrics set for them.

Metrics are the only goal that matters there. It's all about making the numbers look pretty.


"and it was surprisingly difficult to quantify the impact that terminating an employee or cutting benefits would have on morale/culture/performance"

I think it is not surprising, when you consider that a precise weather forecast for more than 3 days is usually considered impossible.

When you deal with very complex systems, where small changes can have a big effect via self-reinforcing feedback loops, how can you expect to calculate all that?

And human social dynamics seem to be at least as complicated as the global weather.


I agree. Considering the complexity within organizations, imagine pleading your case to a couple of Stanford MBAs that are 1) incredibly smart, 2) operate in a PE culture that rewards managers that provide others with "negative feedback" to get results, and 3) believe in their hearts that cutting benefits is "optimizing" the business to maximize shareholder value. They can make a compelling counter-argument for just about anything.


I did not mean it as criticism. And I can imagine that situation, which is why I chose a different path, to not be in that position in the first place.

Not saying that my path is better, definitely not money-wise, but I knew early that I do not get along with a culture that is focused on assigning oversimplified numbers to people, which is what MBAs seem to be mainly about.


I agreed with what you were saying and updated my previous comment for clarity.


I worked at a place with a Lean Six Sigma certified specialist who, towards the end of the company's doom, effectively had the lead engineer cleaning out molding machines to track down every last tiny molded part that, over the course of several years of continuous running, had been flung outside of its target. Same guy told me if the Coke machine ever stole my change that he'd help me get it back from the vendor.


All the Lean Six Sigma stuff seems like another useless management fad to me that only benefits consultants. Is that what you're saying here?


Like many management system fads, it started as a useful kernel of wisdom or obvious maxim that idiots ran with and turned into a monster. In the case of six sigma, the idea is about constantly optimizing your workflows and not accepting "we've always done it this way" as an excuse.

But most people lack the critical thinking skills to correctly apply wisdom when necessary and instead need a solid framework to operate within. That's how these things inevitably develop. Just like how Agile is supposed to be about getting working code over being bogged down in process but inevitably ends up with half-baked products that have massive issues.

Six Sigma also gets applied to industries it has no business being in. The mentality works best when you have a fixed workflow. In manufacturing it would be if you are making a lot of one thing. You can do a lot of optimizing. But I have seen it employed in organizations where every project was vastly different. Instead of a lean approach it should have, and previously was, following a house of quality philosophy. This happened to be in an industry where cost was rarely a consideration but performance and reliability were.


Great comment and thank you. It has only popped up sparingly in my industry thankfully.

What you're saying makes sense to me in that you should never fall into the "this is how I've always done it" trap, but putting a complex bureaucratic process around that is just going to create a whole new problem.


> we've always done it this way

What was done for some time is at least to some extent proven to work. Any new way is not yet proven to work and can be worse (or can be better, but that is not guaranteed)


It’s an expression of distrust.

If you ever work in government, procurement people think like that because their goal is an objective, competitive process that meets the minimum standard to fulfill the purpose.

There’s a certain logic to it. You don’t want to see random government employees driving around in Teslas, so generally speaking they will be in nondescript 4-door sedans. Having a human say “no” makes them accountable, so a complex process will determine what kind of car you need.

Taken to extreme, it becomes a problem. Procurement officers get lazy and focus on their process instead of the needs of the customer. So they treat humans like Ford Tauruses and allow vendors who understand how to game the process walk out the door with millions.


From what I've seen after the attempt to adopt Six Sigma within a large pharmaceutical, 6S has several inescapable vulnerabilities due to not knowing:

1) which important company components produce signals that aren't measurable (e.g. internalities like deep domain knowledge in R&D or externalities like market swings or competitor behaviors),

2) which signals are too dynamic or complex to optimize, and

3) which signals are insignificant, irrelevant, irreproducible, or simply meaningless noise.

Applying Six Sigma to R&D illustrates these caveats nicely.


I'm only speaking towards this one particularly useless buffoon, but the fact that he was allowed to wield any sort of power over anyone says something.


This is the kind of thinking that is wrecking a business I’ve just quit. Management consultant decides that we need more customers so instructs us to open the product up globally. We explain that this is likely to be both expensive (lots of testing needed due to the usual geographic differences), but more important, based on our collected data and the use case (geographically local content), this expensive change is statistically likely to result in only a single new customer (literally, one).

But “every customer counts” so other activities which are likely to result in more customers are put on hold to capture a single customer.

The problem in these cases is often that the management consultant doesn’t consider the resource limitations and opportunity costs of their decisions, particularly if they come from a much larger business. If you have only one lead engineer, then getting them to chase discarded parts (or, in my case, a single customer) makes no economic sense at all.

Quite often, the problem is not a focus on measurement per se, but rather the very human problem of focusing only on those metrics that support the analyst’s intuition.


Spent 3 years under a PE group, and the "management by spreadsheet" was very real, and utterly corrosive. The constant squeeze meant eventually everyone started to realize the only people who were going to make any real money/have benefit was a handful of PE folks and their satellites; everyone else would be squeezed until breaking. Even where our CoC were good (as mine was), we knew the eventual pressure would make it all buckle, so get out before it breaks you.

Some of us chose to keep ourselves sane.


Interesting observation. The PEG managers would probably admit that turnover had some financial cost, but I doubt they ever actually put a number on it and added it to their calculations. Am I right?


"Frupidity" is a term I've heard used for this.


Granted, bean counters fail as leaders (typically), but I would strongly argue it's the job of leaders to explain up-front costs and value to bean counters.

E.g. yes not providing free/cheap coffee/drinks in the office saves money, but it has a damaging cost in terms of employee attitudes and getting people to talk openly in a friendly environment about what they're working on and being able to ask for help.

These things can't easily be measured directly but can be quantified in a way that lets bean counters see the true value of the little extras...


When a company stops providing free coffee/snacks, many employees will see this as a signal that the company is going down and it is time to look for a new job. Such signals are hard to quantify but they are real.


Hm… this reminds me of the streetlight effect. Not everything that needs improvement/optimization can be measured properly, and not everything measurable should be a KPI. Tangentially related: I witnessed a few times when this caused failure in the application of data science to business. The company wasn’t ready, did not have proper data infrastructure, vision, or skillset, but models were still built and deployed to no avail.


another example: Ford and GM accountants making engineering decisions in the 70s and 80s. Poor quality and deaths were the result.


Just to play devil's advocate, surely their approach is more rational than that. They're probably looking at it from the perspective that the business needs to have a profit margin of X in order to justify investing in it.

They probably do understand that cutting costs impacts company culture and morale. But shutting the company down probably impacts that much more.


They do understand that cutting costs will have an impact on culture and morale, they just think the marginal benefit exceeds the marginal cost. Keep in mind, PEG managers are chasing a carried interest bonus which they only achieve after covering the minimum return promised to their investors. Plus, leveraged buyouts--which PEGs frequently use--increase a company's risk of failure. Everyone's under intense pressure to perform.

Massive Financial Incentives + Highly Leveraged Balance Sheet + Intense Pressure = Risky Decision Making


"But shutting the company down probably impacts that much more."

Is that the only other option?


I just completed running a transformation of a F500 company and these were the players who shut it down.

We were on the cusp of turning the corner and the consequences of their approach are staggering in terms of people, revenue, and client relationships.

I’m looking to discover how I might more effectively engage them in the future.

Any advice?


This is ultimately a cultural challenge, not an analytical challenge. You don't win cultural battles with intellectual debate, rigorous analysis, or facts. You win cultural battles with politics and power. That means finding a way to get "the wrong people" off the bus and "the right people" on the bus.


Never framed it that way before. Definitely maps. Thank you.


Given your background I'd love to talk to you. Since HN doesn't have private messages as far as I know, would you be willing to contact me at the email address in my profile?

Thanks! Neil


Check your inbox.


While your experience sounds painful it also sounds like something that would have happened in the 90s. I don't think most tier 1 organizations still think in that way.


The company exists for the stockholders, not for the employees.

Stack ranking is still used widely throughout Fortune 500 companies, which is one of the most culture destroying management practices known to man.


Less than a decade ago I worked at a company that was acquired by a private equity group. What jscode said matches my experience. For a while I also tracked (on Glassdoor and some other sites) companies that were purchased by that firm, and it seems like employees at different companies had the same experience. EBITA was king; nothing else mattered.


I’m not the type to pass up making a Star Wars reference, so here goes:

“One would think you Jedi would understand the difference between knowledge and… heh heh… wisdom.”


Surprised not to see Goodhart's Law[0] referenced here - "When a measure becomes a target, it ceases to be a good measure". Not the same concept, but a related one (as is the Cobra Effect[1], of which the poppy-field burning is an example)

[0] https://en.wikipedia.org/wiki/Goodhart's_law [1] https://en.wikipedia.org/wiki/Perverse_incentive#The_origina...


From [1]: "It was discovered that, by providing company executives with bonuses for reporting higher earnings, executives at Fannie Mae and other large corporations were encouraged to artificially inflate earnings statements and make decisions targeting short-term gains at the expense of long-term profitability."

What a shocking revelation! Could it possibly apply to other companies that report quarterly results? ;-)


I vividly remember a "small-hands" (smaller all-hands) very early in my career, where the VP basically said "we recognize that oncall is a burden, and we want to compensate you for it. At first we thought we should give you a bonus related to how many times you get paged - but then we realized that that's incentivizing you to build flaky systems. Then we considered giving a bonus in inverse relation to how often you're paged - but then we thought, no, they're smart engineers, they'll just turn off the monitoring and collect the bonus. So we're giving you a flat rate".

In the absence of some untamperable objective way to measure service health (the concept of SLAs was a distant dream at that point), can't fault that reasoning, tbh!


In a documentary about the Vietnam war (I think it's Ken Burns's), there's an army guy who says this:

> When you can't count what's important, you make what you can count, important.

That's the McNamara fallacy in a nutshell. Kill counts started to become the end-all be-all of the war because bodies are the easiest thing to count, but it wasn't the relevant metric.

Tragedies are often the result of people persisting in something and failing to see the big picture. Even more so when numbers seem to tell them they're right.

It doesn't matter that the numbers are right if we're looking at the wrong numbers.


I also think that Goodhart's law is a stronger effect than the McNamara Fallacy for Afghanistan; when people are actively faking results, then that's not so much measuring the wrong things as just flat out failing at measuring things.


The most important line from the article:

Those who are concerned about falling into the trap of the McNamara Fallacy shouldn’t abandon quantitative measurements and metrics.

A key example is Moneyball: https://en.m.wikipedia.org/wiki/Moneyball

In my line of work, JavaScript, people will intentionally ignore data that challenges their favored approach. Measuring things is important, but all these examples illustrate that measures are irrelevant when ethics are absent, which is worse than measures applied too narrowly as in the McNamara Fallacy.


I believe the McNamara Fallacy is a real thing, although I would say Goodhart's Law expresses the more fundamental issue. But, the examples they give (Vietnam and Afghanistan wars) are bad examples, because in both cases the issue was primarily not that the U.S. military was relying too much on quantitative metrics, but rather that it was being asked to do a job which militaries are not good at.

In neither case, despite what many deniers and apologists proclaim, was the war "winnable" by the military, because in both cases the primary problem was that the government of the nation in question did not have the respect (and thus support) of its people. There is no strategy that an outside military can use, which will fix this, because it's not a military problem, and a military has the wrong personnel, the wrong training, and thus the wrong institutional mindset.

It is as if you sent the Red Cross in to defeat the Viet Cong or the Taliban, and then concluded that the reason they failed was that they were measuring numbers of hospital beds and counting how much medical equipment they needed. It was the political leadership, who decided this was a job for a military to perform, that was the root problem. McNamara's metrics were neither here nor there.


Yes and no. It is a difficult problem, but it has been done before (https://en.wikipedia.org/wiki/Philippine%E2%80%93American_Wa..., https://en.wikipedia.org/wiki/Malayan_Emergency, https://en.wikipedia.org/wiki/Yugoslav_Wars). There is a rough roadmap about how to go about it, something along the lines of suppressing the rebels and providing security for the majority of the populace while you reform the government and establish its credibility.

In Vietnam, this was made more difficult by the international situation and by the ease of outsiders intervening to support the Viet Cong (such as the NVA). Similarly in Afghanistan, with the additional problem that the US never realized that Afghan nationalism isn't a thing. (My interpretation; the US Army has a big, long analysis of what it thinks went wrong that I haven't read.)

The problem is that quantitative metrics hide the difficulties and provide a false sense that the situation can be resolved easily and cheaply. McNamara's body count idea makes a certain amount of sense, given that the strategy there was to cost the NVA so much that they could not continue to push troops into South Vietnam, which might have worked if that was the only major problem.


In Vietnam the problem was the 15ish year long civil war predating all the things. In Afganistan, the same thing, only longer. This stuff can't be fixed by landing some marines, it only turns it into a shit-throwing match about who has better SAMs and logistics. Which is a definition of a proxy war.


While I grant you the Philippines was somewhat of a success, I don't think either the Malay example (where the same combat flared up again within a few years https://en.wikipedia.org/wiki/Communist_insurgency_in_Malays...) or the Yugoslav wars (where it was a war between ethnicities, and it most decidedly did not result in Yugoslavia being put back together) would count as successes. But the Philippines example does seem plausible. Even then, it essentially ended because the US decided to let the Philippines elect their own assembly, clearly setting them on the path towards independence. Maybe the lesson is that if your military is involved in trying to suppress the rebels, declare victory and leave, without making too many demands of what kind of government is there once you are gone.


The Malayan Emergency (1948-1960) is pretty widely regarded as the successful counterinsurgency program. (It also featured little to no external support for the insurgents, who were massively outnumbered by the British. Did I mention that this wasn't an easy problem?) The second insurgency (1968-1989-ish) was largely handled by Malaysia itself, as I understand it, without outside forces.

I was debating the former Yugoslavia, but no one was really interested in rebuilding Yugoslavia; they wanted the region to be quiet and relatively stable. Which it seems to be today and wasn't without the continuing threat of military force from sometime during the Austro-Hungarian empire.

Before you draw lessons from the Philippine-American War, look at the timeline: it officially started when the US took over the management from Spain in ~1899 and ended in like 1913. The (new) Philippine government was created in 1902 (https://en.wikipedia.org/wiki/Philippine_Organic_Act_(1902)) with the cessation of the official war. The US kept a military presence as well as official control until WWII, with apparently a good bit of support from the Filipinos.

The US was very specific about the kind of government it set up in the Philippines and not only didn't leave but stayed and supported the government against the remnants of the insurgency.

If you're interested, I strongly recommend 21st Century Ellis, a (small) collection of articles by Pete Ellis, a Marine officer in the Philippines who went on to be part of WWI and to write the basis of the US Marines' part of the pre-WWII plans against the Japanese Empire. (He drank himself to death while spying on the Japanese, and the man sent to recover his ashes was himself killed in the Great Kantō earthquake. It's all very Lovecraftian.) His "Bush Wars" (?) is roughly the basis of the US counterinsurgency strategy that the US couldn't be bothered to follow in Afghanistan.


> In neither case, despite what many deniers and apologists proclaim, was the war "winnable" by the military

Basically, to win that war the US military should have been able to educate its politicians (i.e. the US politicians) beforehand and remind them that, as "war is the continuation of politics by other means", in countries like Afghanistan and Iraq a greater focus should have been given to politics before trying the war thing. After all, that's what a guy like Clausewitz did: a general and military theorist whose work ended up educating lots and lots of politicians.


To quote C. Wright Mills, writing in 1960:

> They know of no solutions to the paradoxes of the Middle East and Europe, the Far East and Africa except the landing of Marines. Being baffled, and also being very tired of being baffled, they have come to believe that there is no way out—except war—which would remove all the bewildering paradoxes of their tedious and now misguided attempts to construct peace. In place of these paradoxes they prefer the bright, clear problems of war—as they used to be. For they still believe that "winning" means something, although they never tell us what.


From "The Causes of World War Three". Of course, America's also had a lot of trouble since WWII simply defining what 'winning' a war means.

Timely; I bought Mills' "The Power Elite" only yesterday.


While we're here I might as well add a link to Scott Atran's RSA lecture about his book Talking To The Enemy, where he discusses his findings about radicalization and deradicalization, a topic very much related to the discussion here. It's one of those youtube videos I wish more people would watch:

https://www.youtube.com/watch?v=6ijmBd69878


I do not believe that the war was "not winnable" in Afghanistan. The Soviets managed to defeat the Taliban precursors (Mujahideen), and left behind a Soviet-propped government that lasted longer than the Soviet Union did (3 years after they left). That's as close as you can get to an example of a "win" in Afghanistan -- so don't say that the war was not winnable.


McNamara basically admits this himself in the documentary The Fog of War:

https://en.wikipedia.org/wiki/The_Fog_of_War

It's worth a watch but it's very soft on him and his role. Still a very good documentary.


Was looking for this comment. In McNamara's reflection, a tenet he called out was to understand the enemy. The US didn't understand the Viet Cong's motivation for fighting the war (freedom from colonizers), whereas the US viewed it as part of a larger Cold War. The same thing happened in Afghanistan; we didn't learn.


I'd argue the US has not tried to understand the enemy since the Cuban missile crisis. There are more failures outside of Afghanistan, and I think the US is going to walk right into another one.


Great documentary. I felt it was a person trying to come clean and ease his conscience on his deathbed.


Agreed. Great documentary. The part where Castro said he urged Russia to launch their nukes from Cuba at the US, knowing it would destroy Cuba, was chilling. Humans are not always logical. Don't assume somebody won't drag an entire country or the world to total destruction for some deranged cause.

I didn't get the death bed vibe from McNamara but I definitely felt that he was genuinely reflecting on the past.

The documentary on Rumsfeld was the polar opposite. I could also see Rumsfeld not wanting to give the enemy of an ongoing conflict any shred of material. It makes for a less interesting documentary.


Morris himself said something like he didn't really feel he got to know Rumsfeld and had no real idea what was in his mind, compared to his previous interview with McNamara.


Guess Donald himself was one of the “unknown unknowns”


If that was him on his death bed he must have been seriously sharp in his prime.


I didn't live in that era, so maybe it's not my place to say whether Morris was particularly soft on him. I think the fact that he said it was the president's responsibility revealed a lot about him.

On the other hand, given the Fog of War did play back a recording where Johnson had a much different view on Vietnam than JFK, I wouldn't put the burden solely on McNamara's shoulders either.

Regardless of what one might think of his role, it was still quite enlightening, and I think more people should watch it. I think the lessons outlined in it are useful, but too few have taken heed of it.


I haven't seen that one, but I've been watching the Ken Burns documentary recently. It seems suitably fair. Where maybe some of the proposed "McNamara Fallacy" breaks down from OP, according to archives that they go through in the documentary he knew his approach wasn't working for a long time, he just did not (or would not, or could not, depending on your perspective) say so publicly and didn't seem to have any other way to measure progress.


> In another case of military metrics gone wrong, the US military reported success in undermining Taliban financing after it paid Afghan farmers to destroy their crops of opium poppies. What went unreported, however, was that the farmers planted larger fields of opium poppies in response, in the hopes that they might be paid by the US military to destroy the crops again. When US payments didn’t come through, the opium was harvested and entered the international drug trade. Much of the profit went to support the Taliban’s anti-American military operations.

That's pretty ironic because it's a near-perfect replication of the Cobra effect, the canonical example of bad incentives leading to unintended outcomes: https://en.m.wikipedia.org/wiki/Perverse_incentive


The core premise of the McNamara fallacy is not wrong. However, this article makes a critical mistake about the origin. McNamara was not just a business executive who believed he could take his knowledge to the Defense Department. He was an officer in the Army Air Forces' Statistical Control office that organized the Air Force during World War II into the successful machine it became.

The fallacy is not that success in business is transferable to war. The fallacy is that the successes of World War II were transferable to Vietnam.


A. Please do not center-justify paragraphs; this is really annoying and makes things very hard to read.

B. It is very tricky to draw conclusions about life and business from US conquering military campaigns from within the US, because these campaigns have been shrouded in so many layers of lies, self-delusion, and bullshit that a lot of the generally accepted facts are just not true, and thus not able to teach us anything.

By the way, I do not think America is in any way unique in this. Every country's wars of conquest are carefully explained with heaps of bullshit from that country. There are some very rare exceptions, as with Germany and Japan after WW2, but usually every country carefully shapes the official story so that its wars of conquest appear not to be completely based on greed and lust for power, when they usually are.

So let's talk about McNamara and Vietnam. The problems there had little to do with some kind of duality between the quantitative and the qualitative. In fact, if McNamara had honestly followed quantitative analysis, he would have reached very different conclusions about Vietnam. If, for example, he had commissioned a very simple, very quantitative opinion poll among the Vietnamese about whether they would prefer to be ruled by a puppet regime installed by a foreign country or by the independent patriotic leader who had just defeated the French colonialists, he would have seen that the official reason for invading Vietnam was just not valid.

It is not clear whether such a poll has been commissioned but McNamara probably knew what the results would be, because the US never permitted a free election in South Vietnam. Thus, faced with a simple quantitative fact against the war, the quantitative managerial McNamara ignored it and suppressed it.

So no it is not about the quantitative vs the qualitative, it is the usual problem with wars of conquest by rich countries. People in positions that would benefit from the war carefully and persistently create a web of lies and delusions that is intended to drive the country to war.


I think the most HN-relevant elements of the fallacy are not about quantitative vs. qualitative.

> The McNamara Fallacy is to presume that (A) quantitative models of reality are always more accurate than other models; (B) the quantitative measurements that can be made most easily must be the most relevant; and (C) factors other than those currently being used in quantitative metrics must either not exist or not have a significant influence on success.

(A) is quant vs. qual, but (B) and (C) are about fooling yourself into thinking you're doing something useful just because you're quantitative.

And while I agree with pretty much all your commentary about the war, I think this would be a very qualitative (not quantitative) assessment on McNamara's part:

> It is not clear whether such a poll has been commissioned but McNamara probably knew what the results would be, because the US never permitted a free election in South Vietnam. Thus, faced with a simple quantitative fact against the war, the quantitative managerial McNamara ignored it and suppressed it.


What is the name for the fallacy of thinking that wars are always meant to be won? Throughout history wars have been fought without any intention of "winning". Sometimes a war can serve a religious or ceremonial purpose, one that doesn't require a clear winner. Other wars have been fought for completely economic reasons. Others are proxy contests whereby greater powers can demonstrate their abilities without directly engaging each other. The false thinking is the assumption that participants always want or even care about winning.


Wars might have unclear objectives and mission-creep, but I don't think anyone is fighting to lose.


The question is about what "winning" means for the people who start wars and keep fighting them. It might not be the same as for you.

To quote Julian Assange, "The goal is to use Afghanistan to wash money out of the tax bases of the US and Europe through Afghanistan and back into the hands of a transnational security elite. The goal is an endless war, not a successful war".


I just don't buy that line of thinking at all. If that is truly their goal, there are way easier and subtle ways to do it without causing a collapse in political capital for the parties and politicians involved, which the Vietnam, Iraq, and Afghanistan wars did.


Such as? What other mechanism makes it possible to pay $700 billion per year to the private sector for something that the taxpayer can't see, because it's secret?


Medicare for All would cost well over $4 trillion per year according to the Lancet. That would be going to privatized hospitals, doctors, big pharma, medical consultants, supplies, equipment, diagnostics, lab companies etc...

But I would point out to you that the military's budget is $700B regardless of whether the US is engaged in foreign armed conflict or not, and half of that is personnel costs like salary and housing allowance. Only about $300B goes to procurement of all types (weapons, clothing, food, cleaning supplies, fuel, paint, etc...)


“Medicare for All” can’t be hidden from the taxpayer though. Also it doesn’t exist.


I would assume the expectation came about after WW2 where losing meant devastating ramifications for the losing party.

On the other hand, a situation where warfare is considered with less weight and investment feels like it'll embolden states to return to a state of frequent warfare.


Others can be fought just to cause destruction, or maybe that falls under your economic reasons.


Two examples come to mind:

- Investing your money with the fund manager who has the highest returns over the last year, or even 5 years, is called "performance chasing" and generally has worse returns than investing in an index. Think investing in dot com stocks in 1999, and real estate in 2007.

- In Moneyball, about using stats to improve a baseball team, they didn't care how many home runs someone hit. In fact, getting runs that way was considered bad. They wanted to get players on base and then get them home, even if through walks. There's a part of the book where a player is getting a lot of home runs, and therefore increasing the team's score, but the manager doesn't like it. "It's a process", "we have a process" he keeps saying.


McNamara cut his teeth on real-world problems in the AAF's statistical control office and worked with General Curtis LeMay to plan how best to use the B-29 bombers. He became an expert with statistics and other math tools and thought they could be widely applied to business and, of course, war. Once he had these marvelous tools he found nails everywhere.

For instance, the rate of German tank production was accurately estimated by collecting the serial numbers of all the tanks (and some tank components) that were knocked out. The best methods of attacking submarines were worked out this way, as was where to place armor on bombers, etc.
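That serial-number trick is the classic "German tank problem". A minimal sketch of the standard frequentist estimator (the serial numbers below are made up for illustration, not the historical data):

```python
# German tank problem: estimate the size N of a serially numbered
# population from k samples drawn uniformly without replacement.
# The minimum-variance unbiased estimator is m + m/k - 1,
# where m is the largest serial number observed.
def estimate_total(serials):
    k = len(serials)
    m = max(serials)
    return m + m / k - 1

# Illustrative (not historical) captured serials:
print(estimate_total([19, 40, 42, 60]))  # -> 74.0
```

The intuition: the observed maximum underestimates N, and the average gap between ordered samples (m/k - 1) is the natural correction for how far above the maximum the true ceiling likely sits.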


> Once he had these marvelous tools he found nails everywhere.

He also found two presidents and a sycophantic media that hung on his every word.

His figures were excellent. In the earliest days of Vietnam, long before anyone outside of Asia could find it on a map, he produced eerily precise predictions of the costs -- in lives, dollars and time -- of the future conflict. He told them what a civil war in the jungle would look like and they pulled the trigger.

As far as the value of measurement goes, I think most of the low-hanging fruit has been picked (a consequence of the "Information Age") and what we struggle to 'measure' today is far less tractable. As a result, our measurements are frequently corrupted in the service of prevailing agendas, or not permitted at all for fear of undesirable results.


Does the McNamara Fallacy have any application to our response to COVID? I often hear pundits of all sorts - medical doctors, epidemiologists, politicians, CDC scientists, etc. - make statements such as "the data shows this" and "the data says that" and then follow up with "therefore the science says we must all do such-and-such". I hold a rather narrow, rigorous - maybe closed-minded - opinion of what is 'science' (so to me 'social science' is an oxymoron), thus I have a degree of skepticism when data analysis is relied on too heavily for making conclusions that are then called 'scientific'.


> Does the McNamara Fallacy have any application to our response to COVID?

In my opinion, yes. Many politicians and media are fixated on stats like "Covid cases" whereas society depends on multiple variables to function including the health of the economy, mental health and social cohesion of the population, people having hope for the future, the freedom to make choices for yourself, and people being able to exercise and not living unhealthy lives, etc.

> "the data shows this" and "the data says that"

This kind of objective data is generally science.

> "therefore the science says we must all do such-and-such"

This kind of thing is a policy or risk-management decision being wrongly conflated as science.


More like the data about COVID is used as fig leaf for metrics in other areas actually driving the decisions, or ideology/dogma.

If you go with hard data and experience, you'd make hard moves like China (and many other Asian countries) did. In fact, similar moves have been made in the past in Europe (on the communist side of the Iron Curtain) to stop epidemics, including even manual contact tracing.

But because a non-trivial force in decision making has strong other incentives, and because of dogma like disbelief in aerosol transmission, we end up with really bad decisions with fig leaf of data analysis.


> because of dogma like disbelief in aerosol transmission

It's not just that COVID can spread via aerosol. It's more so that the western world extrapolated the definition via a study on TB, neglecting that TB needed to infect deep in the lungs.

https://www.wired.com/story/the-teeny-tiny-scientific-screwu...

There is a scientific paper version of this, btw.


I don't understand what the fallacy is. Is it that an overreliance on data can overlook factors not in the data? Why is that a surprise? You would also have to show that not relying on data would generate better results.


It's probably one of those things that might seem more obvious in simple cases, but might surprise folks in more complex cases.

For example, a simple case: Say you go on vacation for a few weeks with a certain amount of cash to spend. Upon arriving at your destination, you immediately purchase some indulgence, and, hey, you're feeling better! Why not immediately keep spending as much as possible to maximize?

For example, a complex case: Say you're leading a country. You set up policies that, after a few years, seem to have led to a higher GDP than was previously expected. Does that suggest that the policies are leading to a better future?

In the simple cases, like the vacation-example above, it's easy enough to understand the scenario and what's going on. And we can imagine that, hey, immediately spending all of the cash might lead to poor consequences for the rest of the vacation, even if they display some quick-satisfaction at first.

But in more complex cases, like with a country's policies leading to a higher GDP, stuff can get trickier. We might say that it's the lack of a top-level model: unlike in the vacation scenario, where we were easily able to predict that there'd be a lack of money later in the vacation, it might be harder to say what else might be going on besides the GDP going up. And all other things held equal, presumably a higher GDP would be better than a lower GDP, and therefore all evidence points toward the policies being a good idea, right?


GDP is actually a great example. It's actually very easy to increase a country's GDP if that's the only thing you care about. You just borrow more money and then spend it. GDP is literally a measure of money changing hands inside a country. If you borrow as much money as you can and then spend it on things like infrastructure projects, you can instantly increase the GDP. Banks see the growing GDP figure, assume that it's a good thing, and let you borrow more money. That is, until they don't.

This happened with Brazil in the 1970's, when the oil crisis made Brazil believe that they were going to face a downturn. To overcome this, they borrowed money and went on an infrastructure spending spree. This made Brazil look like an economic miracle, growing when everyone else was struggling. This encouraged more banks to lend to Brazil. The problem is the money was spent on short term growth instead of things that would more systematically grow the economy over the long term. The government (and lending banks) was substituting year-to-year GDP numbers for economic health. Once credit tightened in the early 1980's, Brazil's growth plummeted.

This is of course an oversimplification, but I think we are seeing similar problems in modern economies. People often cite China's GDP growth, but they don't balance that against the debt it is taking on in order to finance that growth.


> It is that an overreliance on data can overlook factors not in the data? Why is that a surprise?

The vast majority of PMs I've worked with don't seem to understand this at all. If it's not in the metrics, it doesn't exist!


I think the fallacy is when factors not captured or tracked by data are treated as non-existent or inconsequential. In Vietnam, they tried to quantify how pacified each village was as a measurement of success. It did not work. This isn't saying no data is better, just that data on its own does not win a war, or the hearts and minds of villagers.

See also: https://www.theatlantic.com/technology/archive/2017/10/the-c...


My understanding is that the fallacy is cherry-picking a few data points that are easy to measure and then claiming that these are the most important data points, and then optimizing execution for moving them in the "right" direction. This is described in the metaphor of "searching under the streetlight". The key to this con is to act confident and to challenge any objections as lacking supporting data.


It probably tries to make the point that when a good metric becomes a target, it ceases to be a good metric - either through people gaming the metric, or because of diminishing returns.

But I admit the article gets to its point in a muddy and circuitous way, typical of business school anecdotes.


> Why is that a surprise

It's surprising to many.

Many people will decide they are "data driven" and act like this gives them infallibility and total correctness. Lots of people need to be told this.


The real question isn't "Why is that a surprise?"; it's "Why does it keep happening?"


There are a bunch of factors and implications in the piece which I had hoped they'd go into. Some of the more obvious ones are more organizational than about the data per se:

- the data you are collecting may not include factors that are consequential to outcomes. Treating these unmeasured factors as inconsequential is hazardous. The piece specifically mentions this effect. It is not a surprise, and yet many organizations fall into this trap, so it seems worth mentioning.

- the difficulty of measuring some factors will result in their exclusion from the dataset. Some things are intrinsically hard to measure, and many organizations will as a result refuse to measure them, and make decisions without them. There needs to be a conscious and active process organizationally to resist this and find effective ways of measuring them.

- the difficulty of measuring some factors will result in easier but less useful proxies being used in their place. Same as above just with a somewhat different outcome. Organizations are often blind to this happening as they come to believe the proxy is as good as the real measure (or sometimes even that the proxy is the measure). A good industry example of this is clickthrough rates being treated synonymously with audience interest or content quality.

And a point I wish the piece made but did not:

- measuring something does not grant automatic understanding of the phenomenon, nor does it give you predictive power. It's one thing to quantifiably know the blue button gets more clicks than the green button, but that grants you no insight into why. Many tech companies fall into this trap: despite investing heavily in experimentation, their modeling of the product space doesn't improve over time, since they fail to convert observations into generalizable hypotheses that improve their model of the product and market.

This last effect IMO is huge in our industry, and is why the same low-level experiments are re-done over and over again. This creates more product churn and reduces product velocity, since the lack of proven product models means you're mostly flying blind and relying on ex post facto experimentation on live users to figure out what to do.

Experimenting on button colors is all well and good, but the end result of that data should be a color theory that explains what colors to use when, not forever A/B testing every single button color for the rest of time.
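To make the button example concrete, here's a minimal sketch (hypothetical numbers, plain Python) of what an A/B test actually delivers: a statistically solid "blue wins", and nothing at all about why.

```python
# Hypothetical A/B result: the test tells you *that* blue beats green,
# but carries no explanation of *why* -- measurement, not understanding.
from math import sqrt

def two_proportion_z(clicks_a, n_a, clicks_b, n_b):
    """z-statistic for the difference between two clickthrough rates."""
    p_a, p_b = clicks_a / n_a, clicks_b / n_b
    p_pool = (clicks_a + clicks_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Blue button: 600 clicks in 10,000 views; green: 500 in 10,000.
z = two_proportion_z(600, 10_000, 500, 10_000)
# z is comfortably past 1.96, so "ship blue" -- but you still have
# no color theory, so next quarter you run the same experiment again.
print(z)
```

The number is real evidence; it just isn't a model, which is exactly the distinction the comment above is drawing.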


If understanding is what you seek then https://www.youtube.com/watch?v=eNDsd798HR4 is a nice long discussion on McNamara overlooking factors not in the data. The fallacy didn't earn its name for simple reasons.


The US did not lose the war against the VC; they stopped supporting South Vietnam's war against the VC and withdrew because the war became unpopular. The distinction is important because ultimately it wasn't America's war, and the VC did not overwhelm the US military into surrender. It was the civilian government of the US that directed an end to their involvement. If anything, China won a cold war against the US, which feared a direct conflict and caved in.

With Afghanistan, the war against the Taliban government was won 15+ years ago. The US thought it could play nation-builder like it did with Japan. The continued involvement in Afghanistan was part of the ongoing "war on terror", meant to reduce the chance of Afghanistan becoming a haven for terrorists again. The war against Afghanistan was won; what the US lost was a prolonged occupation. If you look at history, the victor would more or less burn the invaded country to the ground and enslave its people to avoid similar long-term issues; one can argue it was both self-interest and an ideology of being humane to civilians that led to the wasted post-war occupation of Afghanistan.

Regarding metrics, I am fighting this right now. I found out what was expected of me early on and worked to extract that value out of myself and support my team, but in the end my struggle is explaining metrics that reflect a year's worth of work yet capture at best 30% of what is expected of me. Either way, I can't complain: management isn't easy, but meeting metrics is. What I dislike is that having good metrics and stopping there isn't enough. Metrics will always set a maximum ceiling on quality; if management can accept that, I have no problem with metrics-driven performance appraisal so long as it pays.


> The US thought it could play nation-builder like with Japan. The continued involvement in Afghanistan was part of the ongoing "war against terror" to reduce the chance of Afghanistan becoming a haven for terrorists again.

I would more easily characterize US external politics as nation-destroyer than nation-builder. Even if you disregard the evidence, it's quite a stretch to believe that the situation in Afghanistan is solely due to incompetence despite US good intentions.

> If you look at history, the victor would more or less burn the invaded country to the ground and enslave its people to avoid similar long term issues

Even if this has happened with some frequency I'm not so sure it's the rule; certainly there are many exceptions. In any case, it's no argument for doing the same.


> Even if this has happened with some frequency I'm not so sure it's the rule; certainly there are many exceptions. In any case, it's no argument for doing the same.

The only time there is nation building is after permanent colonization or assimilation into an empire. The US did pretty much burn the country to the ground; my argument is they should have left right after. Same with Iraq. If they harbor terrorists again, give them round two but worse. Let the people self-determine and fight for whatever government will prevail. They shouldn't have worried about what happens after they left, so long as they made the price of harboring terrorists clear.


“There’s the old apocryphal story that in 1967, they went to the basement of the Pentagon, when the mainframe computers took up the whole basement, and they put on the old punch cards everything you could quantify. Numbers of ships, numbers of tanks, numbers of helicopters, artillery, machine gun, ammo—everything you could quantify,” says James Willbanks, the chair of military history at U.S. Army Command and General Staff College. “They put it in the hopper and said, ‘When will we win in Vietnam?’ They went away on Friday and the thing ground away all weekend. [They] came back on Monday and there was one card in the output tray. And it said, 'You won in 1965.’”

https://www.theatlantic.com/technology/archive/2017/10/the-c...


I'm amazed they didn't mention McNamara's 100,000. There's so much to learn trying to not repeat this guy's mistakes. Project 100,000 is an enlightening piece of history, to say the least.


This article is good, but not great - the author only gives one example of how quantitative-only reasoning can be bad (the example of the poppies). The other "example" is just the US military lying.

There are also no specific examples of non-quantitative reasoning that, if ignored, would be damaging.

I feel like the Wikipedia article does a better job explaining this: https://en.wikipedia.org/wiki/McNamara_fallacy

Also, why is there an entire webpage dedicated to this?


The example with the US military lying certainly demonstrates another issue with overvaluing metrics: making sure you have good data. People who get wrapped up in metrics also tend not to scrutinize the data they are being presented. One of the issues McNamara had was that he was receiving inflated body-count figures. [0] Not only was he measuring the wrong thing, he was measuring it badly.

[0] https://en.wikipedia.org/wiki/Vietnam_War_body_count_controv...


It's also a good example of perverse incentives or "gaming the metrics" (in a horrifying way). If the main measure of success was body count, and every kill was considered an enemy by default, this encouraged just killing people and counting them as successes.


I was in Vietnam in a Naval Support Group. I had conversations with army officers who had combat experience and learned that, since no statistics were collected about dead civilians, all the dead were counted as enemy soldiers - and everyone from the top down knew it.


The webpage appears to be made to serve as a warning to data-driven businesses to not fall into the same perverse set of incentives that McNamara created, to encourage business managers to diversify their accounts of the success of their business beyond just the quantitative narrative.


I'd be more cynical and suspect this is some kind of elaborate SEO strategy. Submit the site to social networks -> wait till gets some love from the Google algorithm -> put ads on it -> profit.


Possible, but this seems of rather a niche interest for a successful SEO strategy. I could grant that it would then be targeted specifically at the HN sort of social networks.


I don’t really understand the narrow definition of quantitative analysis referenced here and elsewhere. If you know your metrics are failing you, then you implicitly have a more advanced model of success. Define this more advanced mental model so it can be invalidated.

If your mental model is not defined narrowly enough to be invalidated, then how can you improve upon it? I think the lesson here was that the measure of success was defined poorly, not that quantitative analysis is a flawed practice.


I wonder if opposition will become widespread to the dominant hyperquantitative, every-fart-tracked-in-Jira, can't-measure-can't-manage trend.

Which was never pushed by anybody in the industry, but got spread by project management frauds.

https://deming.org/myth-if-you-cant-measure-it-you-cant-mana...


It should be remembered that McNamara cut his teeth in the Office of Statistical Control, working with Curtis LeMay on the use of B-29 aircraft. He was proud to have been part of a larger effort that used statistics and other math to help win the war, and with that hammer, most everything could be a nail.


The article attacks measurement based approaches, in general, which I don't think is supported by the anecdote of McNamara's failings in Vietnam.

Kill count was simply a terrible metric of victory.

Vietnam was a civil war, which requires a political victory. This means allegiance of local leaders and popular support – both of which are measurable values. A kill count is often inversely correlated with political support.

McNamara's failing here is more like the XY problem [1]. He incorrectly assumed that a brutal display of military power was the way to political victory and so he measured the wrong value.

[1] - https://en.wikipedia.org/wiki/XY_problem


Drunk person, near streetlight, lost key ...

Of course it is much easier to manage things that you can measure.

That does NOT mean that if you cannot quantitatively measure it, it can't be managed.

It certainly does not mean that if you haven't yet developed an effective metric, it doesn't matter -- it's just mind-blowing that McNamara went that far.

The way to do it is to FIRST figure out what matters.

Then figure out whether it can be measured, and if not, how to qualitatively assess it. Then also figure out how to ensure that the metrics don't become the goal (perhaps the sole exception being sports, where the goal is winning races/games, so that's a pretty good metric in a suite of metrics).


This isn't a great article IMO (I say as someone who is a big fan of qualitative user research, mixed methods, etc).

Per the first example, many types of attitudinal data can be quantified and conflating attitudinal data with qualitative data is itself a fallacy. It's possible that there were quantitative attitudinal signals that could have been captured or created as inputs to a more accurate model.

Per the second example, this is more a question of data validity than the metric itself. If the metric could be validated through better design and gamification prevented then it would likely still be a helpful indicator. Granted this is a very hard problem.


My thoughts, too. It's not that quantitative data is bad, he just picked the wrong one.


That's the rub, ain't it? How do you know which is the right one beforehand?


That's a good point, you don't always know, so your quantitative data starts with qualitative (you decide what to measure).

But for the common cases you can follow the industry-standard measurements. They also learned from the war question mentioned in the article; now they measure public support, too.


Maybe a better way of thinking about it is that a process that gathers numbers can be useful, but understanding the context in which the numbers were gathered (methodology and whatever else is happening at the time) is more important than the numbers themselves. Without context, you don't know what you have.

For example, surveys gather numbers, but if you don't understand how the people who answered the questions interpreted them, you don't know what the numbers mean. Asking people what they really meant by their answer is only possible for the people doing the survey.


I just wrote about this (really briefly) on LinkedIn a week or so ago, because I've seen a growing trend towards "management by spreadsheet" and I think it's an enormous error in terms of team leadership/management. Much as jscode says in the comments: "The moral of the story is that people with analyst mindsets play an essential role in our economy, but sometimes giving those people power over large organizations can have disastrous consequences. There truly is a disconnect between measurement and understanding."


TL;DR: quantitative analysis is necessary, but not sufficient, for success.


A different view of the problem:

https://www.nngroup.com/articles/campbells-law/


> What McNamara didn’t keep track of was the narrative of the war, the meaning that it had both within the military forces of each side, but also in the civilian populations of the nations involved.

I'm probably not following, and I'm not saying it's easy, but aren't there some metrics you could track that with? How was it determined/measured later that the meaning was important? Couldn't you at least run polls in your own country?


But that’s the point - it’s not just how many, but what they are willing and able to do, their positional and operational strengths and weaknesses, and so on. Condensing all that into a number is impossible.


These military positions require "not understanding" as a prerequisite. Do you think you'd be put in charge of something like the war in Vietnam, if you didn't tell the president you could win the war for them? You'd just have been sidelined or fired if you gave a realistic opinion.

Which is not unique to the military, of course. Have fun letting the CEO know their sales goals are unobtainable.


"In war, whichever side may call itself the victor, there are no winners, but all are losers,"- Neville Chamberlain.

This is a really interesting look at this type of mindset. I have often wondered what it is to "win" a war, and the quote that came to mind was the one I posted above. Measurement is certainly not understanding in all situations.


I think Clausewitz's saying "War is a mere continuation of politics by other means" was pretty close to the mark. Does anyone ever "win" politics? You have upsides and downs, and some things get pretty well settled, but genuine victories are vanishingly rare, whereas bitter-tasting compromises seem to come about even when one side seems to hold all the cards.


"The moral is to the physical as three to one." - Napoleon I


Or, put slightly more succinctly: "you get what you measure"


Tough to share a last name with this guy.


Even stranger if it was the middle name too


At this point "fallacy" just strikes the same note to me as "misinformation". It's mostly used as a political device to get ideas you want to be true into social consensus but the reality is usually more ambiguous.


Can someone tell Microsoft this?


From a game-theory POV, all wars are caused by misunderstandings / partial information.

If each side really knew the true strength of the other side, it would be clear which would win and therefore actual war is illogical and unnecessary.
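The claim above can be formalized as a toy bargaining model (all numbers hypothetical, not from the thread): if both sides share an accurate estimate of who would win, and fighting is costly, there is always a range of peaceful settlements that both prefer to actual war.

```python
# Toy bargaining model of war (all payoffs hypothetical).
# A prize worth 1 is contested. p is the (shared, accurate) probability
# that side A wins a war; c_a and c_b are each side's cost of fighting.

def bargaining_range(p, c_a, c_b):
    """Interval of peaceful divisions x (A's share of the prize) that
    both sides prefer to fighting, under perfect shared information."""
    # A's expected war payoff is p - c_a, so A accepts any x >= p - c_a.
    # B's expected war payoff is (1 - p) - c_b, so B accepts any x <= p + c_b.
    lo, hi = p - c_a, p + c_b
    return (lo, hi) if lo <= hi else None

# With any positive fighting costs the interval is non-empty,
# so a deal both sides prefer to war always exists on paper.
lo, hi = bargaining_range(p=0.7, c_a=0.1, c_b=0.1)
assert lo < hi
```

Note this sketch only formalizes the "imperfect information" cause of war; the replies below point at exactly the things it deliberately ignores (third-party beneficiaries, reputation in repeated play, asymmetric costs).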


That's a fallacious line of reasoning on several fronts.

A) The beneficiaries of a war need not be the ones who pay the cost. If you personally stand to benefit from the war itself, regardless of win or lose, you want the war.

B) It's applying a rational economics approach when it's pretty clear that humans aren't rational. Japan kept fighting even when it was clear they would lose. Pride, image, your future after the war. All things that impact the decision that gets made.

C) War isn't typically total like it was in WWII. Most wars are just skirmishes or prolonged skirmishes that don't meaningfully change borders. In such a scenario, a single "war" loss may still be an important fight to have in the middle of the overall conflict (i.e. to signal to your opponent that you're not a pushover and that continuing on their current path has consequences).

D) Win/lose isn't the only outcome of war. Lose/lose is also another scenario. Additionally, you might lose the war but winning may not be the goal. For example, maybe it only takes me 1/10th the resources for me to wage the war that it takes for you to win. A single war might cost me 100 units, but it's important for me to be able to cause you to spend 1000 units because then I have a 900 unit advantage in some other area.

There's many more, but I don't think imperfect information is the only reason war exists.


I really don't think the GP is wrong here. Yes, there were ideologues, but there were plenty in the Japanese high command who understood logistics and understood that war with the U.S. would end the Japanese empire. It was the same way with Germany. The people who understood logistics knew it would be absolutely impossible to win a two-front war, and nearly impossible to win a one-front war, with the U.S. having a thumb on the scales through lend-lease alone. Germany sent their best generals to the east and they knew they had no hope but to delay the inevitable, and no alternative but to listen to Hitler. They continued to fight because to deny Hitler's madness often meant less favorable outcomes for these commanders than making peace with the Allies in 1945 would have. The war in Japan ended when it became impossible for the emperor to continue with the cognitive dissonance and ignore the fact that the fight had ended years before it even began. Logistics have won every war there ever was.


> Germany sent their best generals to the east and they knew they had no hope but to delay the inevitable, and no alternative but to listen to Hitler. They continued to fight because to deny Hitler's madness often meant less than favorable outcomes for these commanders

That's not entirely true, though. Modern WW2 historiography acknowledges that this vision of WW2 - that "Hitler was mad and his generals were afraid to contradict him" - is mostly the version the surviving German generals gave in their self-serving memoirs, as a way to save face and paint themselves as more useful to the West during the Cold War.

More recent assessments contradict this version and show many German generals were enthusiastic about war with the Soviet Union, they fully expected to win, and in many cases goaded Hitler into making fatal mistakes he himself wasn't as sure about (an example: Operation Citadel, the battle of Kursk). The ones who knew the enterprise was doomed? The logistics guys, who did the calculations and predicted when the Wehrmacht would overextend itself and run out of supplies -- not the generals!

Professor Jonathan House, an American military historian specializing on WW2's Eastern Front, has a whole lecture about this, available on YouTube.


(1) Externalities are handled in the game theory literature

(2) Preference definitions are not in scope for most game theory lit; what you're describing are preferences and expected utility, which ARE in the literature and metamodel

(3) This is not really true. It depends on how big you are and how big your coalition is. Consider that Saddam Hussein is no longer in charge of Iraq.

(4) Yes, potential outcomes are considered under game theory.

I get your point that OP's statement feels reductionist, but I wanted to explain that the OP is actually correct.


I took OPs statement at face value:

> If each side really knew the true strength of the other side, it would be clear which would win and therefore actual war is illogical and unnecessary.

No, knowing the true strength is insufficient. I laid out other kinds of pieces of information that also influence the war being logical and necessary as well as the war being illogical and unnecessary and still happening.

Now, you could strengthen the argument and say "what OP meant was that if we had all the information needed to build a model to accurately predict the exact outcome to a high degree of confidence". To me that's the same as the "god of the gaps" argument though because at the most reduced state you end up with the classical position of "well, if I had the state of every particle in the universe I could predict what would happen". The simulation aspect is physically impossible as far as we know according to [1] because the amount of compute needed would be the universe itself. So now you have to start building a model that trades off error and complexity so that you could actually make a decision before the heat death of the universe.

[1] https://arxiv.org/abs/1704.03880


You didn't work at Rand in the 1960s, did you? :-)

Anyway, the OP may be correct, but is also completely useless since it is impossible to know your own true strength, much less that of the other side.


LOL, no, but, reading the research of that period, it would have been fun!

Bell labs in the 1970s and 1980s as well.

It's not totally useless; the game theory framework is still helpful if you define your expected ROI (and your competitors'). It is decision and strategy under uncertainty, of course, but it's still a useful decision approach.


> If each side really knew the true strength of the other side, it would be clear which would win

This is too simplistic. Often wars are heavily influenced by things other than just the strengths of both sides. For example - What if the ground hadn't been soaked from days of rain at Agincourt and Waterloo? What if the Germans hadn't held back their armor reinforcements for so long on D-Day?


They'd still lose. Wars are won not by individual battles but by logistics. Modern militaries don't even engage if there isn't a lopsided power imbalance favoring their success.


So the rational decision is, if you are attacked then you should surrender immediately?


If you find yourself in a fair fight, it's time to re-think your strategy.


It only takes one side to engage.


Look at the orders-of-magnitude differences in casualties in Desert Storm, Iraq, and Afghanistan. When Americans battle, they ensure they have a massive power advantage for every fight, and this is abundantly clear just from the casualty data. Even if the other side engages first, as in Desert Storm, Americans ensure that they don't land boots unless they have a tactical advantage - which they had in Desert Storm thanks to technology that Saddam and his army, one of the largest and most experienced land forces in the world at the time, could not begin to compete with.


Yeah like in Black Hawk Down.


In real life, the Battle of Mogadishu saw Americans with 10x fewer casualties. Most modern battles have numbers like this, with an order of magnitude fewer casualties on the American side, because American logistics planners try to ensure a massive power advantage.


And yet...


And yet what? Are you not convinced by the casualty data?


I don't seem to recall Mogadishu being any kind of "victory". It's almost like the casualty data didn't tell the whole story.


Is that true? What if the situation is that I am currently stronger than you but can expect to become weaker in the future? Why not force a war now if I can expect to win?

I still haven't finished it (too depressing) but my reading of the book The Sleepwalkers about the events leading up to WWI is that all the parties thought they were in that place; even if they all had a full picture it's certainly possible those beliefs would in fact be true.


If I know that you are currently stronger and would ultimately win a war, I should logically concede now since I will lose anyway.

If I know that you are currently stronger but we both also know that I can draw out a war such that I eventually become stronger, then you shouldn't go to war since you will ultimately lose.


> If I know that you are currently stronger and would ultimately win a war, I should logically concede now since I will lose anyway.

You'll often lose less if you fight to a loss.

In fact, you often won't have to fight at all if you make it clear to your adversary that you will fight to a loss.

If I'm a street thug, I'm not going to try to mug someone who clearly communicates that their response to a mugging is detonating their suicide vest. I will, however, repeatedly victimize someone who clearly communicates that they are a pushover.

This is an iterative game, against the same players. Loss aversion is not a winning strategy to pursue in the long-term. Something resembling tit-for-tat is.


This seems untrue.

There are also genuine uncertainties about the future: things that neither side knows but will impact the outcome. Because of that, there is always a range of outcomes for a conflict, and so both sides would still have an incentive to carry out the war.

Consider, for instance, if there are two countries (A and B) looking at conflict with perfect information. Given their relative strengths, the war will cost both $1 billion, but A will win and reap $1 billion + $1 in plunder. In your model, B will always surrender and give A $1, but the reality is there is a big spread about both the costs and the gains. Lots depends on how well each side fights and responds, even with perfect knowledge of the other.

So...you know, if it's the US versus...the Philippines[1], then sure, the uncertain range of outcomes is small enough that the rational thing for the Philippines to do is surrender. But countries that are even reasonably well-matched would want to fight with the goal of making the conflict unattractive for the other side. Probably the best historical example of this is the USSR in Afghanistan.

[1] I am sure the US would never invade and occupy them just because the US found it convenient ;)


> Consider, for instance, if there are two countries (A and B) looking at conflict with perfect information. Given their relative strengths, the war will cost both $1 billion, but A will win and reap $1 billion + $1 in plunder. In your model, B will always surrender and give A $1

In the perfect information scenario, B will surrender more than $1. B is better off at any tribute level up to $1,000,000,000.


Sure, I didn't go far into detail, but it's actually worse than either of us has said because once B surrenders anything ($1 or $1,000,000,000), they are at even greater disadvantage and "logically" should surrender more money. In fact, they should surrender the entire country, as once they are at a disadvantage A can keep coming back forever.

Effectively, the idea that war would not happen with perfect information suggests that the logical thing to do is for all countries to surrender control to the strongest country immediately.
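Plugging in the thread's hypothetical numbers ($1B war cost per side, $1 of plunder) makes B's choice, and the ratchet, explicit:

```python
# Hypothetical payoffs from the example above: war costs each side $1B,
# B loses the war, and A takes $1 in plunder on top of that.
WAR_COST = 1_000_000_000
PLUNDER = 1

def b_payoff_fight():
    # Fight and lose: pay the war cost, then hand over the plunder anyway.
    return -(WAR_COST + PLUNDER)

def b_payoff_tribute(tribute):
    # Concede: pay the tribute, skip the war cost entirely.
    return -tribute

# B is better off paying any tribute up to the full $1B war cost...
assert b_payoff_tribute(1_000_000_000) > b_payoff_fight()
# ...and that's the ratchet: each concession weakens B and resets the
# baseline, so under "perfect information" the logical endpoint is
# total capitulation, which is not what we observe in practice.
```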


The norm is that you pay regularly occurring tribute until such time as you feel you probably don't have to anymore. But there's no norm of surrendering control; that's a conquest, not a tribute.


There is a lot to disagree with here. First, the article itself provides an example of an army with a superior fighting force that had far fewer casualties than the other side, yet still lost the war. That was due to the civilians back at home lacking the stomach for a long, protracted war abroad. So even if one side has superior strength, and even if the side with the superior strength wins every battle, they might still lose the war.

Another issue is your implicit assumption that the side with the superior fighting force will always win the war, but I don't think you can make that assumption. Just like the better football team can lose to the underdog, there is a stochastic element to warfare. A sudden bit of bad weather can turn the tide of a battle. There are countless other elements that are unobservable and unpredictable that can decide which side wins.

There are also asymmetric payoffs to going to war. A nation might have a slim chance of victory, but the cost of defeat or surrender might be genocide or subjection. How do you assign a payoff to the choice "surrender" when the outcome is the destruction of your nation as it once existed?



Wait a minute. Shouldn't the goal of the defender be to make it clear to the attacker that they would incur losses that outweigh possible wins for the attacker, thereby making the attack a lose-lose scenario? This is only possible if the defender makes it credible to the potential attacker that they will defend their country even when the costs would be extremely high and they cannot possibly win the conflict. In other words, at least from deterrence point of view it's not about winning, it's about making sure the attacker would overall lose more than gain (which is not the same as winning against the attacker).


I'll add that there are a lot of people who see death as the door to paradise and fighting for their cause as the key to that door. No amount of logic or information is going to stop them.

When you're dealing with the mentally ill, all bets are off.


Does this have any QM implications?



