Leverage Points: Places to Intervene in a System (donellameadows.org)
62 points by openfuture on March 26, 2017 | 11 comments



I've recently started reading up on systems theory and cybernetic systems, and contrary to what the names suggest, learning more about the theory does not make you better at designing systems. You become much more attuned to how large, complex systems fail, but there is no crank you can turn that will produce insights and then let you implement changes leading to positive progress.

No matter what you do, at the end of the day you still have to convince people that what they're doing is probably wrong, and no one ever wants to hear that. Cognitive dissonance and sunk-cost thinking almost always trump any kind of analysis, and the system continues to operate as it always has until a large enough shock shakes it and changes are made. Most often the changes come too late and great human misery is the result. The current climate crisis is one prominent example that comes to mind. Uber and Zenefits are the other examples I can think of where earlier interventions would have helped.


I've been learning about systems theory for about five years now, and I've been frustrated by the same things, but I've come through them to something which I think provides a path forward.

I think changes which lead to positive progress are hard, and the bigger the problem or the higher up Meadows's leverage list, the harder the changes are. Cognitive dissonance, sunk-cost thinking, and other cognitive biases increase the challenge as well. But I do see examples where I and others were able to make positive changes from our understanding of systems theory. I see making these changes as a marathon, not a sprint, so I try to adopt an attitude of celebrating every step of the way ("dayenu"), and I also look to work like Nancy Leveson's[0], which tries to systematize the process of turning a systems understanding into actionable feedback.

It's often perceived that earlier interventions would have led to fewer losses, but it's important to accept that the past is past; all we can do to move forward is change our behavior now. As the man says, the best time to plant a tree is twenty years ago, but the second-best time is today.

[0]: https://mitpress.mit.edu/books/engineering-safer-world


I have that book. It is where I learned the term "socio-technical system". It is a great book and I recommend it to people whenever I get the chance. It is also much more than just safety engineering, and it is one of the clearest expositions of cybernetic systems that I've read.


Like the sibling comment says, it's a marathon. Especially because the strongest leverage points require people to change their way of thinking, which requires first gaining their trust and then feeding them quality information.

This book[1] is an attempt to apply systems theory to our political system. I recommend it as a concise explanation of a perspective that highlights the faults in our current way of thinking. What is needed is a common reference system for people to have productive political discussions, and I feel systems theory provides that, if only the information were more widespread.

[1]: http://www.oftwominds.com/ARBW.html


You can't control a system you don't have control over; you can only influence it. If a company wants me to fix a technical problem with their system, I can't actually fix it unless I have the relevant access and control, such as SSH keys and usernames/passwords for web interfaces. I can suggest fixes, but I can't implement them.

Plenty of things are simply too big for one person to do.


Homeostasis (of various kinds, but certainly risk homeostasis) is a huge problem for humanity, one that becomes more dangerous with increased technological capability and synchronization. Systems theory doesn't solve it, true. I hope more than just you and I are worried about it.


One of the more interesting theory-driven rabbit holes I've dug into was the bullwhip effect in inventory management, a failure mode popularized by Jay Forrester, who was Donella Meadows's mentor and colleague. One of the interventions that can minimize susceptibility to the bullwhip effect is to decrease information flow latency. In other words, faster information about demand fluctuations improves the ability to respond to them. This type of intervention is ranked #5 by Donella in this article.

The problem with that intervention is what happens after you've improved your information flows. Faster information about demand leads the profit-seeking enterprise toward tighter inventory tolerances. They eliminate "extra" safety stock that is no longer needed given their faster information flows. And it works out phenomenally well... for a while. But strong demand fluctuations can still appear, and without equivalent improvements in response mechanisms, the risks become more fat-tailed: failures become less common but much worse in severity. In the context of this article, I'm claiming that a leverage point ranked #1 (the profit incentive) overpowered a leverage point ranked #5, blunting its benefits.
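To make the latency effect concrete, here's a toy Python sketch of the bullwhip effect. All of it is my own illustrative assumption (the demand process, delays, and order-up-to policy are not from the article or from any particular paper): a single supply-chain stage keeps a base-stock level that must cover demand over its full lead time, i.e. the shipping delay plus the delay before demand data reaches it, so longer information latency amplifies the variance of its orders.

    # Toy bullwhip-effect sketch; all parameters are assumed for illustration.
    import random

    def simulate(info_delay, ship_delay=2, periods=500, alpha=0.3, seed=42):
        """Return Var(orders) / Var(demand) for one supply-chain stage."""
        random.seed(seed)
        lead = ship_delay + info_delay        # effective lead time
        smoothed = 10.0                       # exponentially smoothed forecast
        pipeline = [10.0] * ship_delay        # shipments already in transit
        inventory = smoothed * (lead + 1)
        demands, orders = [], []
        for _ in range(periods):
            d = max(0.0, random.gauss(10, 2))
            demands.append(d)
            inventory += pipeline.pop(0) - d  # receive oldest shipment, ship demand
            # Demand data arrives info_delay periods late.
            seen = demands[-1 - info_delay] if len(demands) > info_delay else smoothed
            smoothed += alpha * (seen - smoothed)
            # Order up to a base-stock level covering the whole lead time.
            base_stock = smoothed * (lead + 1)
            position = inventory + sum(pipeline)
            order = max(0.0, base_stock - position)
            pipeline.append(order)
            orders.append(order)

        def var(xs):
            m = sum(xs) / len(xs)
            return sum((x - m) ** 2 for x in xs) / len(xs)

        return var(orders) / var(demands)

    for delay in (0, 2, 5):
        print(f"info delay {delay}: Var(orders)/Var(demand) = {simulate(delay):.2f}")

Shrinking info_delay here plays the role of the #5 intervention; the sketch deliberately does not model the second-order move of cutting safety stock afterwards, which is where the fat tails come in.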

The topic du jour on HN seems to be self-driving cars. One of the many claims is their ability to improve traffic, and another is their ability to improve safety. Interestingly, a systems model of these claims would likely show situations suspiciously similar to a bullwhip effect. In other words, it is a stateful model with latent information flows (roadway conditions), physical responses to those flows (brakes), and buffers (space between vehicles) which protect against failure due to information latency and limited reaction capability.

Self-driving cars can improve on baseline reaction times to changing road conditions. They have more and better sensors and well-known algorithms for detecting dangerous situations; this isn't speculative at this point, as we already have some proof of it [0]. The question becomes: what do we do with that improved information flow? Do we tighten buffer tolerances? If so, we improve roadway capacity the majority of the time and possibly still reduce the risk of accidents... but what happens to accident severity? (A rough sketch of the trade-off follows the footnote.) Maybe traffic throughput isn't the be-all objective that we want it to be, and we should be content to let that information-flow improvement result in increased safety and traffic resilience instead.

[0] http://www.nbcnews.com/tech/tech-news/tesla-autopilot-begins...
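Here's the back-of-the-envelope sketch of the buffer question. All numbers (speed, deceleration, reaction times, headways) are assumed for illustration, not sourced: it computes the impact speed against an obstacle that stops faster than the following model assumes, for a human-scale buffer, an automated car keeping that same buffer, and an automated car that spends its faster reactions on a tighter gap.

    # Kinematic toy model: impact speed when the vehicle ahead stops
    # "instantly" (e.g. hits stalled debris). All numbers are assumed.
    import math

    SPEED = 30.0   # m/s, roughly highway speed
    DECEL = 8.0    # m/s^2, hard braking

    def impact_speed(gap, reaction_time, speed=SPEED, decel=DECEL):
        """Impact speed in m/s against an instantly stopped obstacle
        `gap` metres ahead; 0.0 means the follower stops in time."""
        travelled_blind = speed * reaction_time   # distance covered before braking
        braking_room = gap - travelled_blind
        if braking_room <= 0:
            return speed                          # never even starts braking
        v2 = speed ** 2 - 2 * decel * braking_room
        return math.sqrt(v2) if v2 > 0 else 0.0

    for label, t_react, headway_s in [
        ("human, 2.0 s gap",     1.5, 2.0),
        ("automated, 2.0 s gap", 0.2, 2.0),  # keep the old buffer
        ("automated, 0.4 s gap", 0.2, 0.4),  # spend the gain on throughput
    ]:
        gap = SPEED * headway_s
        print(f"{label}: impact at {impact_speed(gap, t_react):.1f} m/s")

Keeping the old buffer turns the faster reactions into a large severity reduction (roughly 6 m/s impact versus 26 m/s for the human); shrinking the buffer to match the reaction-time gain buys throughput but puts the rare failure back near full speed, which is exactly the fat-tail trade-off above.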


Society has a risk homeostasis, which partially compensates for improved safety by taking more risks. Seat belts and air bags both resulted (over generational time spans) in higher driving speeds. So the argument goes that eventually self-driving cars will drive faster and closer together, and overall traffic deaths will remain constant.

The difference with self-driving cars is that the policies affecting risk (how fast, how close) will be decided by vehicle makers, who will be very risk-averse. Risk homeostasis is mainly driven by individuals thoughtlessly taking risks. But when enterprises make deliberate decisions, like they do in commercial air travel, safety can be improved indefinitely.


That, or the proliferation of ever larger numbers of large aircraft has kept us in the usual homeostatic situation, but with an ever-growing pool, so that the lives sacrificed in order to restore full attention to safety go much further, spread over many more aircraft. Only once you've hit max airliners can you judge whether homeostasis has been eluded. Until then, increasing safety is most likely just one more consequence of increased economic scale (and concomitant efficiency), rather than an independent change. Same with the price of the tires.


I'd agree that, at least in this case, the manufacturers of self-driving cars would have an incentive to maintain buffers: they'd be on the hook for lawsuits.

But they aren't the only players in the system. Traffic is a political problem, and how politicians react to traffic issues regularly becomes a campaign issue. I have no problem imagining a world where a politician campaigns on requiring self-driving car manufacturers to standardize on rules that reduce traffic.


There's a Wikipedia page on this that provides a nice overview: https://en.wikipedia.org/wiki/Twelve_leverage_points



