I recommend my book, "ReModel: Create mental models to improve your life and lead simply and effectively" https://www.amazon.com/dp/B00F2RS1G6, which gives you an exercise for writing out your mental models so you can both learn from experience how models work and identify the main models you use in your life. There's a free excerpt here: http://joshuaspodek.com/wp-content/uploads/2013/10/ReModel_b...
>You told the Guardian that without meditation, you'd still be researching medieval military history — but not the Neanderthals or cyborgs. What changes has meditation brought to your work as a historian?
> ... the entire exercise ... is to learn the difference between fiction and reality, what is real and what is just stories that we invent and construct in our own minds. Almost 99 percent you realize is just stories in our minds. This is also true of history. Most people, they just get overwhelmed by the religious stories, by the nationalist stories, by the economic stories of the day, and they take these stories to be the reality.
To understand meditation, one has to understand how our brain works. Once you fully understand the brain, it's a no-brainer to meditate regularly. If you are game, I recommend reading The Happiness Hypothesis by Jonathan Haidt.
> To understand meditation, one has to understand how our brain works.
I'm fairly sure we're still decades if not centuries away from this. We barely understand the broadest strokes of the brain, and can only really treat its deficiencies by bathing the entire thing in chemicals.
I didn't mean a comprehensive understanding, but there are things we can understand, such as left vs. right brain, the role of our limbic system, the role of emotion in reason and decision making, etc. These concepts give us a window into how our brains work and why we need to train them using techniques such as meditation. Sorry, I was cryptic in my earlier comment.
+ minority rule
+ signalling theory
+ hypergamy
+ purity spirals
+ sexual selection
+ iceberg principle
+ principal component
+ emperor has no clothes (shared vs common knowledge)
+ Hajnal line
+ trapdoor function
+ invisible hand
+ information hiding (encapsulation)
+ loose coupling
+ iatrogenics
+ ergodicity
+ entropy
+ unbundling (see pmarca’s tweets)
I wonder what could be added to this list from models explained in Eliezer Yudkowsky's Rationality: From AI to Zombies book (which is available online: https://intelligence.org/rationality-ai-zombies/)
PS: Created an OPML file for 113 models with explanations:
For example, the section on power law says "The central limit theorem does not apply and there is thus no 'average' earthquake. This is true of all power-law distributions." But power laws with power -k have a well-defined mean if k>2, and finite variance if k>3.
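A quick way to see this (a minimal NumPy sketch of my own; the distribution, seed and sample sizes are arbitrary): the running mean of power-law samples settles down when k > 2 and keeps drifting when it doesn't.

    import numpy as np

    rng = np.random.default_rng(0)

    def running_means(k, sizes=(10**3, 10**4, 10**5, 10**6)):
        a = k - 1                     # NumPy's pareto(a) has density ~ x^-(a+1)
        x = rng.pareto(a, max(sizes))
        return [round(x[:n].mean(), 2) for n in sizes]

    print(running_means(2.5))  # k > 2: sample means settle near 1/(a-1) = 2
    print(running_means(1.8))  # k <= 2: sample means keep growing with n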
The section on feedback loops implies that runaway is only possible with positive feedback, and that loops with negative feedback are stable.
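To illustrate (a toy simulation of my own; the gain and delay values are just illustrative): a purely negative feedback loop can still run away once there is a delay in the loop.

    # error dynamics of a negative feedback controller: e[t+1] = e[t] - gain * e[t - delay]
    def simulate(gain, delay, steps=60):
        e = [1.0] * (delay + 1)          # start slightly off target
        for _ in range(steps):
            e.append(e[-1] - gain * e[-1 - delay])
        return e

    print(max(abs(v) for v in simulate(0.7, delay=0)))  # no delay: error just shrinks, max stays 1.0
    print(max(abs(v) for v in simulate(0.7, delay=2)))  # same gain with delay: oscillation grows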
Other models are mutually exclusive: the tendency to stereotype is listed as bad, but the Bayesian method as good. Clearly both can't be right, and no rule is given to differentiate between those cases.
> Most rational people.. one of the finest thinkers in the world..
Is this advice coming from a professor/scientist/philosopher? No, it comes from the investors who make the most money. In a neo-liberal society, the richest people get the intellectual authority to tell us how to make intelligent decisions. Incredible! /s
Allow me to change your view. What irks me are sentences like these:
> The great investor and teacher Benjamin Graham..
> Smart people like Charlie Munger..
> Vice Chairman of Berkshire Hathaway and one of the finest thinkers in the world..
There is no mention in the article as to why I should believe they are as smart as the author believes them to be. (Hint: They made a lot of money and their advice seems rational, so they should be smart?). No doubt they are great investors (they made a lot of money), and I would follow their advice in money matters. But I question their credibility as extraordinary thinkers.
Notice how this type of content is marketed and perceived. Specifically, any advice from investors/CEOs/entrepreneurs (or other individuals whose primary experience has been in making money or increasing market value) gets touted as general success mantras (including for your personal life); Principles by Ray Dalio is a recent example. And as I mentioned, I believe this is a product of the neo-liberal ideology that has taken over our society.
Does anyone follow the mental model way of thinking? I am curious how people apply so many models in their daily lives. It is possible to adopt a couple of models like Hanlon's razor, etc., but 100+ seems too many.
I doubt even the author incorporates all 100+ models in their everyday life, but I don't believe that is the point.
These models provide insight into some of the underlying concepts of different domains. Some of the models are generic enough to have a very wide range of applications (ex: Hanlon's razor), while others are a bit too specific (ex: Fat-Tailed Processes).
The common thread across all models is that once you understand them, you have access to a new vocabulary that can be used in almost any domain (ex: the military mental model of Asymmetric Warfare applied to the domain of business; in general, if you are a business person, I suspect you will find almost all of the Military/War mental models extremely useful).
The important point is to understand this metaphorical method of thinking, of how different things can be connected through abstractions, and then generate insights based on that. Once you start doing that, you'll accumulate a lot of different mental models without even realizing it. Almost everyone already uses mental models. Doing it consciously makes you that much better at it. :)
Sure, folks successfully use models/patterns/theories/beliefs/instinct to reduce complex realities into comprehensible problems with tractable paths to solutions.
But I think the enumeration of models in one's "toolbox" is less important than being able to apply them (whatever they are) skillfully.
The OP just sounds like fodder designed to appeal to biz-types who crave and worship snap decision making.
"A mental model is an explanation of someone's thought process about how something works in the real world. It is a representation of the surrounding world, the relationships between its various parts and a person's intuitive perception about his or her own acts and their consequences" this is from wikipedia https://en.wikipedia.org/wiki/Mental_model - but I remember also similar definitions from other sources.
I would say all thinking is about building a mental model - it is what lets you predict how something will behave in a particular circumstance. Some mental models are not very reusable, some are applicable in many places - and it seems useful to collect the more universal models.
Some find it easier to apply others' mental models. Others find it easier (and more fun) to develop their own. The latter is as easy as "what principles can be extracted from what I'm learning?" and is easier on recall because the models come from one's own experience. In this way you can say we use thousands of mental models daily. In the former case--referencing others' models--those who do this tend to be adept at prioritization. They remember the models because the application is important to them. Original/own-model makers do not typically prioritize as intensely and are known to get very deep into theory. They are found all over academia.
I'm not sure I understand what the question means. I never specifically determined that "I shall henceforth follow the mental model way of thinking", but if I know a general principle that may apply in a concrete context, then yes, I will apply it to see what comes out the other side. Given that, I also didn't sit down and learn these. It's just stuff I picked up along the way:
At some point in elementary school, a teacher said that 90% of the energy is lost at each level in the food chain, meaning that if a type of bear started eating whatever the fish eat instead of the fish itself, that type of bear would suddenly have 10× more food available. For some reason that stuck with me, and it can be applied in all sorts of systems where there is friction at each layer further away from the source.
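In toy numbers (purely illustrative, just to show the arithmetic):

    producers = 1_000_000           # arbitrary energy units at the bottom of the chain
    fish_food = producers * 0.10    # each level passes on roughly 10%
    fish = fish_food * 0.10
    print(fish_food / fish)         # 10.0: eating one level lower gives ~10x the energy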
Similarly, I have throughout life noticed that things that have been around for a long time tend to stick around, and things that explode in popularity are likely to die off just as suddenly within a short period of time. I mentally internalised this rule of thumb as "I won't predict the demise of any technology sooner in the future than the time it has already existed for." I learned from this list that I'm not alone in this observation – it's called the Lindy effect!
The one labeled Via Negativa in the submission I learned through the lyrics of a song, which went something along the lines of, "You can't take the kid from the fight, so take the fight from the kid." At first I just thought that was a funny play on words, but then I encountered a situation where someone threatened to expose one of my actions which I knew would have been received negatively at the time; I realised there was no way I could prevent them from doing so, but I could work preemptively and shape the situation in which they would do so such that they would not be believed. In the end, that preemptive work disincentivised them to the point where my action was not exposed. (It sounds like I murdered someone, but it was nothing like that. I made a judgment call, and I believe that if I could round up the affected people today and tell them, they would be grateful for it. It's just that at that point there was insufficient information and I couldn't explain the reasoning leading to my decision.)
Multiple models can be combined with success too. My fiancée was worried about her older relatives who have – fortunately – been spared many of the ailments that come with age. She reasoned that, "They have been doing so well that one of them is bound to become really ill any time now." I tried to argue that she would be right – if we knew the stochastic variable that represents their rate of illness. But we don't. And the fact that they're doing well? It's evidence that their rate of illness is lower than we expected, nothing else. (Not to mention that they're not independent draws – since they're relatives, they either share genetic material or chose to spend the rest of their lives together. They probably have more in common than we think, and that may very well include a great resistance to illness.)
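Here is the Bayesian version of that argument as a tiny sketch (the Beta prior and its numbers are mine, purely illustrative):

    # unknown p = chance that a relative falls seriously ill in a given year
    a, b = 2.0, 8.0                         # assumed Beta(2, 8) prior, mean 0.20
    for year in range(1, 11):
        b += 1                              # another year observed with no illness
        print(year, round(a / (a + b), 3))
    # the posterior mean falls from 0.18 toward 0.10: healthy years are evidence
    # for a lower illness rate, not for "they're due for one"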
If you are observant and always, at all times, constantly, try to figure out the fundamental rules that underlie a phenomenon, you'll just learn these models as you go along.
I don't have time to dive deeply into the full list right now, but just based on a smaller sample (n=43) I have multiple times observed and acknowledged about 95% of them in existing systems, and I routinely apply 80% of them.
Love this, but I am irked by how incomplete these models seem. "First Principles", "Probabilistic Thinking": both seek to formalize causality in events.
I would speculate that Buffett, Munger, et al. don't actually bother much with cause and effect when entering large positions. They look at trends, price action, cash flow and other mathematical micro-economic indicators, as well as a bias toward the long-term survivability of betting on America as a macro whole.
Michael Abrash, now Oculus Chief Scientist, has this great definition of "Zen Optimizations" in his famous graphics programming black books. There is the way a thing is advertised to work. There is the way you believe it to work. And then there is the way it actually works!
In psychology there's this concept called body mirroring. Monkeys do it, as do humans. It's the idea that the subordinate ape tends to mirror the posture of the dominant one. Mental models are obviously a useful tool, but I always found it funny that people who write posts about them keep reusing the phrase "latticework", as if it had any useful meaning, instead of just removing it to make the content more concise. I think of it as a linguistic version of body mirroring.
I disagree with the idea that 'latticework' has no useful meaning. Here's the way Munger uses the term:
> You've got to have models in your head. And you've got to array your experience both vicarious and direct on this latticework of models. You may have noticed students who just try to remember and pound back what is remembered. Well, they fail in school and in life. You've got to hang experience on a latticework of models in your head.
To me, a 'latticework of models' onto which one hooks examples and personal experience is a rather neat analogy.
There is an old joke about a President looking for a one-handed economist, because in economic forecasting the mental models are of the type: on the one hand, ... on the other hand, ... Polya's How to Solve It lists many ways to develop useful mathematical analogies, but it seems the end result is not very satisfactory. I think that to make an intelligent decision you have to feel deeply involved in the subject; then some kind of unconscious force does wonders. You can benefit from learning or focusing on certain patterns, but I think that depth is better than breadth. There is also the theory of the hedgehog versus the fox: https://en.wikipedia.org/wiki/The_Hedgehog_and_the_Fox
This is great! I try to keep track of such lists, and this is the best I've seen so far.
I put up a MediaWiki instance five years ago (wikilogic.org) to publish my own list, but didn't follow through. Interestingly, I put an FS link in the bottom right :)
A couple of tweaks I would make:
- Putting 'Inversion' at the top -- I think it's #1 in terms of underuse.
- Aside from Occam, Bayes, Pareto, Dunbar, and Pavlov, I would drop all models named after people. The chances of someone remembering something are much better if they don't need to learn a name in conjunction! (e.g. see the other comment about the difficulty of learning "Hanlon's Razor")
To take this material to the next level, I imagine the next step is to build case studies.
Naming things is hard. To drop the person's name from the model you'd ideally have a replacement that is both concise and expressive; a concise name allows us to effectively talk about the model itself, which is also important. Often, where such a phrase exists it's already used as an alternative:
* Occam's Razor: Law of Parsimony
* Pareto Principle: 80/20 Rule
* Hanlon's Razor: arguably an extension of the Principle of Charity
I do agree that moving away from the people names would be a benefit if (and maybe only if) we can find a similarly concise replacement.
I like the idea that mental models are the most condensed form of knowledge. But I don't quite agree with the premise that there is some universal list that will be equally useful for everyone. Investors like Charlie Munger need very broad knowledge - because they need to analyse all kinds of businesses - but for other people a list concentrating more on their field of work would be more useful.
His newsletter is actually the only newsletter I get weekly, and I skim it regularly.
I agree with other commenters that sometimes the material is weak, more like "Do these 5 things that successful people (bankers, investors, entrepreneurs) do and also be successful", but generally there is quality content on this blog.
Not enough mathematics. A substantial chunk of these are derived by analogy from analytical domains which are kind of on the fringe between pure math and applied math, like game theory and financial theory. There, the models and their primitives being described are part of a more coherent body of knowledge that lets you apply the concept precisely.
I'm going to get a reply that "oh but reading this Farnam Street stuff helps build intuition", and you might very well be right, but this knowledge is only developed to its fullest extent when it's precise and considered deductively. After taking a microecon course, I couldn't stop seeing microecon everywhere for at least a year. Freakonomics wouldn't have been technical enough.
I don't think I agree. At least in most cases. It might be important to learn the math in order for the models to make sense, but once you learn the model you don't need the math. For example, most circuit designers don't use any math (other than simple arithmetic and algebra) when designing circuits because they have heuristics that allow them not to think about the math. In that case, thinking about the math just gets you bogged down in the details.
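A trivial sketch of what that looks like (component values are mine): the rule of thumb "make the load at least 10x the bottom resistor" stands in for redoing the loaded voltage-divider math every time.

    def divider_out(vin, r1, r2, r_load=None):
        # loaded divider: the load sits in parallel with the bottom resistor
        r_bottom = r2 if r_load is None else (r2 * r_load) / (r2 + r_load)
        return vin * r_bottom / (r1 + r_bottom)

    vin, r1, r2, r_load = 5.0, 10e3, 10e3, 100e3   # load obeys the 10x rule
    print(divider_out(vin, r1, r2))                # unloaded: 2.50 V
    print(divider_out(vin, r1, r2, r_load))        # loaded: ~2.38 V, within ~5%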
https://www.defmacro.org/2016/12/22/models.html