To some extent, to succeed in the modern world, you have to override your instincts with a thought-out response to things. Whatever heuristics we've built up over thousands of years are interesting, but some of them don't apply anymore. (If you were a prehistoric human and you encountered a beehive, you'd eat all the honey. But now you can get hundreds of beehives' worth of honey in one trip to the grocery store, and your body's not going to tell you not to eat it. It will tell you the opposite -- this tastes great! You just have to learn that it's bad for your health and choose not to eat it, no matter how good it would taste.)
Basically, the human brain is a neural network trained on very old data. Fortunately, it is also very adaptive and can ignore that training data with some conscious effort. You need to operate your brain in that mode more now than you did 10,000 years ago.
"Strong opinions weakly held" means "argue your point but sometimes you're going to lose." Evolution might not have rewarded losing over the course of millions of years, but whatever meeting you're arguing in doesn't have that kind of staying power. You can lose the argument and the human race will survive another generation. Whatever argument you're having probably doesn't matter in any meaningful sense. It's not life or death.
The author of this post knows what "strong opinions, weakly held" means. They explain it in their own words, and take pains to distinguish the catchphrase understanding and application of it from the genuine application of its principles. The author also presents arguments as to why even the genuine application falls short, which I don't believe you responded to.
(If it seems I'm picking on you: yours is the top post in the comments, and I don't feel like you're engaging with the post beyond a surface level.)
To go further, the author also suggests an alternate technique along the lines of Annie Duke's Thinking in Bets and Tetlock's Superforecasting, where probabilities are assigned to estimates. I had previously encountered such a practice on Scott Alexander's blog and thought it clever: having skin in the game and keeping track of wins and losses.
This whole section of the author's blog is very interesting. It goes into the US Intelligence Community's interest in superforecasters and the Good Judgment Project, a kind of elite tournament for forecasters. It talks about how superforecasters estimate by overcoming cognitive biases and questioning assumptions. It ultimately points out, though, that there are limits to forecasting and that low-probability events occur all the time.
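For concreteness, here's a minimal sketch of what that tracking might look like: log each forecast as a stated probability plus the eventual outcome, then score yourself with the Brier score. The scoring formula is standard; the toy forecasts and names below are made up purely for illustration.

    # Minimal calibration tracker: log forecasts as (probability, outcome)
    # and score them with the Brier score.
    forecasts = [
        (0.8, True),   # "80% sure the migration ships this quarter" -- it did
        (0.6, False),  # "60% sure the vendor hits the deadline" -- it didn't
        (0.9, True),
    ]

    def brier_score(forecasts):
        """Mean squared gap between stated probability and what actually happened."""
        return sum((p - (1.0 if happened else 0.0)) ** 2
                   for p, happened in forecasts) / len(forecasts)

    print(f"Brier score: {brier_score(forecasts):.3f}")  # lower is better

A well-calibrated forecaster trends toward 0, while always shrugging "50/50" scores a flat 0.25, so the log gives you a concrete win/loss record to argue with.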
Bee Facts: You don't get hundreds of beehives' worth of honey from a trip to the store. A hive typically produces multiple liters in a single summer. Last summer, our hive made around 3 liters, and there was enough for everyone in the family to have their own jar. Many apiarists have multiple hives, and a group of 10 hives could fill the grocery's honey section in September.
Bee Facts: Honeybees have a collection radius of around 3 miles, which means one hive collects pollen and nectar from around 28 square miles of land (π × 3² ≈ 28).
I agree, but that doesn't make the "old data" problem go away.
It's worth noticing that in the first part (Paul Saffo forecasting), the problem is all in one head. Saffo recognizes an unintuitive methodology: strong opinions weakly held. He's only really dealing with himself, his intuitions, tactics that work for him.
The second part gets social:
"you’ve decided, along with your boss, to build a particular type of product for a particular subsection of the self-service checkout market."
This is a good example, imo. These decisions are made with imperfect information, and they incorporate a lot of interlocking opinions. Ex ante analysis can only get you so far. You need to make a decision and start rowing to make progress, then change paths when necessary.
Multiple people are involved. Credibility is on the line, among many subtle subroutines of group psychology that we aren't even aware of. The solution from Tetlock, express your opinion as a probability, is a solution to the problem in its group form. When people are in meetings making big decisions, a lot more than the big decisions is going on. Human power dynamics are taking place. Being right can play a very small role in these.
I think the author has a blindspot. "This is not how the human brain works" is one kind of challenge. "This is not how human groups work" is another. It isn't a derivative of the first.
>Multiple people are involved. Credibility is on the line, among many subtle subroutines of group psychology that we aren't even aware of. The solution from Tetlock, express your opinion as a probability, is a solution to the problem in its group form. When people are in meetings making big decisions, a lot more than the big decisions is going on. Human power dynamics are taking place. Being right can play a very small role in these.
Except expressing it in probability form doesn't remove the group dynamics problem; I feel it rather makes it worse. Or rather, you're back to square one and gain little. The probability is no longer your confidence but the number you need to give to satisfy group dynamics: not too high, or you may be punished if the view turns out wrong, and not one that disagrees too much with the people in charge. The original approach implicitly argues that without power dynamics you make better decisions.
Probability really doesn't help in a group context. In most cases you are pulling a number out of your ass, people don't have an intuitive understanding of probability*, and it implies a level of certainty about the likelihood of an outcome that doesn't exist (which leads to bad decisions because you are building on a faulty basis).
*Look at how many people considered Trump's election impossible when the odds clearly indicated he had a non-negligible chance of success.
Maybe a deliberate focus on having some 'out of band' information would help, as an antidote to group-think.
A good example would be the early attitudes toward face masking from the governments, medical experts, and media of certain countries. Some recognition and consideration of the deep experience and expertise of medical experts in Asian countries where SARS occurred could surely have saved many lives.
> "Strong opinions weakly held" means "argue your point but sometimes you're going to lose.
The definition is not about arguing with others (so there’s no “loser”), it’s about arguing with yourself to avoid being wedded to poorly thought-out points of view. From the article (emphasis mine):
> Allow your intuition to guide you to a conclusion, no matter how imperfect — this is the “strong opinion” part. Then – and this is the “weakly held” part – prove yourself wrong.
> The definition is not about arguing with others (so there’s no “loser”), it’s about arguing with yourself to avoid being wedded to poorly thought-out points of view.
That's one definition, but here is an exercise. If you meet someone who identifies with the phrase "strong opinions loosely held," ask yourself whether they're an argumentative asshole.
I have found that in software this is wrong. If you want to succeed in software you need to do what is most popular, which largely follows innate human behavior. Unlike many other professions there is no licensing, accreditation, or common ethic, which means there is no standard baseline of competence.
You can visualize this by thinking about the difference between success and capability and plotting that difference on a graph, such as a bell curve.
> If you want to succeed in software you need to do what is most popular which largely follows innate human behavior
I totally disagree. Many of the most valuable, respected, and successful software engineers I've encountered have evinced deep distrust of what is currently popular in the field. Their avoidance of (and advocacy for continued avoidance of) new/trendy software fads, particularly on the frontend, enabled excellent engineering, and instilled a powerful attitude of pragmatism and productivity among their colleagues.
It's anecdata so take it with a grain, but in organizations big and small I've observed that some of the most successful and capable engineers are distinguished by antipathy towards popularity-driven engineering.
> Their avoidance of (and advocacy for continued avoidance of) new/trendy software fads, particularly on the frontend, enabled excellent engineering
I absolutely agree with that. You are describing an expert, who is not likely to be successful or respected, particularly among their front-end web technology peers. Expertise is not frequently revered by other developers, particularly when compared with compatibility. That distinction is even more evident when applying for jobs elsewhere.
Maybe I've just been lucky with where I've worked, but I've often seen those people get significantly rewarded for their anti-fad efforts--both in terms of money/promotions and respect from their colleagues.
A complete disdain for jQuery is what allowed me, as a full-time JavaScript developer, to get promoted to senior and nearly double my salary. That is only because I was working, at the time, in a niche area where jQuery failed amazingly in production. I was deemed an expert and rewarded for it, but that is astonishingly rare. Now you cannot get hired, much less attain any kind of success, without something like Angular or React, even if you can write superior code without those in half the time. That is why I only look for jobs that primarily deal with writing for Node, but even then competence is not well appreciated when interviewing.
I think it's (as is often the case) a little more nuanced than this.
There's roughly two ways to follow trends "successfully" in the software world:
1. Eager and somewhat blind.
This fits with your comment and follows the kind of innate behaviour we're discussing. And it will bring you some success: you'll always be up on the "cool new thing", as long as you're afforded the freedom to jump from project to project and don't fall into a rut where you're tasked with maintaining the last crappy "cool new thing" you championed that no one wants anymore.
2. Pragmatic skepticism.
You look at network effects and community support as rational, technical pros when weighing tech choices. You're successful because, while you're usually up on the "cool new thing", you approach tech tentatively enough not to fall into the vendor lock-in hole. If you're ever tasked with maintaining some old thing you made, you're less screwed because you never jumped in with both feet.
Of the above two, the first is a much more common success story, largely because the software community loves reinventing the wheel, so most engineers don't fall into that particular maintenance rut. There's no moral hazard with approach #1, but I still wouldn't go as far as to say it's the right way.
Anybody who's done hiring will attest that "formal barriers to entry" (i.e. accreditation via certifications and degrees) say very little about a programmer's ability to produce valuable output. Hiring would not be "the big problem" in tech if one could simply outsource the hiring decision to such a formal barrier to entry. But, since formal barriers to entry are demonstrably useless signals specifically for hiring programmers, we resort to all sorts of other superstitions which, when put together, at least bestow a visceral sense of being able to divine a person's ability to produce valuable output. People and companies all seem to have their own preferred superstitions, almost none of which include your "formal barriers to entry".
> But, since formal barriers to entry are demonstrably useless
Could you provide an example of such a barrier? Bar exams and medical review are extremely valuable to other industries. They demonstrate both competence and ethics.
Since software is utterly lacking anything resembling a formal qualification barrier it’s hard to say a thing isn’t valuable. It doesn’t make sense to disqualify something that was never there in the first place.
Industry ethical standards are also hard to disqualify because they similarly don’t exist. To someone who has never met such a burden for any professional industry, it’s an easy thing to deem worthless. In other industries, compliance with ethical standards is more important than perceptions of competence.
I did mention more than one example in the first sentence of my reply: CS degrees and certifications.
> Since software is utterly lacking anything resembling a formal qualification barrier it's hard to say a thing isn't valuable.
This is circular logic under either of our propositions. For if my statement were true (that "formal barriers to entry [in the context of software, as explicitly mentioned multiple times in the thread] are demonstrably useless"), then such a thing is not valuable as a barrier to entry [in the context of software, as explicitly mentioned multiple times in the thread]. However, since you also assert that "software is utterly lacking anything resembling a formal qualification barrier", and this necessarily includes CS degrees and/or certifications, then that "thing" is equally not valuable as a barrier to entry.
What you've done is conflate your relative moral value judgment (approximately, that there "should be" a formal qualification barrier to entry) with another, separate relative moral value judgment (that existing barriers to entry are not useful, or are so useless as to fail to exist). These are two different propositions. Also, please do not attack the character of your interlocutor: your suggestion that I have "never met such a burden for any professional industry" is not only wrong but also just pretty lazy.
If you believe you have a way of generating a useful hiring signal that beats doing a work sample, I'm sure any number of people around here would throw money at you to build a company around it.
You inherited the DNA which built your brain from your parents. I believe they are referring to the historical conditions which created the selection pressures under which your parents’ ancestors survived rather than failed to live long enough to reproduce.
DNA is not a neural network (that we know of), and I wasn’t meaning to imply it was. However, the actual neural networks (that is, meatspace neurons arranged in a live human brain) are tested simultaneously alongside the DNA, and they succeed or fail to be propagated as a result of their fitness in the environment; the interaction between DNA, behavior, and acquired traits was once thought to be pure fiction, but it is actual science[1], though we don’t know much about how this works yet. The brain is as involved in your behavior as your DNA is; I think that was the common thread in OP’s point.
My take on "strong opinions, weakly held:" It's about combating your own confirmation bias by being willing to do the work of re-examining your conclusions in the light of new evidence. It's hard to do b/c we get emotionally invested, and also because once you make a conclusion an assumption, it becomes implicit & fades into the background -- so it's harder to question.
On the flip side, it's also about trusting your own reasoning above the crowd -- you are thus able to pick up the $20 bill on the ground instead of being sure it's fake b/c nobody else has picked it up already.
Honestly I DON'T trust my own reasoning. I can do this adversarial process as described in this article on my own, but that's incomplete. People outside of me have information and ways to view things that I don't. Others work as a discriminator and you update your opinions and views as more information comes in and challenges your own ideas. But it is because I don't trust my own reasoning that I read textbooks, seek out experts, and try to get as many views as possible, because no one has the complete picture.
I have this and it's a real problem for software development because everyone expects a senior to be gung-ho about their solutions and perfectly confident. Whereas I'm always asking for second and third opinions about things and people (largely management but occasionally other developers) view that negatively.
Baseline political paradox - humans demand extraverted confidence from leaders more than they demand competence.
If you're confident you can make mistake after mistake and no one will care unless everything comes crashing down - and even then there's a fair chance you can make it someone else's fault.
If you're merely very competent, taking time to research good solutions looks like indecision, not leadership, even if the results are far better.
I think this might be due to a difference in optics. What you see as 'asking for a second or third opinion' might be seen by others as lack of independence, leadership and ownership.
Almost everywhere, senior developers are expected to self-manage and make difficult decisions on their own, unprompted, and just present working solutions to the business. It's okay to take time on this - solving it on your own, consulting literature, peers, etc. But looping management into technical decision-making might make you look like you can't make decisions on your own, or need micromanagement. Management, business, and product care about things being done - not technical approaches or having to choose between multiple solutions.
I agree with this. Even if you as an engineer aren't 100% sure, sign-offs from other groups or extra opinions are something for a project manager or product manager to fish around for. There are some optics to consider there too -- e.g. the implication that the PM thinks you're a fool -- but having them fish around for buy-in and alternatives sells better than you doing it yourself, which can make you look weak or unprepared.
Context matters here too, if you know Alice and Bob on Team [X] are just great at crypto or DB work or whatever, then getting their feedback before implementing a feature might make sense / be a standard due-diligence thing.
Consensus decision-making has its place, but not when it delays simple decisions or acts as a way to diffuse responsibility. Can you decipher whether your second-guessing is coming from an emotional or a rational place within yourself?
> whether your second-guessing is coming from an emotional or a rational place within yourself?
I'd say it was rational because I know full well I've made some howlers over the decades and a lot of them could have been avoided with a second (or third) pair of eyes.
You should trust your reasoning, that is, the deliberate process/method by which you arrive at conclusions, but you should be constantly seeking to score your levels of uncertainty and, yes, falsify your hypotheses and disprove your theories. Your reasoning, however, should be well trained, reliable, and trustworthy. We should have a high level of confidence in our methods/procedures.
> But it is because I don't trust my own reasoning that I read textbooks, seek out experts, and try to get as many views as possible, because no one has the complete picture.
“
[T]hat core thing of: are you good at thinking probabilistically is a bigger thing than you think.
And then on top of that, you have to be somebody who’s really interested in understanding when the things you believe are inaccurate. And you have to be so incredibly hungry to collide with corrective information. You have to be openminded to the corrective information.
You have to be updating your beliefs constantly, and I think that you just have to be willing to have less endowment to your own beliefs and hold those really loosely because you have to keep this end goal in mind. I’m not trying to win this hand. I mean, I am, but I understand that there’s all sorts of things that might intervene in my ability to do that. I’m trying to win in the long run.
And the worst thing that I could find out is that I believed something was true and held onto that belief too long, so it affected my bottom line. You just have to be thinking that way all the time. And then you have to be willing to try to solve for it.
”
I would include other people's arguments as part of the evidence that's needing to be evaluated -- others' arguments come to me not as "X therefore Y" but as "Dave says that X therefore Y." I still have to do my own reasoning: is Dave credible enough that I should spend time thinking about what he said? Is X true? Does Y always follow from X?
In the end, I still have to trust my own reasoning, instead of taking it on faith that Dave is right. But, as you say, maybe Dave's a good source of arguments & should be consulted often -- just not as an oracle.
Exactly, and I think some are missing that. I will say the things I believe with confidence, but I have no illusions that I have all the information; my picture is incomplete. Things can always be improved.
I think a better headline for the post would have ended with “... doesn’t work that well for me.” The author tried the “strongly held” strategy, found it difficult, and decided to justify another strategy of asking “how much are you willing to bet on that?” instead.
Why not both? After all, deciding “how much to bet” is probably one kind of fairly strong opinion. It’s at least strong enough to bet on. Betting strategy changes as information changes, so maybe you could construe that bet (or strong opinion) as being loosely held.
I agree that pithy phrases shouldn’t be used to justify “strongly held bad opinions” but how are we even deciding what a “bad” opinion looks like?
In the end we probably all want to come to the most correct conclusions and be willing to change when presented with new information. How we get there probably doesn’t matter as much as a majority deciding it’s worth the time and effort to do so in the first place.
On how to tell a bad opinion apart from a good one, the ancient Stoics (lately I'm quoting them more, as I'm immersed in their writings) have some thoughts. For the longer version, you have to read their works[1], but to give an extremely simplified version, without butchering the concept:
The Stoics have this notion of 'impression' and 'assent'. It goes like this: an impression of walking strikes you. But only after you tell yourself "yes, it is fitting for me to walk", thus giving your 'assent' (agreement) to it, will you actually go for a walk. Of course, we know that we don't actually verbalize like that, as it all happens too quickly. Their goal here is to not evade responsibility for shaping one's own judgements, opinions, and even emotions "in accordance with reason".
Thus, the Greek philosopher Epictetus' favourite way of describing the Stoic project is: "making correct use of mental impressions".
- - -
The same technique of impression/assent is also used, along with others, to diagnose "passions" (Greek, páthos—it's a loaded word that is used to categorize many emotions, including the debilitating ones). Thus, for the Stoics, the cause of any "passion" is an "error of judgement". What sort of error? Mistaken system of values—there's a ton more to this, but I have to skip it for brevity's sake. FWIW, some reading recommendations on this topic on a thread here[1].
There's a type of exercise where you're not asked "I want to do $x, how do I achieve that?" but instead are asked "I don't know what to do, can you find out?", where the scope might be anywhere from "pick a new ERP" to "reorganise the entire worldwide operations to be more effective at all things digital".
If faced with such a wide-open question, you could do some research and start asking all the questions you think of, but you're then just hoping to narrow in on something by luck. The "Strong Opinions, Weakly Held" method works well here: if within the first week you can learn enough to form some opinions, you now have some ideas to test against and disprove. You can start to decide on what information is important and what isn't, rather than trying to gather all the possible information and synthesize it later on.
If you have an opinion such as "you should formulate a strategy to sell direct to the consumer instead of relying on distribution alone" you have a starting point and have narrowed things down from "how do we sell more things?". You might not have one opinion, you might have five. It takes experience to come up with opinions quickly when faced with limited data and potentially a large problem space.
It's hypothesis-driven decision making. It does require both iteration and the willingness to let your original opinions go--kill your darlings.
> if within the first week you can learn enough to form some opinions, you now have some ideas to test against and disprove
Interestingly there is some evidence that this might lead to anchoring bias related issues. I am not sure what we can do to improve the situation, however.
I was unaware of the origin story for this phrase. I've mostly seen it used to duck actually arguing with people, a la this paragraph:
More generally, “strong opinions weakly held” is often a useful default perspective to adopt in the face of any issue fraught with high levels of uncertainty, whether one is venturing a forecast or not. Try it at a cocktail party the next time a controversial topic comes up; it is an elegant way to discover new insights — and duck that tedious bore who loudly knows nothing but won’t change their mind!
I think the original intent of "strong opinion" is to reach a decisive conclusion without hemming and hawing. In other words, if it looks like a duck and quacks like a duck, say "Well, I believe it's a duck" but allow for the possibility that you are wrong and be willing to admit it if evidence comes forth showing you are wrong.
I don't operate that way. I have a high tolerance for ambiguity and I am more comfortable with the answer "I don't know" than most people seem to be. Most people seem to have a tremendous need to categorize things and I think this is useful for such people: Go ahead, categorize it. Just don't be overly committed to categorizing it. Be willing to change your mind about it.
Most people fail at the "Be willing to change your mind about it" part.
Most people seem to use this phrase not as a rule of thumb for how to think their way through something -- which can take work and it helps if you go ahead and deal with whatever is in front of you and then take the next step -- but simply to deflect fightiness in online forums.
I'm happy to debate with people, but a lot of argumentation on the internet isn't really intellectual debate trying to tease out the merits of an idea. It gets personal. It gets ugly. It is actually fighting, not debating, and we call both "argument" and do a poor job of distinguishing the two things.
So I have seen this phrase, but it was consistently used to basically say "Don't @ me!" In other words, "I want to go ahead and speak my mind in public to satisfy some need of mine, but I don't really want to deal with other people not agreeing and all that. I just want to say a thing and that's it."
I think the original idea has some merit -- go ahead and state firmly what you think it is but be willing to change your mind -- but that's not what most people seem to use the phrase to mean. Not at all. And what it has come to mean is pretty lame.
An analogy the original concept makes me think of is how you can pour out some stuff like sand or flour, and it's hard to tell how much you have because it forms a mound. But if you shake it, it levels out. And if you're pouring onto a scale, shaking makes it pour more continually and lets you get the exact amount you want. In general, going back and forth seems like a way of reducing hysteresis, and that's what the "strong opinions weakly held" concept sounds like to me. The human tendency is to resist flip-flopping in order to present an outward image of stability, so it's best to do constant internal flip-flopping without losing your place and without letting people know. I haven't thought of the concept by the catchphrase/meme, but I have many times thought that whenever I see a claim, I need to reverse it in my mind and think about whether the opposite is any more or less plausible, to avoid the friction and path-dependence of an idea. People like to be contrarian, and sometimes seemingly smart people scoff at this sort of thing -- reversing obvious statements. But I still think it's vital.
On the other hand, when faced with a fork in the road, it's important to choose. Often it doesn't matter which - but if you vacillate, and especially if you are leading - then you can fail by not choosing.
So, strong opinions: decisively go down one road; weakly held: turn around if information suggests it looks like the wrong road.
You can only update your opinions if you engage with reality, which demands a certain conviction in those opinions.
> On the other hand, when faced with a fork in the road, it's important to choose. Often it doesn't matter which - but if you vacillate, and especially if you are leading - then you can fail by not choosing.
The number of times people think they are at such a fork, where non-action is worse than any action, is an order of magnitude higher than in reality.
In my experience, most of these cases are reflective of a social issue. The problem at hand doesn't require one make a decision any time soon, but if you don't, you are viewed negatively. So the social systems around you push you to make a decision even when one is not needed.
Almost every time I've heard someone use the phrase "fence-sitter" it is this scenario. At my work, we have surveys where we rank things on a 5 pt scale: 1 is strongly against, 5 is strongly in favor, and 3 is neutral. There's a significant block of people who keep pushing HR to make it a 4 point scale so there is no neutral option. I've had several discussions with them, and they've never been able to give me a good reason, beyond statements like "You have to have some opinion!" and some negative comment about fence sitters.
But yes, for certain things (e.g. investing for retirement), it's probably better to pick a safe option than not pick anything.
I've seen people with analysis paralysis at the lowest level of development, and I've seen opportunity-chasing in different directions of different market segments at the top of orgs - sometimes fence-sitting takes the shape of trying to go down both roads, rather than no roads.
I think it's a lot more common than you think it is - especially because not choosing does not simply mean not taking action, it means lacking focus.
A strong opinion will at least put all the wood behind one arrow, focus in an area. It might not work; deciding when to change tack is where the judgment is, but at least there ought to be some solid feedback to work with.
Being decisive has nothing to do with a strong opinion.
It does bring up one of my issues with these pop philosophies. So many people interpret them differently while claiming to have the exact same interpretation.
This is a great way to put it. It’s a position I end up taking a lot—there are so many situations where no one involved has enough information to justify strong beliefs.
I have lately had a converse experience, where my superiors expect me to have strong opinions about things that are not even half-baked. Or rather, they ask me about it and even if I hedge and say "I'm not sure but..." they interpret it as gospel and move forward with laden expectations.
There is another version, "weak opinions, strongly held", and unfortunately it's the version I've encountered more often: "I don't know why and I don't care, but I will do it my way and you can't change it".
Your interpretation sounds much more similar to "weak opinions, weakly held" to my ear.
Tools that cultivate more position-taking of this sort, that's what I firmly believe is part of healing the world we've thoughtlessly created through digital mediation of everything.
What is the practical implication of that? "Strong opinions, weakly held" means (IIUC) make a decision, but be open to changing it if data points to it being wrong. It helps avoid the trap of looking for perfect data to make a perfect decision. Can you provide a practical application of "weak opinions, strongly held"?
> In my experience, ‘strong opinions, weakly held’ is difficult to put into practice
I 100% agree when interacting with other people, but I think it's still valuable for your personal growth if you're intellectually honest with yourself.
"How much are you willing to bet on that?" is definitely a smart question to ask other people though.
Framework? That’s just silly. It’s a turn of phrase that helps you change your attitude or approach a problem from a new direction.
“If at first you don’t succeed...” I suck. I’m never going to try again! Haha.
I didn’t know the origin of the phrase, and that’s a testament to its creativity.
As a lifelong learner, I found it useful in ways not related to the origin story. It helps me to overcome imposter syndrome.
I know a little bit about a lot of things, and a lot about a very few things. And I love solving problems with design thinking. Go for it. Feel confident about your knowledge if you have done the work, but know there are others who know more.
All that gets nicely summarized by Strong opinions, weakly held.
> Framework? That’s just silly. It’s a turn of phrase that helps you change your attitude or approach a problem from a new direction.
While you're welcome to interpret it however you wish, one of the main points of the article is that it is a methodological framework laid out by Paul Saffo[1], but that most people ignore the framework and focus on the catchphrase.
> it is quite difficult for the human mind to vacillate between one strong opinion to another.
For the specific subset of people in tech, I don't believe this is true. How many times have you and someone else had a strong disagreement about the cause of a bug or design of a system, conclusively determined that one answer is correct, and then had the other person come back (usually after a small delay) acting as though they'd agreed with you all along? I've been seeing this for over thirty years. Half the time, the other person even tries to claim they came up with the idea on their own and everyone else was slow to pick it up. Same thing is frequently evident right here. It's easy for some people to switch from one strong opinion to another, and the popularity of "strong opinions weakly held" makes it even easier.
I also think that SOWH is behind a lot of cargo culting and conspiracy theories. People want to get credit for being the champion of an idea, even if they don't fully understand it or it has low odds of being correct, and the appeal increases with the challenge of convincing others. After all, if they're wrong they can just switch sides and claim they'd been on the right side all along. If they're right, it's an epic victory (in their own minds at least).
Strength of belief is not inherently virtuous. It should be proportional to strength of evidence, not armor worn for the sake of a silly maxim. Strong belief in SOWH itself is an example of faith over empiricism.
"How many times have you and someone else had a strong disagreement about the cause of a bug or design of a system, conclusively determined that one answer is correct, and then had the other person come back (usually after a small delay) acting as though they'd agreed with you all along?"
I think, never? I'm sure there are people that do that, because there are people that do everything.
That sounds more like a hypothesis than an opinion.
In the less testable world of opinions (over, say, business strategy) it isn't always possible to settle things by experiment, making it more difficult to let those opinions go.
I don't like the idea all that much. It seems close to how I operate instinctively, but it somewhat neglects the coexistence of mutually exclusive ideas, which, I believe, is rooted in the poor choice of the word "opinion".
The reality of human perception is that we never have absolute knowledge and can only operate on a framework of assumptions; when two possible assumptions are mutually exclusive, we often pick the most likely candidate and focus on that scenario.
This seems to be a reasonable methodology throughout most of human evolution, where decisions more often needed to be immediate.
In the modern world, though, it seems like a much more helpful mental model is a superposition of scenarios and adequate responses to any of them. In conclusion, the only reasonable principle on which to make decisions would be maximizing the probability of being adequately prepared for the outcome of a situation, which can be simplified as "be prepared for as many likely outcomes as possible" or, more precisely, act in such a way that maximizes the sum of the probabilities of all the outcomes you are adequately prepared for.
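Put slightly more formally (my own notation, not anything from the article): if prepared(a) is the set of outcomes that a choice a leaves you adequately prepared for, the proposed rule is

    \[
    a^{*} = \arg\max_{a} \sum_{o \in \mathrm{prepared}(a)} P(o)
    \]

i.e. pick whichever action covers the largest probability mass of outcomes.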
EDIT: I probably should have read the entire article before writing all that; just one paragraph after where I had stopped the author actually makes a very similar point.
I understand what “weakly held” means, but what does the “strong opinions” part mean? I couldn’t get this from the post.
What would be an example of a weak opinion and a related strong opinion? Is this just the difference between “the 49ers will win most of their games this year” and “the 49ers will win the Super Bowl this year”?
If so, why is the latter preferred?
Edit: thanks for the downvote, perhaps you can help me understand what is meant by the term, or why you found my comment to be inappropriate?
Weak opinion: I cannot conclude anything about why my car won't start.
Strong opinion: I think my battery is dead and that's why my car won't start.
Strongly held: I tested the battery and voltage and current are within good thresholds. I still believe what I believed before.
Weakly held: I tested the battery and voltage and current are within good thresholds. I have adjusted my belief about what I believed before.
WOSH: I cannot conclude anything about why my car won't start. You've tested my battery as dead. I still cannot conclude anything about why my car won't start.
WOWH: I cannot conclude anything about why my car won't start. You've tested my battery as dead. It could be the battery, maybe. Hard to say.
SOSH: I think my battery is dead and that's why my car won't start. You've tested my battery as functioning fine. It's definitely the battery, though.
SOWH: I think my battery is dead and that's why my car won't start. You've tested my battery as functioning fine. The reason why my car won't start is not my battery; I think it won't start because my starter motor is broken.
Downvotes occur for all sorts of reasons. Please don't introduce noise into the discussion. It's annoying to other readers for all sorts of reasons. I am often tempted to downvote anyone who complains about them.
Thanks for the examples. Is it possible for a weak opinion to be something other than "IDK why X is happening"? That is, can it be an affirmative opinion about some causal relationship or future event?
Honestly, if someone asked me whether the WO example above was strong or weak, I would tend to say it is strong because it is a categorical statement. To me, a weak statement would be "I think my car won't start because there's either a battery problem or a starter engine problem or a wiring problem". That doesn't take a strong stand or eliminate many possibilities.
I think in the context of the original statement it was a heuristic for searching for truth, i.e. "to know, form hypotheses then subject them to falsification tests".
So yes, in the meaning of SOWH, all WO are ones that leave you unable to progress. Your reading of 'categorical implying strong' is reasonable, imho, in just the meaning of the word 'strong'. It's just not what I believe was the intended meaning in the statement of SOWH.
Interesting, so the SO part of SOWH just means "have opinions that can be validated"? That seems very weak, no pun intended. Would anyone go around advocating for people to have WO, using that definition? I guess if that's what it means, then SOWH is pretty vacuous.
Perhaps it's my background (law, economics, logic) but I assumed that "strong" referred to "the strong form" of an argument or hypothesis (such as the efficient market hypothesis). I never understood why someone would say it's better to embrace strong forms of arguments, since these tend to be more extreme (and IMO, things tend to be wrong more when they are taken to extremes).
Despite your belief that it is vacuous, WO is a pretty standard mode of operation for many. It usually manifests as "We just don't know enough. Let's collect more information." or "There's not enough to go on. Let's wait it out." etc.
i.e. unstructured search is more common than hypothesize-falsify, despite the latter being established epistemology since the early 1900s or before.
There is an entire industry devoted to giving people strong opinions about stuff they have not researched themselves.
They hide behind the false idea that people are unable to evaluate evidence. They will chew it for us. As proof of this, they offer up endless anecdotes of others believing things that the viewer is really sure are either true or false. The most effective 'bias hacks' are easily shown to be false; all that matters is that the viewer believe that "other" people hold that opinion.
A real test is who is willing to discuss it without getting emotional and derailing. The side that needs insulation does that for a reason.
Gaming confirmation bias is perhaps the 2nd most effective tool used to manipulate mass psychology.
I perhaps haven't been using the phrase as originally intended, but I've always used the phrase "strong opinions, loosely held" when it comes to hiring and building a team, and I've found it incredibly useful.
The "strong opinions" part for me means hiring someone who not just has a lot of experience on some topic or technology, but they understand it at a very deep level. The strong opinions come from a place of perhaps having been burned hard by a particular technology or process (or, contrarily, loving a technology or tool for some reason), and being able to point out the 10 little details that turn out to be big issues in real-world usage.
The "loosely held" part to me means being able to trust that (a) you don't know everything in all situations and (b) most importantly you're willing to really listen to other people on the team and are open to the idea that you may be wrong.
It's terrible as a meme, and terrible as a culture guidepost for a big group.
It is only effective in the context of a closed small trusting group making decisions.
One needs to make decisions, often with laughably insufficient information. So make one, watch carefully, and be ready to reverse if evidence tells you differently. This only works if you already have a strong trust culture and relatively equal power.
Using that policy as a leader of a larger group, with a necessarily weaker trust culture and unequal power, fails terribly because it comes across as capricious and irresponsible.
Oh my god, yes. You also have to take the input of the group as “evidence,” and acknowledge when you may not know much about something, but someone else does.
I do this on Hacker News sometimes. I just state my opinion, however weakly held, and wait for it to be torn apart, whereupon I learn a lot and all my biases are revealed to me. That's how learning works: you challenge your own assumptions, or let your assumptions be challenged by others.
> wait for it to be torn apart, where I learn a lot
That's probably fine somewhere like HN, but I'd warn people against doing it too much IRL. I have an ex-friend who liked to play the "you just need to" game as a primary learning technique. I say ex-friend because I got tired of the initial implication that his few minutes of thought could overturn lessons it might have taken me years to learn, until I did the labor of educating him. Maybe some of that was a "me thing", except that he has several other ex-friends for the same reason. I have other still-friends who are a bit less extreme about it, but it's generally not a relationship-positive behavior.
That does work to educate you, but has the side effect of distorting the discussion here. For instance, I'll see a bunch of comments recommending a particular approach for writing software. I might come away thinking most of HN approves of that approach, and so there must be something to it. But what if all those comments are just people trying out the idea, in hopes that other people will tear it apart?
I don't think there's anything wrong with qualifying your opinions. You can still get people to challenge you on them. Isn't that why people start their sentences with "for the sake of argument" or "playing devil's advocate"?
I've found this is a phrase loved by those who enjoy argument for its own sake, and who don't have a very deep understanding of a particular subject matter.
> Saffo’s original idea is so quotable it has turned into a memetic phenomenon... ‘Strong Opinions, Weakly Held’ turns into ‘Strong Opinions, Justified Loudly, Until Evidence Indicates Otherwise, At Which Point You Invoke It To Protect Your Ass.’
I think an approach of collecting questions rather than answers works well. If you're really interested in a question then you should try to answer it, but be wary of accepting the first answer you see or think of.
I have questions that I've thought about for years, and some tentative hypotheses to go with them.
Honestly I rarely remember the full details of why I chose a particular tech. I did the deep research, picked what I believed to be the best tool, implemented and deployed it and now that tool is my baseline.
If you asked me a couple of years later “why the hell did you choose that?!” I probably couldn’t give you many of the details that formed my opinion originally and couldn’t vociferously defend it by arguing the minutiae of spec sheets. But I do know that if you want to sell me on something new, it’s gotta beat that baseline tool, not whatever opinions I might have had at the time of choosing.
They're overkill for small personal projects but can be helpful in big teams because it won't just be you wondering, “why the hell did they choose that?”.
For me, "strong opinion, weakly held" is a way of mental prototyping:
> Allow your intuition to guide you to a conclusion, no matter how imperfect — this is the “strong opinion” part. Then – and this is the “weakly held” part – prove yourself wrong. Engage in creative doubt. Look for information that doesn’t fit, or indicators that point in an entirely different direction. Eventually your intuition will kick in and a new hypothesis will emerge out of the rubble, ready to be ruthlessly torn apart once again. You will be surprised by how quickly the sequence of faulty forecasts will deliver you to a useful result.
The author of the blog post, however, finds it fails when used to guide investments gradually, breaking down when new information is discovered along the way. But that isn‘t its purpose, to guide one in small day-to-day adjustments. The purpose is to explore a not-well-known landscape for strategic decision making: to gather enough solid, non-trivial insight to make a well-founded strategic decision. There are better tools for operational management once one has committed to a direction.
Strong opinions help me often to escape analysis paralysis. It also helps me to surface my premature judgements and transcend them.
I feel like a lot of these types of adages come back to try to convince humans to do one thing: Be Wrong.
And I don't mean, deliberately make incorrect decisions, I mean, allow yourself to have made a mistake.
Some people get the reputation that they act like they know everything. You can even see some of those accusations creep up in various threads on this topic.
There are two major types of people who get that reputation. People who will never admit they're wrong and people who will admit they're wrong so fast, you don't even notice they've changed their stance.
The first type of people think the second type is just doing what they're doing. But no, the second type is more than willing to be wrong. They know that it's going to happen. It goes back to Socrates, knowing you know nothing. Accepting that your knowledge is incomplete. There are no stakes for being wrong if you don't put them up. The second type are also the type to not worry about laying blame about who is wrong. They just want to correct the mistake.
Be more concerned about what is wrong rather than who is wrong.
A great counterpoint to this argument comes from Allen Holub in his talk “#NoEstimates”.
The system of making bets does not translate well when communicating with business folks and managers — which is why it’s largely non-existent for story pointing in agile environments. People are overly optimistic; they hear 80% and think 100%. Doing percentages isn’t foolproof either.
I've thought along these lines recently and I think a better formulation for clear communication of opinions is:
1) State a complete argument with premises, reasoning, and conclusion
2) State whether you believe the argument to be true
3) Check your stated reasoning to the point that you believe it to be logically valid
4) Check your stated premises to the point that you believe them to be true.
I believe that by sharing our premises and our reasoning, we are not only offering some humility, but we're also inviting respectful engagement. We express accountability by stating our beliefs. That way, if any of those are challenged, it means they're being challenged on the merits and it helps us update our own point of view.
By stating an argument in that manner, that can be compared to stating it "strongly", even if you are also welcoming input on the truth of your premises or the validity of your reasoning, which can be compared to "weakly held".
It unfortunately doesn't work because people in general don't get that knowledge progresses and that what may previously have been considered a truth no longer is - and that is how it should work, because we learn and grow as a civilisation.[1]
But it's not - people don't accept that what they were told previously is no longer correct. A good example is the current arguments around mask wearing. Because the original message was "don't wear masks", that has stuck, and a lot of people think either you lied originally or you don't know what you're talking about - ideas made worse by being promoted for political gain.
In places like Japan you are expected to wear a mask if you're sick, so there's no reason that shouldn't become standard in the West.
More important could be that workplaces require staff to either stay home and not infect the office when they have the flu, or wear a mask if they have to go in.
If you're a casual employee and don't have any sort of paid sick leave you either turn up or forgo the money (and potential future work). There are also those who pride themselves on never taking sick leave and "soldiering on", but invariably infect their coworkers. Here's some survey stats:
While I want to believe this post, I also wonder how long they have been using the new technique (did I miss them mentioning that?). I say this because I wonder if "percentage confidence" is now the hard part, since humans are terrible at estimating such things.
> For instance, Steve Jobs was famous for arguing against one position or another, only to decide that you were right, and then come back a month later holding exactly your opinion, as if it were his all along.
Unrelated to the larger theme of the post, but I absolutely hate it when people do this. I thought it sounded like a silly parable until I saw people do it in real life. At which point I concluded it was some kind of characteristic of narcissism.
To this type of person, an idea isn't good until it's theirs. They will trash your idea, and trash you. When your idea seems convenient they will take credit and fail to cite you as an influence. That would be OK, one can suppose, if not for the next part. Likely they have forgotten you gave them that idea in order to help them. They still think you are trash. They will have no qualms trashing your next idea with personal attacks, just like the last one.
Honestly, as somebody who suffers from "strong opinions, weakly held", I make it a point to note in my design notes who first suggested something, so when I pivot to "my old plan is stupid, we need to go that way" I can say it's so-and-so's plan.
I'm someone who does that, I believe. I don't see it as a bad thing. I'm not going to go and pretend it was my idea; it's just that I take a while to make up my mind about what's right, and I'll discuss edge cases and all kinds of issues with others really thoroughly and strongly. I'll do that with many people, about the same thing. And then I'll kind of form my own opinion from that, which could be 90% person X's and 10% person Y's, or any other combo, sometimes mixed in with some of mine as well, but not always; it's very possible I was the most wrong of all initially.
In practice, it means that when I commit behind an idea, I'm often right. But the ideas are not necessarily mine. And I don't care.
I wouldn't say it's narcissistic; for me, it feels like the opposite. I don't care whether I came up with it or you did or who did, or even if multiple people contributed to it. I just care that it's the right idea, and once I'm confident that it is, I will strongly uphold it and hold it as my opinion. That's because I've already put it to the test by arguing against it in all possible ways, talking to many others about it, getting their opinions, their alternative ideas, etc.
It's not a bad thing if people are careful about giving credits. I have seen both extremes of that:
1) The type who does exactly what the OP said: they treat people like trash and think that when they commit to an idea, they are often right. When they change their opinion, they don't give credit either, since they want to maintain the image that they are often right. I have seen great teammates leave because of those people. I would not want to work with those people either.
2) The second type also holds strong opinions. When they change their view, they attribute it to the people who brought it up. They don't care about maintaining an image that they are "often right". They genuinely thank the people who brought them good arguments that changed their strongly held opinions. It's not about who invented the idea, but they understand it's good to appreciate the people from whom you got the new idea. I love working with those people. I try to give proper credit to people when they show me new ideas.
And I don't think the worst part is failing to give credit necessarily. People legitimately forget things, and they independently come up with the same ideas. The problem is when it's wrapped in egotism and inability to see that the person who suggested it earlier did so with good intentions, and might be worth listening to in past, present and future.
We have a client who is like this. It's intolerable, but also so predictable that we basically deal with it by just weaponizing it. We know that when we present information to him, we simply work towards building a message that he will take as his own eventually. Initially he'll crap all over the idea, but invariably will return a few weeks later with a "great idea" that will just be what we told him and then we'll basically get to do whatever we wanted to in the first place.
It requires some ego squelching but in the end we accomplish what we want and everybody ends up relatively happy. However, even after years of successes, this particular client still treats us like the hired help (which I suppose we are).
I have this "theory" that one trait of successful people is forgetfulness. In my experience they tend to forget their mistakes and also forget the source of their successful ideas.
I tend to suffer from this, only in the _other_ direction, in that I forget when I give something or add something or help with something, and I forget conflicts with people and slights real easily. My mistakes and failures, I remember vividly. Does that set me up for anti-success, I wonder?
I do this with software related things, but what I noticed is that I implement just enough of my own tweaks until I’m comfortable enough taking the credit for the whole idea. My defense when people call me out is to highlight my differences and understate the similarities, even going as far as to say without my features the idea was half baked from the start.
Why do I do it? I don’t know, a lot of developers do it. Just how we are I guess. Not Invented Here syndrome.
The best way I can see this play out is with development. At the start you might have an intuition, but there could be several good contenders for a direction. Instead of analyzing them all in discussion or on paper, pick one, any one, and move forward as if you'd picked the right one. Along the way, look for signs that perhaps this isn't the best or even the second-best choice. Keep going anyway until you have a clear idea of what's wrong, at which point you should know which direction is better and what to change to go that way.
It closely follows "ship early, ship often". These are not hard things to do; it just takes some practice and honesty. Just don't tie any personal stakes to the initial direction.
Of course, "strong opinions weakly held" is a refactoring of the scientific approach to knowledge. Karl Popper taught that scientific hypotheses must be disprovable. Hypotheses that aren't disprovable (falsifiable) are useless in science.
Insisting on falsifiable hypotheses is a difficult personal discipline, not to mention collective discipline. "Nothing is true unless it might be false!" Wait, what? How can a company make decisions based on that sort of epistemology? How can a government set policy? It's much easier to make decisions when we have the illusion we're sure about the facts.
Maybe the only sensible path is to hold all our opinions, personal and group, lightly.
TL;DR: Strong opinions, weakly held doesn't work because of Anchoring Bias[1].
Obviously if you fall prey to anchoring bias, you're doing the "weakly held" part wrong, but I think that almost everybody does this wrong, even people who know about anchoring bias and do their best to guard against it. I've known about anchoring bias for maybe a decade, recognized it in my own beliefs, and taken steps to address it (e.g. meditation and reading opinions opposed to my own). While I'll give myself credit that I'm maybe a bit better than the average person at changing my own beliefs now, I'm still objectively very bad at it, frequently coming across places where, looking back, it's clear I was wrong for years due to anchoring bias. It's much easier, I think, to permeate everything in my own belief system with a fundamental level of doubt, and only form really strong opinions with overwhelming evidence. But even that is only somewhat effective. Anchoring bias is a pretty powerful foe.
I think it's difficult to approach a problem with a Depth-first Search mentality without using "Strong Opinions, Weakly Held". If one goes in with "weak opinions" it's easy to find oneself constantly backtracking and checking the other nodes early on in the chain of assumptions and doing a mental Breadth-first Search instead.
That's not to say that it's the right approach for every situation, but for problems that will have a lot of dependent unresolved assumptions "Strong Opinions, Weakly Held" is sometimes necessary to maintain focus to break through the problem.
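A minimal sketch of that contrast in Python (the tree, the choices, and the "clearly wrong" test are all made up for illustration): the depth-first version commits to one branch and only backtracks when a branch is clearly a dead end, instead of re-evaluating every open alternative at each step.

    # Hypothetical decision tree: each node is a design choice, children are follow-on choices.
    tree = {
        "start": ["monolith", "microservices"],
        "monolith": ["postgres", "sqlite"],
        "microservices": ["kafka", "rest"],
        "postgres": [], "sqlite": [], "kafka": [], "rest": [],
    }

    def looks_clearly_wrong(node):
        # Stand-in for "we now have a clear idea of what's wrong with this direction".
        return node == "sqlite"

    def depth_first(node):
        # Commit to the first viable branch; backtrack only on clear failure.
        if looks_clearly_wrong(node):
            return None
        if not tree[node]:
            return [node]              # reached a workable end state
        for child in tree[node]:       # commit to one child at a time
            path = depth_first(child)
            if path is not None:
                return [node] + path
        return None

    print(depth_first("start"))  # ['start', 'monolith', 'postgres']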
I think you can draw an analogy between probability and counting. Negative numbers seem weird and very different from positive numbers, but for many applications 0 is not a particularly interesting boundary, and you benefit from not treating negative and positive numbers as two different cases. E.g., I could owe you $2 or -$4.
In probability, 50% feels like it should be a special boundary. But often there's a lot of benefit to not treating it that way, and treating a 49% belief more like a 51% belief than like a 10% belief.
It sounds trivial written out, but how often do people behave that way?
If the probability is just some made up number reflecting a hunch, it's no better at informing us to take an action or drop an opinion. It's just a way of talking about how weakly we are holding the opinion, and not the real deal from statistics.
The concept still provides no concrete framework for assigning the probability, and for taking action. E.g. do you abort the plan when the probability of success feels like it has dropped to 65%, or does it have to feel like 35%?
You just have to compute the expected value. Let's say we play a game where you have to invest $10.
If you have a 65% chance to win and winning means you get $2 on top of your $10, then abort the plan. The expected value is $12×65% = $7.80, which is less than your investment.
If you have a 35% chance to win and winning means you get $30 on top of your $10, then continue. An expected value of $40×35% = $14 is worth it.
In reality, the tricky part is often assigning the numbers (How much does it cost to switch to Rust? How many errors does the stricter type system catch?), but the framework is available.
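For what it's worth, that arithmetic is one line per game; a quick Python sketch using the numbers above:

    def expected_value(p_win, payout_if_win):
        # Expected return of a bet that pays out `payout_if_win` with probability `p_win`, else 0.
        return p_win * payout_if_win

    stake = 10
    print(round(expected_value(0.65, 12), 2))  # 7.8  -> below the $10 stake, abort
    print(round(expected_value(0.35, 40), 2))  # 14.0 -> above the $10 stake, continue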
Ok, also look up the Kelly criterion because you can go bankrupt even with a high expected value.
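A sketch of that, too (reusing the second game above): the Kelly criterion says to stake the fraction f = p - (1 - p)/b of your bankroll, where b is the net odds, i.e. profit per dollar staked, so that a positive expected value doesn't bankrupt you through variance.

    def kelly_fraction(p_win, net_odds):
        # Kelly criterion: fraction of bankroll to stake.
        # net_odds = profit per dollar staked if you win (3.0 means $30 of profit on a $10 stake).
        return p_win - (1 - p_win) / net_odds

    # Second game above: 35% chance to win $30 of profit on a $10 stake -> net odds of 3.
    print(round(kelly_fraction(0.35, 3.0), 3))  # 0.133: stake about 13% of your bankroll, not all of it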
This viewpoint leads to blog posts like "Why you MUST do X to be a good engineer (2015)", and then "Why I was wrong to say you MUST do X to be a good engineer (2019)"
Has always been a terrible name for "take a guess, then try to come up with a better guess." It's not rocket science. Although rocket scientists probably use it.
On the contrary, I've found that one of the worst things you can do when demonstrating / deploying a machine learning model is to show the underlying prediction confidences.
You can be right 95% of the time, but critics will only remember, and fault the whole system for, the single incorrect but high-confidence error.
Which would be reasonable if that erroneous prediction had fat-tail consequences, but in the cases where I've had this happen, it has not.
Yes! I think that science also demonstrates how Bayesianism, the author's proposed remedy, misses the point somewhat.
What probability do I put on my belief in quantum mechanics? Zero! It has to be wrong, because it doesn't account for gravity. On the other hand, I'd bet on it for every question it can answer.
Scientific theories are the strongest opinions of all. If you can imagine how the laws of physics, the antiquity of Earth, or natural selection could possibly be wrong, you don't understand them. But, historically, those theories replaced previous theories, which were strongly accepted in their day, and did turn out to be wrong.
For some theories, we can see no way they could possibly be wrong, and yet it's almost certain that some of them are. This is a deeper kind of uncertainty than probability can describe.
In simpler terms, ‘strong opinions, weakly held’ sometimes becomes a license to hold on to a bad opinion strongly, with downside protection, against the spirit and intent of Saffo’s original framework.
Beliefs are very hard to change, as the human brain has a perception bias in favour of currently held beliefs. Data-driven goals are a much more objective way to test your beliefs and drive business decisions.
"Strong opinions, weakly held" makes sense in exactly one situation:
When you are genuinely an expert in a topic, you should have strong opinions about it.
But since everybody is sometimes wrong about something (and in the expert case that should be seldom), you should be able to let go of one of your opinions when you get good counter-arguments.
I've always been this kind of person, opinionated but backed by reasons, and most people find it abrasive. There's nothing I like more than having someone change my opinion, because that means they taught me something. The catch is, most people are either unwilling to teach such an opinionated person, or have nothing of value to teach.
Ah, good. I've always thought this saying to be utterly stupid. I was geared up for a rant, but now I see from the article that the guy who came up with it has done a poor job of distilling his approach into a catchy saying. His approach is basically agile; his catchy saying sounds more like bluster and bravado until you look stupid.
There is a sort of financial epistemology that a lot of people like because it tells them they are successful and therefore right. It also allows them to talk about testing hypotheses and get results in hard numbers. And it even works as far as it does, but I’m more and more skeptical that it explains everything.
"Skate to where the puck is going to be" is an example of SOWH working well. A weak opinion (waiting to get clearer data) would result in skating to where the puck has been. A strongly held opinion would result in skating to where the puck isn't going at all, and has never been.
To answer the question of "when should you change your opinion?":
All new data should change opinions. Even if it's a small piece of data that affects your opinion only on the margins, it is still a change toward an opinion that better maps onto reality.
Ultimately, truth-seeking is a personal thing and you should tune your methods to the machine that is you.
tl;dr SOWH is good. Don't claim it for yourself. Sloppy Priors are good. Be careful of not adjusting on evidence. The bet trick is very good. Be careful about your utility function.
Personally, among the people I know the following doesn't happen:
> In such cases, the failure mode is that ‘Strong Opinions, Weakly Held’ turns into ‘Strong Opinions, Justified Loudly, Until Evidence Indicates Otherwise, At Which Point You Invoke It To Protect Your Ass.”
And that is mostly because it's really easy to claim you're operating in SOWH and really hard to determine if you actually are. So most truth-seekers apply the same techniques of epistemology to it as they do to other things that require auto-evaluation with no objective truth set: you trust weighted external input more than your own. Am I charismatic? Am I intelligent? Am I whiny? It is hard for me to say. It is far easier for me to find that out from people I can identify as trusted on the subject.
I remain convinced that operating in SOWH is empowering (at 80% certainty, har har). I believe that the Bet Trick and the Sloppy Prior Trick are both clever techniques to improve your search for truth as well.
The Bet Trick is very good. My personal danger is that I do not apply linear utility (i.e. an unlikely win makes me much happier than a string of losses). So if I predicted that the US would be open by August and I'm right that makes me way happier than if I predict repeatedly that median summer temperature is greater than median winter temperature in Saskatoon. I'm okay losing $200 repeatedly on the former for the win.
The Sloppy Prior also has the trick that it allows you to examine your reaction to evidence. Your Sloppy Posterior to the weakest of evidence must be different from your Sloppy Prior! If it isn't, your cognition is currently failing you. The only problem is that the sloppiness gives you room to avoid having different posteriors.
That last part I find very hard in almost-certain and almost-never situations: if a set of instrument measurements show that temperatures across the Earth are the same as they were 40 years ago, then it's highly likely that the measurements are broken, but it is not certain, so my posteriors for AGW given that evidence should drop. But they don't unless I am conscious of this.
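A minimal sketch of that check in Python (the prior and the likelihood ratio are made-up numbers, purely illustrative): even very weak evidence has to move the posterior a little, and writing it down makes "no movement at all" visible.

    def update(prior, likelihood_ratio):
        # Bayes' rule in odds form: posterior odds = prior odds * likelihood ratio.
        prior_odds = prior / (1 - prior)
        post_odds = prior_odds * likelihood_ratio
        return post_odds / (1 + post_odds)

    # Sloppy prior on AGW, plus a weak piece of counter-evidence (flat temperature readings
    # that are most likely just broken instruments): likelihood ratio a bit below 1.
    prior = 0.97
    posterior = update(prior, 0.8)
    print(round(posterior, 4))  # 0.9628: lower than 0.97, but only a little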
> More generally, “strong opinions weakly held” is often a useful default perspective to adopt [...] Try it at a cocktail party the next time a controversial topic comes up
Ah, yes. We've all met this guy at parties.
"[Technology X] is the worst thing ever and people who use it are setting the industry back by ten years. Prove me wrong!"
Don't be that guy.
The guy who's even worse than a regular blowhard: the blowhard that doesn't even necessarily believe what he's saying; he's just trying to stir up debate.
The social equivalent of a message board troll, essentially.
If you want to incorporate "strong opinions, weakly held" into your own internal decision-making and opinion-forming process, cool. Not a bad way to go. It's also appropriate in many contexts with others, e.g. spitballing sessions where everybody is tossing out possible solutions to a problem they're trying to solve, or hell, maybe a drinking session with friends where you're all three or four drinks deep and shooting the shit, being silly.
But at "cocktail parties" or other casual social situations like hallway or lunch table conversations? Yeeesh. Don't be that person, where you obnoxiously dump your "strong opinions" onto others. A lot of people don't enjoy debate for debate's sake. (I actually tend to enjoy it - but many do not.)
tl;dr: it doesn't work that well because it's difficult.
To be honest this feels a bit to me like saying that cryptography doesn't work that well because it's difficult. The article itself even has examples of people for which it has worked well (eg Steve Jobs).
I recognize the argument that there are people who don't realize they're bad at it and still quote the "strong opinions, weakly held" idea, but that means it's misused, not that it doesn't work well.
Again, the crypto analogy holds up. We all know some examples of crypto rolled by people who didn't realize it was too difficult for them. But that doesn't mean crypto doesn't work.
> To be honest this feels a bit to me like saying that cryptography doesn't work that well because it's difficult. The article itself even has examples of people for which it has worked well (eg Steve Jobs).
No, the article specifically does not say that. It says the principle doesn't generalise well across people, for exactly the same reason cryptography doesn't. We have "don't roll your own crypto"; this article is the equivalent for "strong opinions weakly held".
> ...make a tentative forecast based on the information available, and then systematically tear it apart, using the insights gained to guide my search for further indicators and information...
This is literally the scientific method. Come up with a falsifiable premise, then attempt to falsify it. Arguing that science is hard is pretty irrelevant. Most of the failings present in this article are really just evidence of not doing the second half. You should be looking for evidence that you're wrong, not evidence that you're right.
This is later borne out by the proposal of adding confidence intervals and dates by which assumptions should be robust against being disproven. You'll find this type of language on experimental design in lots of scientific fields.
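To make that concrete, here's a small sketch of what such a record could look like (the log format and the Brier scoring are my own additions, not the article's): each assumption gets a confidence and a date by which it should have been tested, and you score yourself once the outcomes are in.

    from datetime import date

    # Hypothetical prediction log: (claim, confidence it is true, review-by date, outcome or None)
    predictions = [
        ("Switching to Rust halves crash reports", 0.6, date(2021, 6, 1), True),
        ("The new checkout flow ships before Q3",  0.9, date(2021, 7, 1), False),
    ]

    def brier_score(entries):
        # Mean squared error between stated confidence and what actually happened (lower is better).
        scored = [(conf - (1.0 if outcome else 0.0)) ** 2
                  for _, conf, _, outcome in entries if outcome is not None]
        return sum(scored) / len(scored)

    print(round(brier_score(predictions), 3))  # 0.485: the overconfident miss dominates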
This article is about someone's journey to rediscover the thing their source already knew (using scientific rigor can turn a hypothesis into a theory and gradually bring you nearer an understanding of the underlying truth), and then call it something else because the original distillation of the approach into a soundbite flew past them.
The article was strange because the author said the method doesn't work because people don't hold opinions weakly, i.e. don't update them upon new evidence. He then dives into questioning what new evidence even means. I've never heard the phrase "Strong Opinions, Weakly Held" before, but during my education and work I've always been told "stick to your guns and be confident. You came to your conclusions for a reason. But keep your mind open, because you can never have the full picture."
As I see it, if the method is wrong because people don't use it correctly then don't throw out the method, update it to be more usable. Which is exactly the systematic tearing apart and rebuilding process that the author actually is engaging in himself. Adversaries to an idea is a good thing.
This is such a strange article. The tl;dr is "Strong opinions weakly held doesn't work because people don't do it." Then the author states some pretty strong opinions about how that's not how the brain works. I also believe he's actually talking about two different ideas.
The first idea: having strong opinions that aren't held strongly.
I have never read Saffo's work or even heard of these people before, but the tactic described here is something I use frequently; it has been very successful for me as a scientist and is what many other scientists I know do. The key issue in his complaint is that people do not hold their ideas weakly enough.
Here's how I see it: every opinion I have is just wrong. You cannot have all the data and all the relevant facts, so whatever conclusion you reach is incomplete. The question is just "how wrong?" If it is a little wrong, no worries; if it is a lot wrong, big worries. The strength of your opinion should be proportional to the evidence. Essentially we're all making Fermi estimates, and as time goes on they get better and better. But that doesn't mean you aren't missing some piece of key information, so you should always be open to changing your opinion. You can hold an opinion strongly and still change it with updating evidence. If you don't update your opinion to account for new evidence, you are just a bad and stubborn scientist.
Second idea: method for developing good ideas
The second idea is about building up, tearing down, and repeating the process. Teams I've been on have used this method successfully to develop new theories and new products. You don't have to account for everything, but it provides a good baseline. This kind of model development really only happens in the initial stages: you use it to figure out what to test and probe. Before you start a million-dollar experiment, you had better have some good ideas and explanations for why you're doing what you're doing. This is essentially creating a red team and a blue team. You can do it as a group or individually (harder, because you have to accept cognitive dissonance). This adversarial process can be highly successful in producing good conclusions (hell, it's analogous to what a GAN does). The issues come when someone is really stubborn about their conclusions. But the big reason this works is that by the time you're submitting a proposal, you've already answered basically any question anyone can ask of you, because a reviewer SHOULD be trying to find reasons to reject your proposal; you don't want to waste money.
So how this works in the real world is that I develop opinions based on the evidence that I have. I stick to my guns because I didn't form these opinions willy-nilly, even with a lot of self-doubt (focus on the adversarial benefit; it is okay to be wrong). And, the key part, when someone presents new and compelling evidence, you update your model. But it is perfectly acceptable to determine that this evidence is irrelevant or an outlier. I do know this is hard for many people, but it isn't that hard if you just accept the relativity of wrong [1] as a fundamental principle. In my undergrad, studying experimental physics, it is drilled into you to account for the error of your measuring tools. The next logical conclusion is to account for the error from your most important measuring tool: you. If you accept that you aren't perfect and can't have perfect knowledge (i.e. "the map is not the territory"), this is not that hard. But then again, I'm considered weird, so I'm completely open to being wrong.
The article is spot on. As humans it is incredibly difficult to avoid being committed to your decision, a big problem for 'strong opinions, weakly held'. I think Elon's advice is much better and I try to exercise it regularly: "Assume you are wrong. Your goal is to be less wrong."
Strong Opinions, Weakly Held sounds reasonable in theory. In practice, it means always act like you're right until you change your mind, then pretend you held the other view all along. At least that's how I see it used on this site, whenever people praise Steve Jobs, Linus Torvalds, etc., for being dismissive and rude.
Yes, leaders have to make bold choices, and they can't waffle. But that doesn't require you to be rude to others, or to act like you're always right. It doesn't require you to take a confrontational style to discussion, where you assert your beliefs loudly and expect others to fight you on it.
Can we just admit that for some people on this site, technology and entrepreneurship plays into a power fantasy?