I'm not one to give an exaggerated eulogy nor rhapsodize about all those "books with a white cover and a weird picture" -- but I will say I read Thinking, Fast and Slow for the first time last year, after decades of resisting, and felt it covered some genuinely profound ideas that are as relevant as ever and not widely understood.
(Though at some point, maybe around the second half, the book drags on and you can skip most of those chapters. If you don't have time for that, I'm sure ChatGPT can give you a taste of the main premises and you can probe deeper from there.)
It’s worth noting that many of the results in Thinking, Fast and Slow didn’t hold up to replication.
It’s still very much worth reading in its own right, but now implicitly comes bundled with a game I like to call “calibrate yourself on the replication crisis”. Playing is simple: every time the book mentions a surprising result, try to guess whether it replicated. Then search online to see if you got it right.
This is exactly the kind of task that I want to deploy a long context window model on: "rewrite Thinking Fast and Slow taking into account the current state of research. Oh, and do it in the voice, style and structure of Tim Urban complete with crappy stick figure drawings."
Not me. If I'm going to take the time to read something, I want it to have been written, reviewed, and edited by a human. There is far too much high-fidelity information I'm already missing out on to spend time taking in low-fidelity stuff.
Most human authors are frankly far too stupid to be worth reading, even if they do put care into their work.
This, IMO, is the actual biggest problem with LLMs training on whatever the biggest available text corpus is: they don't account for the fact that not all text is equally worthy of next-token-predicting. This problem is completely solvable, almost trivially so, but I haven't seen anyone publicly describe a (scaled, in-production) solution yet.
It hardly matters what sources you are using if you filter them through something that has less understanding than a two-year-old, if any, no matter how eloquently it can express itself.
(My comment was less about my behavior than an attempt to encourage others to evaluate my thinking, in hopes that they may apply it to their own in order to benefit our collective understanding.)
Same! Just earlier today I was wanting to do this with "The Moral Animal" and "Guns, Germs, and Steel."
It's probably the AI thing I'm most excited about, and I suspect we're not far from that, although I'm betting the copyright battles are the primary obstacle to such a future at this point.
The thing with Guns, Germs, and Steel is that it makes it essentially all about geographic determinism. There's another book (Why the West Rules--For Now, written before China had really fully emerged on the stage) which argues that, yes, geography played an important role in which cores emerged earliest. BUT if you look at the sweep of history, the eastern core was arguably more advanced than the western core at various times. So a head start can't be the only answer.
The book specifically considers Eurasia to be one geographical region and it does acknowledge the technological developments in China. The fact that Europe became the winner in this race, according to GGS, is a sign that while geography is important, it does not determine the course of history. It is not all about geographic determinism.
It is a snapshot in time, and so wrong if viewed in a longer context.
People from Europe came to have the Industrial Revolution at just the right moment.
Some small changes in history and it would have happened in India.
It is making a theory to fit the facts.
I do not think the author is a "white supremacist", but the book reads like that: taking all the accidents of history and making them seem like destiny that Europeans rule the world (they do not, they never did, and they are fading from world domination fast).
I enjoyed both GGS and WTWRFN, but in a mode where I basically ignored the thesis, reading instead for the factual information so clearly presented. Like the coverage of the Polynesian diaspora in GGS that has really stuck with me.
Thinking Fast & Slow was a fun read, but I did not retain much more than the basic System I/II concept which I find is a useful device.
It's not even clear that the dual process system 1/system 2 metaphor is accurate at all, so it may not be possible to redeem a book whose thesis is that the dual process model is correct.
It's not just that individual studies have failed to replicate. The whole field was in question at least a decade before the book was written, and since then many of the foundational results have failed to replicate. The book was in a sense the last hurrah of a theory that was on its way out, and the replication crisis administered the coup de grace IMO.
>This is exactly the kind of task that I want to deploy a long context window model on: "rewrite Thinking Fast and Slow taking into account the...
I want something similar but for children's literature. From Ralph and the Motorcycle to Peter Pan, a lot of stuff doesn't hold up.
The books provide plenty of utility. But many things don't hold up to modern thinking. LLMs provide the opportunity to embrace classic content while throwing off the need to improvise as one parses outmoded ideas.
I would actually like to have books that had Thinking, Fast and Slow as a prerequisite. Many data visualization books could be summed up as: a bar chart is easily consumed by System 1, while visual noise creates mental strain on System 2.
It's saying that the author's invented metric may indicate that the studies within each chapter may not replicate. No actual replication studies were done to produce the table in that post.
What's wild to me is that anyone could read chapter 4 and not look up the original papers in disbelief.
Long before the controversy was public I was reading that book and, despite claims that the reader must believe the findings, it sounded like nonsense to me. So I looked up the original paper to see what the experimental setup was, and it was unquestionably a ridiculous conclusion to draw from a deeply flawed experiment.
I still never understood how that chapter got through without anyone else having the same reaction I did.
I had exactly this reaction to Malcolm Gladwell. It is completely obvious that Gladwell, across multiple books, has never once read one of his references and consistently misrepresents what they say.
I have a slightly different take on him, which comes to the same ultimate end on how I view his work.
As he's shifted from primarily a journalist to primarily a storyteller, he's chosen to sacrifice detail and accuracy in favor of telling a consistent and compelling narrative that supports what he thinks is the important thing to take away, not necessarily what you would take away were you to review all the same information yourself.
Under that understanding, I find him fun to listen to. The things he "reports" on/illuminates are interesting, but at this point I don't assume he's giving them an even handed representation, so his conclusions are not necessarily my own, and at best it's a set of things to look into and research myself if I find my interest piqued after a fun or interesting story is told.
But once you fact-check him a few times, you won't be able to trust that his sources support his points anymore, which raises the question: is there ANY evidence that supports what Gladwell is trying to say?
If the best evidence Gladwell could find to support his points doesn't support his points, then it really makes me question the utility of trying to find evidence to support Gladwell's points.
It may be that the reason no subject-matter expert makes the points Gladwell does is that Gladwell's points are either wrong or not even wrong.
The problem is that Gladwell comes off as trying to be scientific, when really he's just a persuasive storyteller. If you view Gladwell as a purveyor of facts or a science communicator, then he does a horrible job and yes, it's hard to trust whatever he presents because he's at best not very careful about his facts and conclusions. In that case, why would you ever pay attention to him?
On the other hand if you see Gladwell for what I think he is, which is an entertaining storyteller using science as a backdrop to present his view of the world, and willing to twist it to suit his view, then he's really no different than many other writers. In that case you can enjoy him or not, at your leisure, just as you would any other editorial or op-ed piece. Entertainment is entertainment, and his content is entertaining even if I don't expect it to be true (to be clear, I'm approaching this as someone that mostly consumes his podcasts, which some chapters of his books have been directly converted to. I wouldn't want to spend the time reading this content that's not true, but I'm happy to listen to it while on a drive).
Whether he's causing more harm than good overall to the public is another question, but honestly that's a much bigger discussion with a lot of much bigger problems than him, so I'm not sure it's worth getting worked up over.
The problem with "take it or leave it, it's just a story" is that stories are not neutral, or cost-free. Memes compete for attention/proliferation/survival, and false-but-appealing memes outcompete true-but-boring memes. It's a mini DDOS attack on our collective bandwidth to be churning out durable falsehoods disguised as scientific insight.
Thank you for this comment - you have succinctly captured something that I have been feeling but unable to express in words.
Stories/memes/narratives are the easiest and most potent form of mental ingestion -- so much so that I think humans cannot ingest facts or ideas at all, only stories. And this puts a collective responsibility on all of us to be very careful about the stories we create.
Not just that, but real policy comes out of these books. Gladwell is highly influential among leaders of our bureaucracies. So when it comes time to look at airline safety policies, Gladwell's nonsense about Koreans being too hierarchical to fly safely can worm its way in there.
Isn't that because the replications only looked at a selected subset of all the possible literature? You can be almost sure that if an article's conclusion hinges on a wide interpretation of the experimental result, or the stimuli haven't been sampled properly (and who knows the distribution of stimuli?), or the subjects are first-year psych students, and the proof is a rejection of the null hypothesis, it cannot be replicated. The worst offenders are those that conclude their theory is true because the theory they argue against is rejected.
No, but propositions with strong counter-evidence generally are, which is the main topic here. "Not replicable" generally means "attempted to replicate, but got results inconsistent with the original conclusion."
That is not my understanding of what “not replicable” means. My understanding is “attempted to replicate, but didn’t get any significant results supporting the original conclusion”. There’s nothing that says that the new results are inconsistent with the original findings, only that they couldn’t find any support for them in a similar study.
And that could be for a number of reasons. Of course, sometimes the results are just wrong, due to statistical flukes or too-creative data cleaning and analysis. Often the results might just be much more limited than what the original study claims: maybe the results of a psychological study are valid for MIT students at the beginning of the semester, before lunch, but not for Yale students in the early afternoon. In this case the only mistake would have been to assume the results were universal.
It is amazing how many smart people have bad intuition on science, misunderstand the null hypothesis, etc. So much for the viability of a scientifically thinking populace; as far as I can tell, it seems not possible to pull off.
A very good point (I'm not sure if it's relevant to the book in question, as I haven't read it, or if you're referring just to the conversation so far). It seems like many people will take a strong claim they are dubious about, and on finding the evidence is sparse, inconclusive, or missing, swing to assuming that statement is false, instead of a more neutral position of "I have no opinion, or some reason to think it is unlikely, but others think it is likely even if it is poorly supported or unsupported."
This tendency seems to be capitalized on fairly heavily in political media by finding some poorly supported assertion of the other side to criticize, which causes people to assume the opposite is true.
Extraordinary claims require extraordinary evidence. Or, in less dramatic terms, if you cannot reject the null, you should operate on the assumption that the null holds.
Thanks for sharing this -- I read the book maybe a decade ago and largely discounted it as non-replicable pop-sci; this changed my opinion of Kahneman's perspective and rigor (for the better!)
It looks like it's a bit more nuanced than that. What I saw from the link was some debate about what holds and what doesn't for various forms of "priming"
Yeah, I wouldn't read too much into any single study. But what I would defend vigorously is the System 1 / System 2 distinction, as something so clear/fundamental that you can see it constantly once you understand it.
Maybe some people do. I have to keep checking which one is system 1 and which is system 2 every time I hear the terms, because they're not self-evident. Intuition/logic is.
It’s also very common in psychology theories, I haven’t read “Thinking, fast and slow” but I imagine there’s more than Kahneman’s own papers cited: https://en.wikipedia.org/wiki/Dual_process_theory
wow, it looks like "dual process" theory is basically the same thing.
I don't know if there's a better text on dual-process theory out there (perhaps by the original authors), but regardless of who originated it, I think it's something worth learning about for everyone (and if you don't have a better source then Thinking Fast and Slow is a very good one).
... except the distinction was being made in various forms long before Kahneman, and does get questioned. When you start to poke at it, what's intuitive starts to seem less so.
His own work held up very well to replication. It's when he is citing the work of other scholars (in particular, that of social psychologists) that doesn't hold up well to replication.
> It’s worth noting that many of the results in Thinking, Fast and Slow didn’t hold up to replication.
The irony is, Kahneman had himself written a paper warning about generalizing from studies with small sample sizes:
"Suppose you have run an experiment on 20 subjects, and have obtained a significant result which confirms your theory (z = 2.23, p < .05, two-tailed). You now have cause to run an additional group of 10 subjects. What do you think the probability is that the results will be significant, by a one-tailed test, separately for this group?"
"Apparently, most psychologists have an exaggerated belief in the likelihood of successfully replicating an obtained finding. The sources of such beliefs, and their consequences for the conduct of scientific inquiry, are what this paper is about."
Then 40 years later, he fell into the same trap. He became one of the "most psychologists".
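For what it's worth, the answer to that question can be worked out in a few lines. Here's a minimal sketch (mine, not from the paper), assuming the simplest possible setup: a one-sample z-test, with the observed effect taken as the true effect.

    from scipy.stats import norm

    # Numbers from the quoted question: z = 2.23 from n = 20,
    # replication with n = 10, judged significant at p < .05, one-tailed.
    z_obs, n_orig, n_rep = 2.23, 20, 10
    d = z_obs / n_orig ** 0.5            # implied standardized effect size
    z_rep = d * n_rep ** 0.5             # expected z in the replication sample
    crit = norm.ppf(0.95)                # one-tailed criterion at p < .05
    power = 1 - norm.cdf(crit - z_rep)   # probability the replication comes out significant
    print(f"P(replication significant) ~= {power:.2f}")   # roughly 0.47 under these assumptions

Under those assumptions the replication is roughly a coin flip, which is the paper's point: the intuition that "it already worked once, so it will work again" badly overestimates the odds.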
This game is doubly valuable when the surprising result confirms one of your existing beliefs. I'm pretty good about doing this for surprising results that contradict a belief I hold, but I have to be extra disciplined about doing it when it confirms one of my beliefs.
I imagine that in 30 years, it will become clear that individual humans display enormous diversity, their diversity increasing as societal norms relax, and their behavior changing as the culture around them change. As such, replication is hopeless and trying to turn "psychology" into a science was a futile endeavor.
That is not to say that psychology cannot be helpful, just that we cannot infer rational conclusions or predictions from it the same way we can from hard sciences.
Self help books are enormously helpful, but they're definitely not science either.
It is a positivistic science, i.e. it studies observable phenomena using the scientific method. The days of Freud and Jung, where you could just smoke cigars and fart out ideas about collective unconsciousness and anal fixation, are long over. Experiments are conducted, confounding variables controlled, and hypotheses (including H0) are tested. Granted, it's not as easy as in physics where you just drop a ball repeatedly and note down the results, but that doesn't in any way make it a "softer" science. To equate psychology with self-help books is akin to equating LLMs with Markov chains.
The general idea is very simple. Tactical vs strategic thinking are two different things and it’s good to be aware of that. I don’t know that that needs to be proven or disproven
The 19th-century definition of tactics as being everything that happens within the range of cannons, and strategy as everything that happens outside of cannon range, fits well with thinking fast (tactics) and slow (strategy).
This is unfortunately the case for many books on human behavior. Sure, Dan Ariely comes to mind, but the field itself is very tricky.
I don't think we - people used to STEM - appreciate how difficult behavioral psychology is. In STEM, we are used to isolating experiments down, so there are as few variables as possible. And we are used to well-designed experiments being reproducible if everyone does what they are supposed to do right.
In the study of human behavior there are always countless uncontrollable variables, every human is a bit different, and it is very difficult to discover something that applies generally. Also, pretty much all of the research is done on Western populations of European descent.
This is why I take all behavioral claims with a large grain of salt, but I still have respect for the researchers doing their best in the field.
I don't agree. I think if you understand science in general, then you realize at an early age (e.g. 20) that social/behavioral science is at best a pseudoscience.
I wonder if it's better to have a lot of small hits or a few big hits and many misses in regard to replication. If the studies which have the greatest implications replicate, then maybe many misses is not that bad.
Unfortunately the reality is that the more interesting and quotable the result is, the less likely it is to replicate. So replication problems most strongly hit things that seem like they should have the greatest implications.
And critically, scientific publications are incentivized likewise to publish the most outlandish claims they can possibly get away with. That same incentive affects individual scientists, who disproportionately submit the most outlandish claims they can possibly get away with. The boring stuff -- true or not -- is not worth the hassle of putting into a publishable article.
And then the most outlandish of these are picked up by popular science writers. Who proceed to mangle it beyond recognition, and add a random spin. This then goes to the general public.
Some believe the resulting garbage. And wind up with weird ideas.
Others use that garbage to justify throwing out everything that scientists say. And then double down on random conspiracy theories, denialism, and pseudoscience.
I wish there was a solution to this. But everyone is actually following their incentives here. :-(
It's so weird that the stuff about priming was supposedly debunked, yet if you look around at what happened to society over the past few years, I've been blown away by how suggestible and controllable people are.
“When I describe priming studies to audiences, the reaction is often disbelief . . . The idea you should focus on, however, is that disbelief is not an option. The results are not made up, nor are they statistical flukes. You have no choice but to accept that the major conclusions of these studies are true.”
psychology isn’t science. it’s a grave mistake to read/interpret it as such. does that mean it’s useless? of course not: some of the findings (and i use findings very loosely) help us adjust our prior probabilities. if we’re right in the end, we were lucky. otherwise we just weren’t.
I think psychology is very successful at categorizing and treating mental illnesses. The DSM is really a monument and, for the most part, holds up very well to scrutiny.
Where psychology is massively failing to replicate is in trying to characterise healthy individuals. Typically the work of Kahneman.
But that's what interests people and sells: pop psychology.
Genuinely curious, how would you scrutinize a categorization tool that includes both causes and effects in its key?
I'm only tangentially following the whole autism/Asperger's/ADD/ADHD development, and I'm growing more and more convinced that all these categories are mostly arbitrary constructs grown out of random history and academia politics. Happy to be proved wrong here, though.
it’s fundamentally unscientific at this point. much of our current science lies in the realm of natural law. so far we haven’t found any laws that govern human behavior. what we know, with considerable certainty, is that behavior can be positively influenced. but at the point of action, nothing we know of compels any specific/predictable behavior. until we have found rigid laws of reason that apply to both the brute and the civilized, any ‘discoveries’ of psychology are reports of someone’s idiosyncrasies, imho.
There's nothing about the scientific method that requires the process to output tidy little laws to be deserving of being called science.
Some fields are quite lucky that the universe is so elegantly organized, but that isn't true for the overwhelming majority of fields with as many degrees of freedom as biological systems and anything more complex.
That doesn't mean we can't conduct experiments that reproduce.
The history of science generally doesn't seem to be characterized by shifts in theory due to empirical disproofs. Usually, when theories are "disproved", we don't want to throw out the baby with the bathwater, but rather, we want to stick to the theory and try to patch it up. When Uranus didn't seem to be moving according to the predictions of Newtonian mechanics (a disproof!), physicists didn't throw out Newton, they posited the existence of another planet. And they turned out to be right, Neptune existed.
See Chalmers' What is This Thing Called Science? for an introduction to these kinds of topics, or Kuhn and Feyerabend's work for historical responses. (And the Duhem-Quine thesis for the "auxiliary hypothesis" response to falsifiability I hinted at with my example.)
> But let’s say you can narrow it down to one good one, and you can find the time to read it. You plunk down an absurd $30 (of which, I’m told, less than $3 goes to the author) for a bulky hardcover and you quickly discover that the author doesn’t have all that much to say. But a book is a big thing, and they had to fill it all up, so the author padded it. There are several common techniques.
> One is to repeat your point over and over, each time slightly differently. This is surprisingly popular. Writing a book on how code is law, an idea so simple it can fit in the book’s title? Just give example after example after example.
> Another is to just fill the book with unnecessary detail. Arguing that the Bush administration is incompetent? Fill your book up with citation after citation. (Readers must love being hit over the head with evidence for a claim they’re already willing to believe.)
> I have nothing against completeness, accuracy, or preciseness, but if you really want a broad audience to hear what you have to say, you’ve got to be short. Put the details, for the ten people who care about them, on your website. Then take the three pages you have left, and put them on your website too.
Because most nonfiction books are one relatively small set of ideas (profound or not, novel or not) that could be concisely written as a few blog posts or a single long-form article, but in order to monetize and build the author’s brand, get inflated into a full book. It is really painful, and something I hope GPT will help the rest of us cut through in the future (“summarize the points. Dig into that point. What evidence is given?” etc., as a conversation rather than wasting 30 hours reading noise for a book).
Well, the life of most people on earth is just the same boring repetition with few novel events. So I wonder what people would do with the ample amount of time saved thanks to ChatGPT. Perhaps write another JavaScript framework or launch new food delivery apps, besides raging on social media.
Most people don’t absorb concepts immediately with only a simple explanation.
Unless you are reading a topic you are already familiar with, reinforcement of an idea helps you to examine a concept from different angles and to solidify what is being discussed.
If everyone fully absorbed and understood everything they read, schooling could be completed years in advance.
Apart from CYOA, I think a hypertext "book" would be the most effective way to dynamically scale detail preferences without resorting to skimming or skipping.
Some people want more or less evidence based on their level of skepticism or critical thinking, while others want more evidence to reinforce the soundness of their inferred position on a topic, especially if it's a topic unfamiliar to them. Other people are under time constraints and just want the key points and a brief presentation.
That's exacerbating the original environmental problem: in addition to thick paper books filled with filler material just to promote the author's brand, you now want to waste electricity on running an LLM that will give you the short version? That's... short-sighted.
This should be dealt with by pressuring the publishing industry not to inflate books and fill them with fluff. This could be done by not buying these kinds of books, and publicly shaming publishers who engage in this behavior. It's easier for nonfiction books, since the amount of fluff in fiction books is a more subjective matter.
They do it because a short book looks like a pamphlet and nobody will buy it. Most Gladwell books could easily be 30 pages, but nobody will pay $14.99 for that.
You can't shame them into buying books they can't sell.
How much electricity does it take an LLM to summarize a book? I'm sure the carbon emissions involved are trivial, and if they aren't, I've always been of the belief that (like eating meat) people are going to do what they want to do regardless of the environmental cost, so it's better to focus your ire on reducing the environmental cost. The problem here isn't using an LLM to summarize a book, it's that we've got a power grid fueled mainly by fossil fuels. (That is a problem that will fix itself in no time anyway now that renewables are cheaper and the gap is widening.)
> They do it because a short book looks like a pamphlet and nobody will buy it. Most Gladwell books could easily be 30 pages, but nobody will pay $14.99 for that.
This applies equally to Sci-Fi and fantasy doorstopper novels. At least those have interesting filler—sometimes even better than the main story.
Except that a lot of people read these books for the entertainment value of the anecdotes, and a lot of people enjoy feeling self-important for having read long books.
Not just self-important, but the anecdotes are designed to be bite-sized and easily retellable, like the 2016 NYT Abraham Wald / "beware survivorship bias" piece on graphing bullet holes on WWII airplanes, viz. which locations were more damaging.
So you get a decade of "business nonfiction" bookstand fodder, which is claims with high shock value but often not replicable, mixed with anecdotes, to signal the reader's self-declared learnedness.
Combine this with the long-tailed nature of the statistic "the average American only reads one book a year" and it's frightening. I think Scott Galloway said sales of his e-book are >10x more than his paper book.
And during Covid lots of people got exposed for having fake bookshelf backdrops.
There was one Covid Zoom podcast (about economics?) with a middle-aged man sitting in front of a large set of bookshelves, housing row-upon-row of gnarly old books. The only recently-used book was lying flat on top of the other books.
You can just leverage the “second brain” crowd — for every vaguely well-known non-fiction book someone has written up a summary for themselves and posted it on their blog.
Well there's a reason for that - consider it a form of spaced-repetition learning.
The author's goal is to convey an idea to the reader. He breaks it up into small overlapping chunks and gradually doles out these small overlapping chunks over the course of the book, sometimes backtracking and repeating an idea with a different example, all accompanied by a compelling narrative.
If he does his job well then the reader doesn't notice that spaced-repetition learning is happening because the supporting examples are entertaining enough to continue reading. In the worst case, the author gets the exact criticism that you are leveling.
Honestly, if you had Mathematics books written like Thinking, Fast and Slow or Freakonomics you'd have a lot more students passing calculus.[1]
So, here's a challenge for you - in your area of expertise (whatever that is), write down the chapters of a hypothetical book you would write to explain one or two foundational principles to an outsider (to that area). It's pretty hard to do. Then compare with best-selling non-fiction aimed at outsiders like Freakonomics, etc.
I did this (chapter overview thing) and realised pretty quickly that I had planned a really boring book.
[1] I read Thinking, Fast and Slow around 2011, and I read The Mom Test last year. Almost all of the sub-themes of the former are still in my memory. The only thing I remember of The Mom Test is that people will lie to you to protect your feelings.
Reading a book, say 10 hours, is like a meditation on an idea: you get numerous examples of it and a variety of ways of thinking about it.
Our brains learn best when they encounter something often across time (spaced repetition).
Reading a single tweet may summarize the book, but the chances of you recalling the idea in an appropriate situation are much lower than if you had spent hours on it.
> 3rd third of the book: ??? I usually give up at this point
History of science books thankfully stave off that final third until at least 80%. However, their final chapter or two universally manages to be a letdown. It's either wild optimistic speculation, hype for a theory that's debunked 5 years after publication, or a focus that accidentally happened to predict the course of science post-publication. The story is told in a tonally jarring manner compared to the tight narrative in the rest of the book.
My #1 suspect for this disease is a desire to connect the content of the book to real life. Such attempts miss more often than they drive the point home, even if they're factually correct.
The last 20% shifts in tone and goes into speculating on a general framework for genetic selection outside of biological systems. Very speculative, interesting, and birthed the term meme
Perhaps nonfiction works could be ordered neatly such that the thesis and support material isn't buried, but arranged in a tiered detailed manner conducive to a "Choose Your Own Adventure"-like skipping of material. Short-form and long-form must find a way to coexist and be useful without critical compromise. It's not like everyone has or should need Cliff Notes or getAbstract.
I know Daniel Kahneman only through reading him. Like you, I found Thinking Fast & Slow incredibly useful, first reading it perhaps 10 years ago. Definitely 2014, and I can't believe that's 10 years ago.
I must admit this headline shocked me for the simple reason that... I straight up had no idea that he was so old.
Thank you, Daniel, for the way you've influenced my (and our) thinking in ways that are still impacting us today, both in work and in the rest of life. Rest in peace.
To me the general idea felt as trite and blindingly obvious as any book by Taleb or Gladwell, and about as self-congratulatory. The only thing that surprised me was that he picked the singularly unmemorable names "System 1" and "System 2", instead of something like "intuition" and "reason".
> it covered some generally profound ideas that still are relevant as ever and not widely understood
I've tried to read this book over and over again to understand what everyone is talking about, but never found the insights that useful in practice. Like, what have you been able to apply these insights to? What good is it to know that we have a slow mode of thinking and a fast one? Genuine question.
When to trust your instincts/intuition (e.g. when few facts are known, there are no critical central deciding factors, but it's important to make a decision and move forward) and when to stop trusting your instincts and reflect a little (e.g. someone is trying to rush you into making a buying decision).
When it's likely that you're biased, and how to work around that (highly related to the above; e.g. don't make critical decisions when you're sleep-deprived).
How you can take advantage of other people lacking this ability (e.g. in sales processes).
> If you don't have time for that, I'm sure chat GPT can give you a taste of the main premises and you can probe deeper from there.
From GPT-4:
The Two Systems: Kahneman introduces the concept of two distinct systems that govern our thoughts. System 1 operates automatically and quickly, with little or no effort and no sense of voluntary control. System 2 allocates attention to the effortful mental activities that demand it, including complex computations. The operations of System 2 are often associated with the subjective experience of agency, choice, and concentration.
Heuristics and Biases: The book explains how the fast, intuitive thinking of System 1 leads to the use of heuristics—a kind of mental shortcut that facilitates quick judgments but can also lead to systematic biases and errors in thinking. Kahneman discusses several of these biases, such as the availability heuristic, where people judge the probability of events by how easily examples come to mind, and the anchoring effect, where people rely too heavily on the first piece of information they encounter.
Overconfidence: One of the themes of the book is the confidence people place in their own beliefs and judgments. Kahneman shows that people tend to overestimate their knowledge and their ability to predict outcomes, leading to a greater confidence in their judgments than is warranted. This overconfidence can contribute to risky decision-making and failure to consider alternative viewpoints.
Prospect Theory: Kahneman, along with Amos Tversky, developed Prospect Theory, which challenges the classical economic theory that humans are rational actors who always make decisions in their best interest. Prospect Theory suggests that people value gains and losses differently, leading to decisions that can seem illogical or irrational. It highlights the asymmetry between the perception of gains and losses, where losses are felt more acutely than gains are enjoyed.
Happiness and Well-being: The book also delves into the determinants of happiness and well-being, distinguishing between the experiencing self (which lives in the present) and the remembering self (which keeps score and makes decisions). Kahneman explores how our happiness is influenced more by how life events are remembered than by the actual experience. This leads to some counterintuitive findings, such as people being happier with experiences that end on a high note, regardless of the overall quality or duration of the experience.
I have a physical copy and only read about 2 chapters, after having it for years. I need to take it on a flight to finally get around to reading it. It seems like one of those books you have to own and read at least once.
I guess I'm one of the rare folks who started reading TFAS and left without any big revelations or takeaways. I got bored half way through and stopped. Shrug...
It certainly has interesting ideas in small doses but knowing about your heuristics and biases doesn't mean you still don't fall victim to them.
The real issue I have with the book is that the audiobook is read in such a boring way by Patrick Egan. I would love to hear a different voice actor take a shot at it, as it is not the most exciting book on its own anyway. Read in such a boring way, it reminds me of the worst of the college classes I took.
I also don't think it is the type of book that was ever meant to gain the popularity it has. It isn't even that easy to find Judgment Under Uncertainty by clicking on Kahneman's name on Amazon. Kahneman and Tversky wrote that 30 years before Thinking Fast and Slow and it is a great book.
Because of its popularity I would say Thinking, Fast and Slow might be the most overrated book I can think of.
A big reason for that, I would say, is that it got the greatest marketing of any book I can think of by Random House, given the subject and how dry the book is.
Assuming this is a genuine question and not just intellectual posturing, the obvious answer, if you've ever spoken to... most people... is that a lot of the (admittedly somewhat simple) concepts in the book are, for whatever reason, not part of the general public's psyche.
Kahneman's book was praised a lot, but I found it more questionable than useful.
It has its good parts, like elaborating on System 1 and System 2, but my favorite concept was regression to the mean. It might be obvious in some cases, but the book made me realize that it applies nearly everywhere.
The bad parts include priming (e.g. the Florida effect), which, as others mentioned, could not be replicated. He sometimes praises himself for even trivial observations. But my biggest gripe is that he dismisses Bernoulli's hypothesis in favor of his loss aversion (I still think humans apply a mix of both), while also framing loss aversion as irrational. That is, humans should always just maximize the expected outcome (in terms of money). The reasoning is that during life we will encounter a continuous stream of decisions, and maximizing the expected value in each decision will (according to the law of large numbers) maximize the overall income.
It's not (always) irrational. Imagine you have a million dollars. Someone offers you a gamble on a fair coin flip between gaining 2 million dollars and paying one million dollars. With a million USD in your bank account you had a quite comfortable life, and it could get more comfortable with 3 million in your account. But if you lose, you are ruined. According to Kahneman you should take that gamble. Also consider that before the invention of money, those decisions were typically whether to hunt that mammoth or something less aggressive.
The German version of "Who Wants to Be a Millionaire" has a particularity: your win jumps from €125.000 to €500.000 at the 14th question (a consequence of the conversion from Deutsche Mark to Euros). Assume you have no idea what the answer is. According to Kahneman you should always pick one at random. If you pick right you get €500.000. If you pick wrong you will still win €32.000, or €500 if you took the 4th lifeline like most contestants do. This makes an expected win of 3/4 · €500 + 1/4 · €500.000 = €125.375, compared to €125.000 when you don't answer. Would you do it?
Some questions along these lines are not so easy. For example, iterated versions that take bankruptcy into account (game over) by using geometric mean instead of arithmetic mean; but that might not always be the optimal strategy [https://www.pnas.org/doi/10.1073/pnas.68.10.2493].
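To make the two examples above concrete, here's a small sketch (purely illustrative, not from the comments) that computes the quiz-show expectation and contrasts the arithmetic-mean view of the coin-flip gamble with a geometric-mean view that treats ruin as game over.

    import random
    import statistics

    # Quiz-show example: guess at random on question 14, fall back to EUR 500
    # if wrong (lifeline already used), vs. walking away with EUR 125,000.
    guess = 0.25 * 500_000 + 0.75 * 500          # = 125,375
    print(f"guess: {guess:,.0f} vs walk away: 125,000")

    # Coin-flip example: start with EUR 1M; a fair flip either adds 2M or costs 1M.
    end_wealth = [3_000_000, 0]                  # possible outcomes after one flip
    print("arithmetic mean:", statistics.mean(end_wealth))            # 1.5M -> "take the bet"
    print("geometric mean:", (end_wealth[0] * end_wealth[1]) ** 0.5)  # 0 -> ruin dominates

    # Repeated play: each round is +2M or -1M on a fair flip, game over at 0.
    # Positive expected value per round, yet a large share of runs end ruined.
    def play(rounds: int = 20) -> float:
        wealth = 1_000_000.0
        for _ in range(rounds):
            if wealth <= 0:
                return 0.0
            wealth += 2_000_000 if random.random() < 0.5 else -1_000_000
        return wealth

    random.seed(0)
    ruined = sum(play() == 0.0 for _ in range(10_000))
    print(f"ruined in {ruined / 100:.1f}% of simulated 20-round runs")

Maximizing the arithmetic mean says take the bet every time; the geometric-mean view refuses it, which is the trade-off the linked PNAS paper is about.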
More meta is that, when people in aggregate are stressed and not all that wise or informed, they tend to look for convenience and expediency rather than effort and mastery. Unfortunately, this can also happen when people are apathetic or not stressed and slack off on reasonable skepticism or fail to dig into the details.
Leadership, pride, excellence, empathy, and fairness must not fall into the decay of jingoist buzzwords; they must remain values backed by intent and unwavering action.
The greatest danger is dishonesty: when words stop having ordinary meanings, when people stop talking to each other, or when there's a lack of agreement on the obvious intersection of a shared reality.
No, the less cynical take is it says "the second half of the book drags on and you can skip most of those chapters." Which has been generically true of much business nonfiction for decades now. How people choose to apply that knowledge predates AI: skipping it, reading any of the summary reviews available, listening to the audiobook while in gym/car/shopping, asking a friend, dipping into it selectively, etc. There are lots of low-tech alternatives.
The sea temperature is already literally off the charts and we do not know why but let's just add a couple more countries worth of energy waste in datacenters producing nothing, what could possibly go wrong.
Although many of the results in Fast and Slow didn't hold up, Kahneman was always refreshingly open and honest about that, and keen to identify the limits of knowledge.
Which surely is one of the best things you can say about a scientist.
I was fortunate enough to take a cognitive psych grad seminar from him in the 90s, co-taught by his wife, Anne Treisman. He always seemed given to thinking a little more deeply in the moment than most people do.
One half-joking comment he made about science in the real world vs some idealized notion of it has always stuck with me. In a discussion about whether the results of some paper conflicted with some model or theory of cognition, he mused that scientific progress in psychology (and other non-hard sciences) was really about embarrassing rivals with competing models. No high-level model was ever stated precisely enough to rule out some particular finding; you could always tweak your theory a little to accommodate it. It's just that at some point, you might be too embarrassed to do so.
I did a Cog Sci bachelor's and this is the conclusion I came to when I finished my studies. A lot of Thinking, Fast and Slow is a summary of research done in the field over decades, in particular on biases and on intuition as internalized knowledge/expertise. Sure, you can complain about replication issues, but this is the best model of the mind that I know of.
> He always seemed given to thinking a little more deeply in the moment than most people do.
"As soon as you present a problem to me, I have some ready-made answer. Those ready-made answers get in the way of clear thinking, and we can’t help but have them." – Daniel Kahneman
Doesn't science require replication? He wrote books based on un-replicated studies.
Further, the confidence he exuded about his now-debunked ideas makes him a charlatan. This person was a bad scientist. If we esteem people who don't check their data and influence millions of people with falsities, we are going to create a society with low trust.
Just look at this thread: the man lost respect among the people in the know. There are a few people clinging to "well, just because it's not true doesn't mean I didn't find it interesting". I'm not sure what we get out of promoting anti-science scientists.
I’m in the know, and the replication crisis actually boosted my confidence in him, because it wiped out half the field while merely discrediting a few of the chapters and studies in Thinking, Fast and Slow, and most of what was discredited he had cited from other researchers.
I did see a lot of charlatans in this thread fail to appreciate the broader context of the replication crisis, and fail to appreciate how unscathed Kahneman was by it, because he was being careful when his peers were not, long before people started judging him with the wisdom of perfect hindsight. Of course, if they wrote such a book they would only express their ideas with timidity and never make a mistake.
I read his book alongside a guide as to what in his book could be ignored. I knew every damning word people said about the man before I read a word he said and left impressed.
>Doesn't science require replication? He wrote books based on un-replicated studies.
People even publish studies on un-replicated research! There might be a lot to be said about his research, but I disagree that publishing your research is the worst thing you can do. Maybe there wouldn't be any replication of his studies if it hadn't been for his books.
Can you be specific what ideas of his aren't scientific?
It's true that science requires replication, but he deals with models, and perhaps uses bad studies to support them.
It's like saying he should replicate the theory of evolution.
> "Table 1 shows the number of results that were available and the R-Index for chapters that mentioned empirical results. The chapters vary dramatically in terms of the number of studies that are presented (Table 1). The number of results ranges from 2 for chapters 14 and 16 to 55 for Chapter 5. For small sets of studies, the R-Index may not be very reliable, but it is all we have unless we do a careful analysis of each effect and replication studies.
> Chapter 4 is the priming chapter that we carefully analyzed (Schimmack, Heene, & Kesavan, 2017). Table 1 shows that Chapter 4 is the worst chapter with an R-Index of 19. An R-Index below 50 implies that there is a less than 50% chance that a result will replicate. Tversky and Kahneman (1971) themselves warned against studies that provide so little evidence for a hypothesis. A 50% probability of answering multiple choice questions correctly is also used to fail students. So, we decided to give chapters with an R-Index below 50 a failing grade. Other chapters with failing grades are Chapter 3, 6, 7, 11, 14, 16. Chapter 24 has the highest score (80, which is an A- in the Canadian grading scheme), but there are only 8 results.
Which is to say in other words, most of the book probably replicates, particularly the parts based on Kahneman’s own work, and for the parts that don’t you can just skip the chapters or take them with a grain of salt.
Kahneman to me always struck me as the one eyed king of the replication crisis. Yes he fucked up but he fucked up notably less than his contemporaries and most of his work is still readable.
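For anyone wondering how that number is produced, here's a rough sketch of the R-Index as I understand Schimmack's description: estimate the observed power of each reported result, then penalize a success rate that runs higher than that power (a hint of selective reporting). The z-values below are invented and are not the data behind Table 1.

    from statistics import median
    from scipy.stats import norm

    CRIT = norm.ppf(0.975)   # two-tailed significance criterion at alpha = .05

    def observed_power(z: float) -> float:
        """Post-hoc power of a two-tailed z-test, taking the observed z as the true effect."""
        return 1 - norm.cdf(CRIT - abs(z)) + norm.cdf(-CRIT - abs(z))

    def r_index(z_values: list[float]) -> float:
        powers = [observed_power(z) for z in z_values]
        median_power = median(powers)
        success_rate = sum(abs(z) > CRIT for z in z_values) / len(z_values)
        inflation = success_rate - median_power
        return median_power - inflation    # i.e. 2 * median_power - success_rate

    # Invented example: a chapter citing five barely-significant results
    # gets a low, "failing grade" R-Index.
    print(round(100 * r_index([2.0, 2.1, 2.2, 2.3, 2.5])))

A set of studies that all "worked" but were individually underpowered scores badly, which is exactly the pattern Kahneman concedes in his reply quoted further down the thread.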
Without a copy of the book, I don't remember which parts were based on Kahneman’s own work, and I don't see that we can/should just skip the other chapters.
R-index guys said [0]: "Table 1 shows the number of results that were available and the R-Index for chapters that mentioned empirical results."
Chapters where estimated R-index < 50: Ch 3,4,6,7,11,14,16
Chapters where estimated R-index > 50: Ch 5,8,9,12,17,24
Chapters that don't cite empirical results (by Kahneman, or who?): 1,2,10,13,15,18-23, all of 25-38
As to the chapters that had empirical results, and had an estimated R-index > 50: scores of 55, 57, 60, 62 are really scraping by; saying that means they "probably replicate" is setting the bar really low, even quoting Tversky and Kahneman (1971) back at themselves. (The R-index guys say "Even some of the studies with a high R-Index seem questionable with the hindsight of 2020.")
As to whether he was the one-eyed king of the replication crisis, he certainly started speaking out in 2012 [3] after the social priming scandal broke; did insiders have suspicions about non-replicability before that and should people have pushed back more, earlier? The fallout from the Francesca Gino and Ariely scandals continues.
That's a fair point. Frankly, I don't know what to say. Should we only promote studies that have been replicated? My first thought tells me that the answer should be "yes". At the same time, that would mean we would never talk about certain studies, because I don't think we can reach a 100% degree of replication.
The justice system works as expected if thieves get caught stealing, I'd still be pretty embarrassed if I was the thief. Science may still sort of work, a lot of scientists sure don't.
It's a pretty sad state of affairs if the best that can be said about a person with his reputation, and about the process overall, that the system managed to catch his low quality output decades down the line.
Careful there. By saying "many of the results", you're hand-waving away most of the book. The book covers several decades of his collaborative research with Amos Tversky.
Most of the "underpowered studies" are in the priming-related chapter, called "The Associative Machine". The rest of the book is still worth a careful read.
(I had to make this same correction here several years ago. I didn't look up the comment to link it here.)
I remember years ago Penn Jillette talking about the book "Thinking, Fast and Slow". And I was like, why is a magician talking about a book written by an economist?? Well, read it and you'll understand why it fits so well with their brand of magic. Dr. Kahneman expresses in words what's going on in your brain while watching someone like them perform.
There's a school of thought which holds that economics is a subset of psychology.
I'd thought that this was reflected in some university departmental organisation, with M.I.T. being the one that came to mind. Despite there being a behavioural economics section there, though, so far as I'm aware Economics remains its own department.
Kahneman's training and primary focus were both in psychology, but he was awarded the somewhat problematic Nobel Memorial Prize in Economic Sciences. Multi-discipliniarity is in fact A Thing.
Princeton bio:
Daniel Kahneman is Professor of Psychology and Public Affairs Emeritus at the Princeton School of Public and International Affairs.... He has been the recipient of many awards, among them the Distinguished Scientific Contribution Award of the American Psychological Association (1982) and the Grawemeyer Prize (2002), both jointly with Amos Tversky, the Warren Medal of the Society of Experimental Psychologists (1995), the Hilgard Award for Career Contributions to General Psychology (1995), the Nobel Prize in Economic Sciences (2002), the Lifetime Contribution Award of the American Psychological Association (2007), and the Presidential Medal of Freedom (2013).
To the multi-disciplinary point above, I think Kahneman's economic work is largely categorized as behavioral economics. That's not divorced from group dynamics, as it tries to understand how social preferences, social utility, and other psychological factors shape strategic decision-making behavior.
There's also the fact that the economics profession, and the Economics Nobel committee, have both been taking pains to include a broader set of dynamics under their scope. Among the more notable economic polymaths prior to Kahneman was Herbert A. Simon, possibly better known to the HN crowd for his pioneering work in AI and computer science, though he also worked in economics and, again, psychology. He was awarded the 1978 Nobel in economics for his work on decision-making in the context of bounded rationality.
I may be misremembering here (or perhaps projecting), but I vaguely remember the issue being that economics tried to portray itself as too much of a "hard science" for Kahneman's liking.
The counterargument to the proposition in my earlier post would be to show that there is economic activity which relies on nonhuman behaviours. Automated financial trading or AI-based management systems (financial, corporate, industrial, governmental, etc.) might be possible exceptions, which raises further interesting questions.
Economics since the mid-1900s hasn't been solely focused on mercantilism, but rather has had a significant focus on choice under various conditions, assumptions, and constraints. Game theory, mechanism design, contract and auction design, and the focus on individual versus collective behavior (e.g. Arrow's Impossibility Theorem) have strong overlap with psychology.
Though there is certainly daylight: my subfields of Industrial Organization and Computational Economics are way more related to quantitative finance, ML, and similar than to voting behavior.
The thrust of my pithy question was whether or not the principal subjects of economics --- human production, consumption, and exchange --- were the sum total of all psychological inquiry.
As the parent post tried to substantiate, for current economics the principal subjects really are not limited to human production, consumption, and exchange, but also include subfields that focus on all choices that humans make. Things like revealed preferences vs. expressed preferences or risk aversion are a focus of economics, but they apply just as well to other choices people make, e.g. about intimate relations or choice of hobbies or music, as they do to consumption.
Yes, and I'd still argue that psychology's span of breadth is more encompassing. Note that I both studied economics at uni and have spent considerable time since exploring both the history and the current branches of the field. I'm rather familiar with it, and my criticisms are based on that familiarity.
The relevant questions are:
1. Which field of study encompasses more elements, economics or psychology?
2. Are these fields intersecting, and if so, is one a proper subset of the other or not?
In a Venn diagram sense, how do the sets relate?
Psychology includes numerous branches, fields, divisions, and foci, of which choice determination is only a very small element. See for example:
Note that psychology, economics, sociology, political science, and anthropology all emerged out of what had previously been moral philosophy, largely during the 19th century. Divisions and focus are somewhat arbitrary and strongly influenced by institutions and other pressures --- in the case of economics, political policy influences are notoriously strong, though other disciplines aren't immune from the same. But if I had to matryoshka the two, I'd nest econ within psych.
Kahneman was such a fascinating personality.
Other than "Thinking fast and slow", I highly recommend "The Undoing Project" by Michael Lewis about Kahneman and Tversky's incredible journey changing the standard economic theory.
Nudge is based on a significant amount of messy science and smuggled policy preferences. It has also not really borne out via the policy initiatives started based on its concepts.
Does behavioral economics offer an alternative theory with useful predictions? Like, do we have an option pricing model that holds up better than models coming out of standard theory?
Kahneman's reply was buried deep inside the comment section. Reproduced below for others' convenience:
>>
I [Kahneman] accept the basic conclusions of this blog. To be clear, I do so (1) without expressing an opinion about the statistical techniques it employed and (2) without stating an opinion about the validity and replicability of the individual studies I cited.
What the blog gets absolutely right is that I placed too much faith in underpowered studies. As pointed out in the blog, and earlier by Andrew Gelman, there is a special irony in my mistake because the first paper that Amos Tversky and I published was about the belief in the “law of small numbers,” which allows researchers to trust the results of underpowered studies with unreasonably small samples. We also cited Overall (1969) for showing “that the prevalence of studies deficient in statistical power is not only wasteful but actually pernicious: it results in a large proportion of invalid rejections of the null hypothesis among published results.” Our article was written in 1969 and published in 1971, but I failed to internalize its message.
My position when I wrote “Thinking, Fast and Slow” was that if a large body of evidence published in reputable journals supports an initially implausible conclusion, then scientific norms require us to believe that conclusion. Implausibility is not sufficient to justify disbelief, and belief in well-supported scientific conclusions is not optional. This position still seems reasonable to me – it is why I think people should believe in climate change. But the argument only holds when all relevant results are published.
I knew, of course, that the results of priming studies were based on small samples, that the effect sizes were perhaps implausibly large, and that no single study was conclusive on its own. What impressed me was the unanimity and coherence of the results reported by many laboratories. I concluded that priming effects are easy for skilled experimenters to induce, and that they are robust. However, I now understand that my reasoning was flawed and that I should have known better. Unanimity of underpowered studies provides compelling evidence for the existence of a severe file-drawer problem (and/or p-hacking). The argument is inescapable: Studies that are underpowered for the detection of plausible effects must occasionally return non-significant results even when the research hypothesis is true – the absence of these results is evidence that something is amiss in the published record. Furthermore, the existence of a substantial file-drawer effect undermines the two main tools that psychologists use to accumulate evidence for a broad hypotheses: meta-analysis and conceptual replication. Clearly, the experimental evidence for the ideas I presented in that chapter was significantly weaker than I believed when I wrote it. This was simply an error: I knew all I needed to know to moderate my enthusiasm for the surprising and elegant findings that I cited, but I did not think it through. When questions were later raised about the robustness of priming results I hoped that the authors of this research would rally to bolster their case by stronger evidence, but this did not happen.
I still believe that actions can be primed, sometimes even by stimuli of which the person is unaware. There is adequate evidence for all the building blocks: semantic priming, significant processing of stimuli that are not consciously perceived, and ideo-motor activation. I see no reason to draw a sharp line between the priming of thoughts and the priming of actions. A case can therefore be made for priming on this indirect evidence. But I have changed my views about the size of behavioral priming effects – they cannot be as large and as robust as my chapter suggested.
I am still attached to every study that I cited, and have not unbelieved them, to use Daniel Gilbert’s phrase. I would be happy to see each of them replicated in a large sample. The lesson I have learned, however, is that authors who review a field should be wary of using memorable results of underpowered studies as evidence for their claims.
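To see why unanimity of underpowered studies is itself suspicious, here's a back-of-the-envelope sketch in Python (the power figure and study count are made-up numbers for illustration, not figures from the chapter):

    # If each of 12 independent studies had only 50% power, the chance that
    # every single one reaches significance is vanishingly small.
    power = 0.5            # assumed per-study power (generous for small-N priming work)
    n_studies = 12         # assumed number of unanimously positive published studies
    p_all_significant = power ** n_studies
    print(p_all_significant)   # ~0.0002 -- a clean sweep hints at a file drawer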
Fantastic book - I certainly had to read it slowly, not fast.
Two things bothered me about it though - firstly, it landed shortly before the reproducibility issues of such research became more widely known.
Secondly - towards the end of the book, it espouses the idea that using some methods of psychological and behavioural manipulation is at worst a net neutral, especially if there was nothing to see of the manipulation in question. After all, who can argue against organ donation being opt-out by default, or similar?
To me, this is like a magician claiming that there was no sleight of hand, as we were free to look wherever we liked during their performance. Denying the presence and capabilities of tools of manipulation is, in my opinion, incredibly dangerous, and the worst of its outcomes has been very publicly played out in recent years.
I think the argument is that whether or not we admit we are having these effects and take responsibility for them, we are having them.
I personally find that telling people exactly what I intend to do makes it more effective rather than less. But in a field where we can change people's behavior by making a button orange instead of blue or presenting a form in one page vs three, I find it impossible to pretend that one of those is a neutral choice.
Instead, I focus on what it is we are maximizing for, and how people feel about the experience. I push my companies to choose patterns that help people feel secure & in control, leading to predictable outcomes that align with what they actually expressed wanting. It means we are collaborating with our users, even though we could have used those same techniques to make them feel more anxious, spend more money than they intended, or buy things they didn't actually need.
Kahneman's impact on economics can't be overstated. The subject was becoming a fairly absurd and dogmatically prescriptivist practice before Kahneman stirred it up along with a relatively small number of colleagues.
To a large extent, it's still dogmatic and prescriptivist, but unorthodox opinions (not just limited to behavioral economics) are more accepted & considered following Kahneman's input.
> The subject was becoming a fairly absurd and dogmatically prescriptivist practice
That was when I was studying it!
So much rubbish: "economics is a science because it uses maths" is one favourite of mine.
I did over a decade of study on economics and finance and nobody, even once, mentioned Karl Marx, arguably the most influential economist in the last two hundred years.
It was very prone to fetishes. "Price mechanism" was one I recall. Every problem in society had to be shoehorned into a market so the "price mechanism" would get to work.
Thinking Fast and Slow made a tremendous impact on me when I read it (multiple times) in the 2010s. What curiosity and what clarity of thought this man had. His influence will continue to be felt!
The idea of system 1 and system 2 had a profound impact on me. While specific conclusions in the book were reported to be based on low quality data, it doesn't take away from the fact that it gave me a new mental lens to look at things and understand people's behaviour.
If you’ve only read Thinking Fast And Slow, try grabbing a copy of his 2021 book Noise. It’s a little drier but I found it to be a much deeper and more insightful read. Less pop sci, more hard research results.
And if I recall correctly he addresses the replication issues from Thinking Fast And Slow and discusses more recent research that disproves or adds nuance to the older studies. I think it’s also more practically useful and applicable to everyday life. Where TFS gives you a “these are interesting facts about life” vibe, Noise is more “here’s the problem and this is what you can do about it” style.
Feel like I'm the only one who couldn't get through Thinking, Fast and Slow. Felt like a rambling slog, with most of the interesting bits being something that was very common sense to me.
When did you read the book? It seems to me that the ideas within the book now pervade culture in ways that they didn't when it first came out. If you read it recently, that could contribute to your feeling of it being common sense. Also, just because an idea feels like common sense now doesn't mean it was. Evolution is common sense now, but it was a revolutionary way to think about species when On the Origin of Species came out.
I had the same reaction tbh. I managed to get to the end, but it was a slow, tedious slog. And maybe it's because I'd already read a lot of other pop-psych books, but I barely felt like I learned anything new from it.
That aside, I don't doubt that Kahneman was a brilliant mind, and I'm saddened by his passing. RIP.
"It must have been late 1941 or early 1942. Jews were required to wear the Star of David and to obey a 6 p.m. curfew. I had gone to play with a Christian friend and had stayed too late. I turned my brown sweater inside out to walk the few blocks home. As I was walking down an empty street, I saw a German soldier approaching. He was wearing the black uniform that I had been told to fear more than others – the one worn by specially recruited SS soldiers. As I came closer to him, trying to walk fast, I noticed that he was looking at me intently. Then he beckoned me over, picked me up, and hugged me. I was terrified that he would notice the star inside my sweater. He was speaking to me with great emotion, in German. When he put me down, he opened his wallet, showed me a picture of a boy, and gave me some money. I went home more certain than ever that my mother was right: people were endlessly complicated and interesting."
I'd second every recommendation in this thread for "Thinking Fast And Slow" - it's one of those books that gives you a concept that has such immediate salience that it feels like it unlocks some part of reality you didn't see before but is totally obvious in retrospect.
One of the few other books that's changed my thinking about my thinking in similar ways is Annie Murphy Paul's "The Extended Mind" - https://bookshop.org/p/books/the-extended-mind-the-power-of-... . It's hard to put anything at the level of Thinking Fast and Slow, but it felt like reading a sequel to that book.
For me, among other things, Thinking, Fast and Slow cracked the implicit notion that my intuition is always right by demonstrating how easily it could go wrong.
> some methods of psychological and behavioural manipulation
I think you may be objecting to the idea of manipulation here rather than his point. Influence is not necessarily bad: if a dentist notices that a poster causes his patients to floss more, shouldn't he keep it up?
Suggesting all manipulation is bad implies we shouldn't do public health education etc if it happens to be effective.
What the eye doesn't see the heart doesn't grieve, as they used to say in the pie factory where I worked. But the belly knows. Yes, it's a dangerous, cavalier idea. But from an endlessly complicated and interesting thinker.
Wow, I didn't realize he was so old. He was always on the tips of people's tongues; he never seemed old or dated, nor did he fade away, even at 90. He was at the peak of his intellectual influence and trajectory, which is uncommon for someone so old; most careers peak at 40-60. Not only that, his reputation was fully unblemished and unmarred, which is also increasingly uncommon.
Lots of praise for this man. He was obviously very influential. His theory enabled the broad paternalistic state we are all suffering from today. Too many policy makers read his book and thought they could tax every person into "good" behavior. That's way beyond the scope of how governments should be involved.
Now that "soft paternalism" has been so successful, the same policymakers are pivoting into hard paternalism.
I learned a lot from Thinking Fast and Slow, but it's also a cynical book, in the same vein as Skinner's behaviorist view of people.
from the article:
"Then the students were asked which was more likely: that Linda is a bank teller or that Linda is a bank teller and is active in the feminist movement. The vast majority went with bank teller and active feminist, which has to be the less likely choice because the probability of two conditions will always be less than the probability of either one."
Isn't that a bad question to ask? It suggests there are only two possible outcomes. Wouldn't a better question include a third option of "not a bank teller and may or may not be an active feminist"?
Maybe your (anyone's) system 1 might assume only two possible outcomes, but that's not in the question - it asks which is more likely, and one option assuredly is, up to a less-than-or-equal sign.
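A toy calculation (with made-up probabilities) shows why the conjunction can tie at best and never win:

    p_teller = 0.05                    # hypothetical P(Linda is a bank teller)
    p_feminist_given_teller = 0.95     # hypothetical P(feminist | bank teller)
    p_both = p_teller * p_feminist_given_teller
    # p_both = 0.0475 <= 0.05: multiplying by a probability can only shrink it
    assert p_both <= p_teller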
I hope the actual study wasn't as "tricky" as the article's reference to the Linda example makes it sound
I'd imagine enough of the biases Kahneman identified have held up without relying on artificial questions like this one, which are designed to trick respondents and whose real-world applicability seems questionable at best...
further, in the supplied example, I'd argue that the prior probability of Linda being a feminist (based on her being an activist/etc.) is probably higher than her not being a feminist so, in a sense the respondents got it right (i.e., in that population, I'd argue there are more women who are bank tellers and feminists than just bank tellers)...
"Which is more likely" isn't a question that commonly has three options. It's a question that commonly has only two options, one of which is more likely than the other.
I read that as assuming that the "bank teller only" answer implied that the person was not a feminist, since most people would assume that being a bank teller does not make you a feminist.
In any case, it's bad phrasing for a survey... these questions would be better off being as unambiguous as possible.
Because when you're talking to a real human that wants to hear your thoughts about Linda, it's much, much more likely that this is what they're asking and they didn't word it accurately.
That was the whole point of this particular episode… to highlight that ‘accurately’ involves hidden assumptions that you may or may not share with the listener. And then, to try to identify whether there is a systematic commonality in those hidden assumptions.
I don't think you're correct. I usually see it presented as evidence that people are modeling probability wrong.
> The conjunction fallacy (also known as the Linda problem) is an inference that a conjoint set of two or more specific conclusions is likelier than any single member of that same set, in violation of the laws of probability.
‘More likely’ to be true doesn’t imply they are the only two options. But pulling out these kinds of assumptions (that you heard that limit when it wasn’t explicitly there) is exactly the kinds of things they were trying to discover, partly to make sure to ask better questions.
Kahneman's "add one" / "add three" exercise is his recommended way to activate System 2.
If anyone has figured out how to do it using one's phone, please share. There used to be an app on the Google Play store, but it doesn't work on more recent versions of Android. I created a spreadsheet-based random 4-digit number prompter, which isn't bad, but I'd like better ideas if anyone has any.
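For what it's worth, here's a minimal command-line sketch of such a prompter in Python (the pacing and the wrap-around rule for 9 are my own assumptions, not Kahneman's exact protocol):

    import random
    import time

    def add1(digits):
        # Add 1 to each digit, wrapping 9 around to 0 (an assumed convention).
        return "".join(str((int(d) + 1) % 10) for d in digits)

    for _ in range(10):                      # ten prompts per session
        prompt = "".join(random.choices("0123456789", k=4))
        print(prompt)
        time.sleep(4)                        # time to recite the incremented digits aloud
        print("  ->", add1(prompt))          # check yourself
        time.sleep(2)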
Among all the praise for Thinking, Fast and Slow it seems that many people have missed out on Noise. Also a fantastic book that shaped how I approach situations perhaps more than the former.
Kahneman was one of those people where I was just waiting to have a problem tough enough that I'd have a good reason to email him with a question, whether or not I'd get a response. I guess no longer.
I wrote Daniel an email once from a personal (non-academic) account, as I felt like there was something meaningful to discuss, though I didn't expect a response. He did indeed reply, and seemed remarkably down to earth and genuine considering he was something of a rockstar in pop-sci/psych. RIP.
I first became aware of a type of "slow thinking" through the book "Hare Brain, Tortoise Mind: How Intelligence Increases When You Think Less", by Guy Claxton, published 1999. Does anyone have a comparison of the types of slow thinking?
This is one reason why I feel like economics is still in its infancy and I find a lot of policy recommendations are built off ideology rather than actual reality. That doesn't stop people from being very confident though about their statements.
I understand that sometimes you need assumptions to make the math work, but the fact that it took so long for behavioral economics and bounded rationality to be recognized is crazy. Just because the math is convenient doesn't mean people work that way at all.
I say this as someone who has taken a lot of econ classes, so I understand its value, but it is still very much a set of principles and ways of thinking about problems involving people, rather than something as exact as it's made out to be.
I got slightly off topic here, but seeing as how Daniel got the Nobel Prize in 2002 (pretty recent) and the work occurred in the 70s, it made me think again about how young the field is.
It's hard to think of any public intellectual whose career was at the peak of its trajectory as his was, and at such an advanced age. Usually someone has a few ideas and they fade with time, but not him. The neoclassical assumptions had crashed after 2008, and this guy came along with his books and upended the whole economics establishment.
The Chicago School was criticised not only by behavioural economists and psychologists, though, but also by other (fairly orthodox, eg New Keynesian) economists [0]. This is not to distract from Kahneman's monumental contributions (many together with Tversky, as narrated in the book The Undoing Project by Michael Lewis).
For me the best part about reading Thinking, Fast and Slow is that I'm more distrustful of my own thought processes. That little bit of questioning of my own conclusions has helped me see gaps in my reasoning.
My dream is to one day have the caliber of insights this man had, along with his ability to express them so clearly and persuasively.
as an owner of my softbound copy of TF&S I was surprised there wasn't one here already. (I saw the WSJ article drop the minute it posted, as a subscriber.) RIP, Prof. K.
I have a standing offer to buy any individual any of Kahneman's books, if they commit to me they will read it. Amazingly only a few people have taken me up on this.
Kahneman is unequivocally the person I would call my hero, today I am sad to see him leave us. I hope to honor his memory by... I guess, recognizing just how wrong I am, on a regular basis.
His life was filled with tragedy: his father died in Vichy France, and his wife preceded him in death. Imagine living so long that everyone you know and love dies before you.
Only insofar as it's "kinda weird" to run a service where you notice interesting new things that happen in the world, and then write professional, mostly-reliable words about them, to share with your general audience.
In past centuries, this was a fairly common business model, but I understand the concept sounds pretty jarring to modern ears.
Professor Kahneman, who was long associated with Princeton University and lived in Manhattan, employed his training as a psychologist to advance what came to be called behavioral economics. The work, done largely in the 1970s, led to a rethinking of issues as far-flung as medical malpractice, international political negotiations and the evaluation of baseball talent, all of which he analyzed, mostly in collaboration with Amos Tversky, a Stanford cognitive psychologist who did groundbreaking work on human judgment and decision-making.
I used to play D&D after school with Tal Tversky and Jon Barwise at the Tverskys' Eichler home on the Stanford campus. This was in the early 1980s. I had no idea how famous either of my friends' fathers were (or would become). It's sad how young both of their parents were when they died.
Yes, two new sciences. To those criticizing his work: I’m not saying the magnitude of the impact is the same, but have all of Darwin’s ideas about evolution stood the test of time? Clearly not.
Each of us sees the world through our own sets of biases. None of us is immune. Any of us that can see clearly enough to move humanity’s body of knowledge forward even a smidgen is a rarity and a treasure. Even among that group this man (and his collaborators) did more than most. I don’t believe _I_ will be counted among those. I can’t fault Kahneman for making some mistakes. Finding and fixing those is part of the process, and requires others with a different set of perspectives and biases. At a future date, weighed against his contributions, I believe they will appear relatively small.
I'm not an economist nor that interested in them, but I did read Nassim Taleb's books* and Kahneman stuck out as one of very few economists Taleb doesn't totally trash.
* I had read Eugene Koonin's "The Logic of Chance" and was then recommended Taleb's books for a more thorough perspective on probability, to apply to Koonin's work.
I know I am nitpicking but Kahneman was not an economist. He was a cognitive psychologist. In fact he is the only psychologist ever to win a Nobel prize in economics.
That's not true though. I can name at least two other people of whom he has a high opinion (i.e., he's praised them in his books): Karl Popper and George Soros. There are others as well, but most people here will recognize these two (I assume), which is why I mentioned them; admittedly they were also the first to pop into my head lol
It's true that he's an asshole on X and in his books, while in real life he seems very nice and non-confrontational. He talked about this in an interview he did with the Guardian, I think a few years ago (maybe a decade ago), and if my memory serves me right, he said something to the tune of it being easier to lose yourself and become more rude than usual when you're not speaking directly to someone's face.
I thought he was so witty and refreshing when I read Antifragile (which I will always regard as a masterpiece), but I feel like he's run out of ideas now and is just complaining about people he doesn't like.