
I'm not one to give an exaggerated eulogy nor rhapsodize about all those "Books with a white cover and a weird picture" -- but I will say I read Thinking, Fast and Slow for the first time last year, after decades of resisting, and felt it covered some generally profound ideas that are still as relevant as ever and not widely understood.

(Though at some point, maybe in the second half, the book drags on and you can skip most of those chapters. If you don't have time for that, I'm sure ChatGPT can give you a taste of the main premises and you can probe deeper from there.)




It’s worth noting that many of the results in Thinking, Fast and Slow didn’t hold up to replication.

It’s still very much worth reading in its own right, but now implicitly comes bundled with a game I like to call “calibrate yourself on the replication crisis”. Playing is simple: every time the book mentions a surprising result, try to guess whether it replicated. Then search online to see if you got it right.


The density of non-replicable results varies by chapter.

You can ignore anything said in chapter 4 about priming for example.

See https://replicationindex.com/2020/12/30/a-meta-scientific-pe... for more.


This is exactly the kind of task that I want to deploy a long context window model on: "rewrite Thinking Fast and Slow taking into account the current state of research. Oh, and do it in the voice, style and structure of Tim Urban complete with crappy stick figure drawings."


Then we just need the LLM that will rewrite your book taking into account the current state of LLM hallucination behaviour.


Not me. If I'm going to take the time to read something, I want it to have been written, reviewed, and edited by a human. There is far too much high-fidelity information that I'm already missing out on to spend time taking in low-fidelity stuff.


Most human authors are frankly far too stupid to be worth reading, even if they do put care into their work.

This, IMO, is the actual biggest problem with LLMs training on whatever the biggest available text corpus is: they don't account for the fact that not all text is equally worthy of next-token-predicting. This problem is completely solvable, almost trivially so, but I haven't seen anyone publicly describe a (scaled, in production) solution yet.


> This problem is completely solvable, almost trivially so, but I haven't seen anyone publicly describe a (scaled, in production) solution yet.

Can you explain your solution?


I imagine it looks something like "Censor all writing that contradicts my worldview"


It hardly matters what sources you are using if you filter them through something that has less understanding than a two-year-old, if any, no matter how eloquently it can express itself.


Then don't copy and paste your copy of Thinking, Fast and Slow into your AI along with my prompt?


(My comment was less about my own behavior than an attempt to encourage others to evaluate my thinking, in hopes that they may apply it to their own in order to benefit our collective understanding.)


Same! Just earlier today I was wanting to do this with "The Moral Animal" and "Guns, Germs, and Steel."

It's probably the AI thing I'm most excited about, and I suspect we're not far from that, although I'm betting the copyright battles are the primary obstacle to such a future at this point.


The thing with Guns, Germs, and Steel is that it makes it essentially all about geographic determinism. There's another book (Why the West Rules--For Now, written before China had really fully emerged on the stage) which argues that, yes, geography played an important role in which cores emerged earliest. BUT if you look at the sweep of history, the eastern core was arguably more advanced than the western core at various times. So a head start can't be the only answer.


The book specifically considers Eurasia to be one geographical region and it does acknowledge the technological developments in China. The fact that Europe became the winner in this race, according to GGS, is a sign that while geography is important, it does not determine the course of history. It is not all about geographic determinism.


It is a snapshot in time, and so wrong if viewed in a longer context.

People from Europe came to have the Industrial Revolution at just the right moment.

Some small changes in history and it would have happened in India.

It is making a theory to fit the facts.

I do not think the author is a "white supremacist," but the book reads like that: taking all the accidents of history and making it seem like destiny that Europeans rule the world (they do not, they never did, and they are fading from world domination fast).


I enjoyed both GGS and WTWRFN, but in a mode where I basically ignored the thesis, reading instead for the factual information so clearly presented. Like the coverage of the Polynesian diaspora in GGS that has really stuck with me.

Thinking Fast & Slow was a fun read, but I did not retain much more than the basic System I/II concept which I find is a useful device.


I thought the OP was joking!


It's not even clear that the dual process system 1/system 2 metaphor is accurate at all, so it may not be possible to redeem a book whose thesis is that the dual process model is correct.

It's not just that individual studies have failed to replicate. The whole field was in question at least a decade before the book was written, and since then many of the foundational results have failed to replicate. The book was in a sense the last hurrah of a theory that was on its way out, and the replication crisis administered the coup de grace IMO.


>This is exactly the kind of task that I want to deploy a long context window model on: "rewrite Thinking Fast and Slow taking into account the...

I want something similar but for children's literature. From Ralph and the Motorcycle to Peter Pan, a lot of stuff doesn't hold up.

The books provide plenty of utility. But many things don't hold up to modern thinking. LLMs provide the opportunity to embrace classic content while throwing off the need to improvise as one parses outmoded ideas.


It will not be anything like classic content anymore.

You cannot redact a piece of art to remove its "old ideas". It is like redrawing classic paintings but masking nipples and removing blood.

And books which could be redacted this way without falling apart — well, don't read such books at all and don't feed them to the children.

Literature for children must not be dumbed down; it should be written exactly as for adults, only better.


It isn't redaction but reasonable and artful substitution. It isn't about dumbing down, but removing dumb ideas.


Maybe use ChatGPT to make this make sense.


I would actually like to have books that had "Thinking Fast and Slow" as a prerequisite. Many data visualization books could be summed up as: a bar chart is easily consumed by System 1, while visual noise creates mental strain on System 2.


"please finish game of thrones treating the impending zombie invasion as an allegory for global warming"

Also please omit "who has a better story than bran"


Didn't George say it is such an allegory?


Awesome prompt!


Is there a 'Thinking Fast and Slow: The Reproducible Bits' recut? I know with films there are fan-made edits.


We need O'Reilly: The Good Parts for books...


That isn't what that blog post is saying.

It's saying that the author's invented metric may indicate that the studies within each chapter may not replicate. No actual replication studies were done to produce the table in that post.


My comment about priming is due to articles like https://replicationindex.com/2017/02/02/reconstruction-of-a-.... As https://replicationindex.com/2017/02/02/reconstruction-of-a-... shows, Daniel Kahneman came to agree that priming research was bunk.

I included that blog post as a guide to what other results may be suspect.


What's wild to me is that anyone could read chapter 4 and not look up the original papers in disbelief.

Long before the controversy was public I was reading that book and, despite claims that the reader must believe the findings, it sounded like nonsense to me. So I looked up the original paper to see what the experimental setup was, and it was unquestionably a ridiculous conclusion to draw from a deeply flawed experiment.

I still never understood how that chapter got through without anyone else having the same reaction I did.


I had exactly this reaction to Malcolm Gladwell. It is completely obvious that Gladwell, across multiple books, has never once read one of his references and consistently misrepresents what they say.


I have a slightly different take on him, which comes to the same ultimate end on how I view his work.

As he's shifted from being primarily a journalist to primarily a storyteller, he's chosen to sacrifice additional information and accuracy in favor of telling a consistent and compelling narrative that supports what he thinks is the important thing to take away, not necessarily what you would take away were you to review all the same information yourself.

Under that understanding, I find him fun to listen to. The things he "reports" on/illuminates are interesting, but at this point I don't assume he's giving them an even handed representation, so his conclusions are not necessarily my own, and at best it's a set of things to look into and research myself if I find my interest piqued after a fun or interesting story is told.


But once you fact-check him a few times, you won't be able to trust that his sources support his points anymore, which then raises the question: is there ANY evidence that supports what Gladwell is trying to say?

If the best evidence that Gladwell could find to support his points doesn't actually support them, then it really makes me question the utility of trying to find evidence for his points at all.

It may be that the reason no subject matter expert makes the points that Gladwell does is that Gladwell's points are either wrong or not even wrong.


The problem is that Gladwell comes off as trying to be scientific, when really he's just a persuasive storyteller. If you view Gladwell as a purveyor of facts or a science communicator, then he does a horrible job and yes, it's hard to trust whatever he presents because he's at best not very careful about his facts and conclusions. In that case, why would you ever pay attention to him?

On the other hand if you see Gladwell for what I think he is, which is an entertaining storyteller using science as a backdrop to present his view of the world, and willing to twist it to suit his view, then he's really no different than many other writers. In that case you can enjoy him or not, at your leisure, just as you would any other editorial or op-ed piece. Entertainment is entertainment, and his content is entertaining even if I don't expect it to be true (to be clear, I'm approaching this as someone that mostly consumes his podcasts, which some chapters of his books have been directly converted to. I wouldn't want to spend the time reading this content that's not true, but I'm happy to listen to it while on a drive).

Whether he's causing more harm than good overall to the public is another question, but honestly that's a much bigger discussion with a lot of much bigger problems than him, so I'm not sure it's worth getting worked up over.


The problem with "take it or leave it, it's just a story" is that stories are not neutral, or cost-free. Memes compete for attention/proliferation/survival, and false-but-appealing memes outcompete true-but-boring memes. It's a mini DDOS attack on our collective bandwidth to be churning out durable falsehoods disguised as scientific insight.


Thank you for this comment - you have succinctly captured something that I have been feeling but unable to express in words.

Stories/memes/narratives are the easiest and most potent form of mental ingestion, so much so that I think humans cannot ingest facts or ideas at all, only stories. And this puts a collective responsibility on all of us to be very careful about the stories we create.


Not just that, but real policy comes out of these books. Gladwell is highly influential among leaders of our bureaucracies. So when it comes time to look at airline safety policies, Gladwell's nonsense about Koreans being too hierarchical to fly safely can worm its way in there.


In those times, that was exactly the kind of thing that people wanted to believe


Haha, yeah, I am reading the book these days, and I clearly remember thinking that those effects seemed really exaggerated.


Isn't that because the replications only looked at a selected subset of all the possible literature? You can be almost sure that a result cannot be replicated if an article's conclusion hinges on a wide interpretation of the experimental result, or the stimuli haven't been sampled properly (and who knows the distribution of stimuli?), or the subjects are first-year psych students, and the proof is a rejection of the null hypothesis. The worst offenders are those that conclude their theory is true because the theory they argue against is rejected.


A fun question especially considering the topic of the thread: are propositions that lack proof necessarily false?


No, but propositions with strong counter-evidence generally are, which is the main topic here. "Not replicable" generally means "attempted to replicate, but got results inconsistent with the original conclusion."


That is not my understanding of what “not replicable” means. My understanding is “attempted to replicate, but didn’t get any significant results supporting the original conclusion”. There’s nothing that says that the new results are inconsistent with the original findings, only that they couldn’t find any support for them in a similar study.

And that could be for a number of reasons. Of course, sometimes the results are just wrong, due to statistical flukes, or too-creative data cleaning and analysis. Often the results might just be much more limited than what the original study claims: maybe the results of a psychological study are valid for MIT students at the beginning of the semester, before lunch, but not for Yale students in the early afternoon. In this case the only mistake would have been to assume the results were universal.


This is much more correct.

It is amazing how many smart people have bad intuition about science, misunderstand the null hypothesis, etc. So much for the viability of a scientifically thinking populace, as far as I can tell: it seems impossible to pull off.


A very good point (I'm not sure if it's relevant to the book in question, as I haven't read it, or just to the conversation so far). It seems like many people will take a strong claim they are dubious about, and on finding the evidence is sparse, inconclusive, or missing, swing to assuming that the statement is false, instead of a more neutral position of "I have no opinion, or some reason to think it is unlikely, while others think it is likely even if it is poorly supported or unsupported."

This tendency seems to be capitalized on fairly heavily in political media by finding some poorly supported assertion of the other side to criticize, which causes people to assume the opposite is true.


Not necessarily false.

But such a small fraction of possible propositions are true that it is unlikely to be worthwhile to waste much time on propositions with no evidence.


Does this particular proposition have no evidence (none is in existence)?


Of course not, but the more important and difficult questions address how we should reason about, evaluate, and present ideas that lack proof.


> ...are propositions that lack proof necessarily false?

I'll have you know you just nearly nerd sniped a mathematician ;-)


Extraordinary claims require extraordinary evidence. Or, in less dramatic terms, if you cannot reject the null, you should operate on the assumption that the null holds.


If evidence exists (failed studies) of lack of causality, is it a proof that no causality exists?

I suspect I may not be the first person to entertain this question, perhaps there is some literature on the matter.


For what it's worth, Kahneman answered a post that scrutinized the effect of priming: https://replicationindex.com/2017/02/02/reconstruction-of-a-...


Thanks for sharing this -- I read the book maybe a decade ago and largely discounted it as non-replicable pop-sci; this changed my opinion of Kahneman's perspective and rigor (for the better!)


It looks like it's a bit more nuanced than that. What I saw from the link was some debate about what holds and what doesn't for various forms of "priming"


Yeah, I wouldn't read too much into any single study. But what I would defend vigorously is the System 1 / System 2 distinction, as something so clear and fundamental that you see it constantly once you understand it.


It's been called "emotion / intuition" and "logic" for centuries or millennia before the goofy System name was invented.


Ironically people like System 1/2 more than intuition/logic because the terms sound more like they are coined by System 2.


Maybe some people do. I have to keep checking which one is system 1 and which is system 2 every time I hear the terms, because they're not self-evident. Intuition/logic is.


At least, they sound that way to your System 1!


That's not him though.

Like, it was in all my cog psych textbooks more than twenty years ago, with cites back in the 80s (which weren't him).

This is my favourite paper of theirs: http://stats.org.uk/statistical-inference/TverskyKahneman197...

I got into a bunch of trouble with some reviewers of my thesis for referencing this repeatedly.


It's just such a bad name though.


It’s also very common in psychology theories, I haven’t read “Thinking, fast and slow” but I imagine there’s more than Kahneman’s own papers cited: https://en.wikipedia.org/wiki/Dual_process_theory


Wow, it looks like "dual process" theory is basically the same thing.

I don't know if there's a better text on dual-process theory out there (perhaps by the original authors), but regardless of who originated it, I think it's something worth learning about for everyone (and if you don't have a better source then Thinking Fast and Slow is a very good one).


... except the distinction was being made in various forms long before Kahneman, and does get questioned. When you start to poke at it, what's intuitive starts to seem less so.

https://journals.sagepub.com/doi/full/10.1177/17456916124606...

(that's a link to a defense of dual process theories, but it makes clear there's increasing criticism of them)


Does anyone have a link to a publicly accessible version of this paper?


I think this should work:

https://scottbarrykaufman.com/wp-content/uploads/2014/04/dua...

There's a review paper from a more critical perspective, in Psych Bulletin or Psych Review, that I was looking for, but I couldn't find it at the moment.


In software we often call it fastpath and slowpath :)


His own work held up very well to replication. It's when he cites the work of other scholars (in particular, that of social psychologists) that it doesn't hold up well to replication.


> It’s worth noting that many of the results in Thinking, Fast and Slow didn’t hold up to replication.

Irony is, Kahneman had himself written a paper warning about generalizing from studies with small sample sizes:

"Suppose you have run an experiment on 20 subjects, and have obtained a significant re- sult which confirms your theory (z = 2.23, p < .05, two-tailed). You now have cause to run an additional group of 10 subjects. What do you think the probability is that the results will be significant, by a one-tailed test, separately for this group?"

"Apparently, most psychologists have an exaggerated belief in the likelihood of successfully replicating an obtained finding. The sources of such beliefs, and their consequences for the conduct of scientific inquiry, are what this paper is about."

Then 40 years later, he fell into the same trap. He became one of the "most psychologists".

http://stats.org.uk/statistical-inference/TverskyKahneman197...
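(For anyone curious, a minimal back-of-the-envelope sketch of the answer to the quoted question: take the observed effect at face value and compute the power of the follow-up group of 10 under a simple z-test. This is my own calculation, not something quoted from the paper.)

    from scipy.stats import norm

    z_orig, n_orig, n_new = 2.23, 20, 10
    effect = z_orig / n_orig ** 0.5        # observed per-subject effect, taken at face value
    expected_z = effect * n_new ** 0.5     # expected z in the new group of 10
    z_crit = norm.ppf(0.95)                # one-tailed .05 threshold
    power = 1 - norm.cdf(z_crit - expected_z)
    print(round(power, 2))                 # roughly 0.47, far lower than intuition tends to suggest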


This game is doubly valuable when the surprising result confirms one of your existing beliefs. I'm pretty good about doing this for surprising results that contradict a belief I hold, but I have to be extra disciplined about doing it when it confirms one of my beliefs.


> calibrate yourself on the replication crisis

I imagine that in 30 years, it will become clear that individual humans display enormous diversity, their diversity increasing as societal norms relax, and their behavior changing as the culture around them change. As such, replication is hopeless and trying to turn "psychology" into a science was a futile endeavor.

That is not to say that psychology cannot be helpful, just that we cannot infer rational conclusions or predictions from it the same way we can from hard sciences.

Self help books are enormously helpful, but they're definitely not science either.


you have no idea what "psychology" is, do you?


I have a similar point of view as acchow, so it's possible I don't understand psychology either. Could you enlighten us about what it actually is?


It is a positivistic science, i.e. it studies observable phenomena using the scientific method. The days of Freud and Jung, where you could just smoke cigars and fart out ideas about the collective unconscious and anal fixation, are long over. Experiments are conducted, confounding variables controlled, and hypotheses (including H0) are tested. Granted, it's not as easy as in physics, where you just drop a ball repeatedly and note down the results, but that doesn't in any way make it a "softer" science. To equate psychology with self-help books is akin to equating LLMs with Markov chains.


The general idea is very simple: tactical vs. strategic thinking are two different things, and it's good to be aware of that. I don't know that that needs to be proven or disproven.


The 19th-century definition of tactics as everything that happens within the range of cannons, and strategy as everything that happens outside of cannon range, fits well with thinking fast (tactics) and slow (strategy).


This is unfortunately the case for many books on human behavior. Sure, Dan Ariely comes to mind, but the field itself is very tricky.

I don't think we - people used to STEM - appreciate how difficult behavioral psychology is. In STEM, we are used to isolating experiments down, so there are as few variables as possible. And we are used to well-designed experiments being reproducible if everyone does what they are supposed to do right.

In the study of human behavior there are always countless uncontrollable variables, every human is a bit different, and it is very difficult to discover something that applies generally. Also, pretty much all of the research is done on Western populations of European descent.

This is why I take all behavioral claims with a large grain of salt, but I still have respect for the researchers doing their best in the field.


I don't agree. I think if you understand science in general, then you realize at an early age (e.g. 20) that social/behavioral science is at best a pseudoscience.


Attacking me is a poor way to phrase your fringe opinion.


I disagreed with your opinion, I didn't attack you. I don't know the first thing about you.


I wonder if it's better to have a lot of small hits or a few big hits and many misses in regard to replication. If the studies which have the greatest implications replicate, then maybe many misses is not that bad.


That's an interesting theoretical question.

Unfortunately the reality is that the more interesting and quotable the result is, the less likely it is to replicate. So replication problems most strongly hit things that seem like they should have the greatest implications.

Kind of a "worst of all worlds" scenario.


And critically, scientific publications are incentivized likewise to publish the most outlandish claims they can possibly get away with. That same incentive affects individual scientists, who disproportionately submit the most outlandish claims they can possibly get away with. The boring stuff -- true or not -- is not worth the hassle of putting into a publishable article.


And then the most outlandish of these are picked up by popular science writers. Who proceed to mangle it beyond recognition, and add a random spin. This then goes to the general public.

Some believe the resulting garbage. And wind up with weird ideas.

Others use that garbage to justify throwing out everything that scientists say. And then double down on random conspiracy theories, denialism, and pseudoscience.

I wish there was a solution to this. But everyone is actually following their incentives here. :-(


The scientists push it on the pop writers, to create a Personal Brand and an industrial complex around their pet theory.


> It’s worth noting that many of the results in Thinking, Fast and Slow didn’t hold up to replication.

Beyond this specific issue, are psychology experiments and issues time and culture sensitive? I think so [1]

[1] https://www.ssoar.info/ssoar/bitstream/handle/document/42104...


It's so weird that the stuff about priming was supposedly debunked, yet if you look around at what happened to society over the past few years, I've been blown away by how suggestible and controllable people are.


I imagine this is just as fun to play with unsurprising results.


I believe the same was the case for "growth mindset".


“Many” is hyperbole; “some” is more fair, and his results stood up to the replication crisis better than most of his contemporaries'.


> many of the results

My impression is that the priming chapter is bunk, but the rest has generally held up. Is that no longer true?


Experiments involving grad students don't correlate well with how (normal) people behave in real life.


“When I describe priming studies to audiences, the reaction is often disbelief . . . The idea you should focus on, however, is that disbelief is not an option. The results are not made up, nor are they statistical flukes. You have no choice but to accept that the major conclusions of these studies are true.”


psychology isn’t science. it’s a grave mistake to read/interpret it as such. does that mean it’s useless? of course not: some of the findings (and i use findings very loosely) help us adjust our prior probabilities. if we’re right in the end, we were lucky. otherwise we just weren’t.


I think psychology is very successful at categorizing and treating mental illnesses. The DSM is really a monument and holds up, for the most part, very well to scrutiny.

Where psychology is massively failing to replicate is in trying to characterise healthy individuals. Typically the work of Kahneman.

But that's what interests people and sells: pop psychology.


Genuinely curious, how would you scrutinize a categorization tool that includes both causes and effects in its key?

I'm only tangentially following the whole autism/Asperger's/ADD/ADHD development, and I'm growing more and more convinced that all these categories are mostly arbitrary constructs grown out of random history and academia politics. Happy to be proved wrong here, though.


I haven't read this, to be fair, but it seems to question the DSM itself. https://www.technologynetworks.com/neuroscience/news/psychia...


That's an unexpected position for me.

How do you define science? Could it be a science, according to you, or is there something fundamentally non-scientific about it?


it’s fundamentally unscientific at this point. much of our current science lies in the realm of natural law. so far we haven’t found any laws that govern human behavior. what we know, with considerable certainty, is that behavior can be positively influenced. but at the point of action, nothing we know of compels any specific/predictable behavior. until we have found rigid laws of reasons that apply to both the brute and the civilized, any ‘discoveries’ of psychology are reports of someone’s idiosyncrasies, imho.


There's nothing about the scientific method that requires the process to output tidy little laws to be deserving of being called science.

Some fields are quite lucky that the universe is so elegantly organized, but that isn't true for the overwhelming majority of fields with as many degrees of freedom as biological systems and anything more complex.

That doesn't mean we can't conduct experiments that reproduce.


Is it not scientific to say that property X is true of human behavior more often than it is not, with statistical significance?


> How do you define science?

Science is that which could be disproved.

It is a very small, and very important, part of human knowledge.


The history of science generally doesn't seem to be characterized by shifts in theory due to empirical disproofs. Usually, when theories are "disproved", we don't want to throw out the baby with the bathwater, but rather, we want to stick to the theory and try to patch it up. When Uranus didn't seem to be moving according to the predictions of Newtonian mechanics (a disproof!), physicists didn't throw out Newton, they posited the existence of another planet. And they turned out to be right, Neptune existed.

See Chalmers' What is This Thing Called Science? for an introduction to these kinds of topics, or Kuhn and Feyerabend's work for historical responses. (And the Duhem-Quine thesis for the "auxiliary hypothesis" response to falsifiability I hinted at with my example.)


I find that most nonfiction books follow a common structure:

* 1st third of the book: Lays out the basic ideas, gives several examples

* 2nd third of the book: More examples that repeat the themes from the 1st part

* 3rd third of the book: ??? I usually give up at this point

I sometimes wish that more books were like "The Mom Test" - just long enough to say what they need, even if that makes for a short book.


I always think of this from Aaron Swartz:

> But let’s say you can narrow it down to one good one, and you can find the time to read it. You plunk down an absurd $30 (of which, I’m told, less than $3 goes to the author) for a bulky hardcover and you quickly discover that the author doesn’t have all that much to say. But a book is a big thing, and they had to fill it all up, so the author padded it. There are several common techniques.

> One is to repeat your point over and over, each time slightly differently. This is surprisingly popular. Writing a book on how code is law, an idea so simple it can fit in the book’s title? Just give example after example after example.

> Another is to just fill the book with unnecessary detail. Arguing that the Bush administration is incompetent? Fill your book up with citation after citation. (Readers must love being hit over the head with evidence for a claim they’re already willing to believe.)

> I have nothing against completeness, accuracy, or preciseness, but if you really want a broad audience to hear what you have to say, you’ve got to be short. Put the details, for the ten people who care about them, on your website. Then take the three pages you have left, and put them on your website too.

Source: http://www.aaronsw.com/weblog/001229


Sounds like the man was arguing against spending a wonderful afternoon sitting in the sun reading anecdotes about something you're interested in.


Because most nonfiction books are one, relatively small set of ideas (profound or not, novel or not) that could be concisely written as a few blog posts or a single long form article, but in order to monetize and build the author’s brand, get exaggerated into a full book. It is really painful and something I hope GPT will help the rest of us cut through in the future (“summarize the points. Dig into that point. What evidence is given? Etc etc etc” as a conversation rather than wasting 30 hours reading noise for a book)


Well, the lives of most people on earth are just the same boring repetitions with few novel events. So I wonder what people would do with the ample amount of time saved thanks to ChatGPT. Perhaps write another JavaScript framework or launch new food delivery apps, besides raging on social media.


Most people don’t absorb concepts immediately with only a simple explanation.

Unless you are reading a topic you are already familiar with, reinforcement of an idea helps you to examine a concept from different angles and to solidify what is being discussed.

If everyone fully absorbed and understood everything they read, schooling could be completed years in advance.


Apart from CYOA, I think a hypertext "book" would be the most effective way to dynamically scale detail preferences without resorting to skimming or skipping.

Some people want more or less evidence based on their level of skepticism or critical thinking, while others want more evidence to reinforce the soundness of their inferred position on a topic, especially if it's a topic unfamiliar to them. Other people are under time constraints and just want the key points and a brief presentation.


That's exacerbating the original environmental problem: in addition to thick paper books filled with filler material just to promote the author's brand, you now want to waste electricity running an LLM that will give you the short version? That's... short-sighted.

This should be dealt with by pressuring the publishing industry not to inflate books and fill them with fluff. This could be done by not buying these kind of books, and publicly shaming publishers who engage in this behavior. It's easier in non fiction books since the amount of fluff in fiction books is a more subjective matter.


They do it because a short book looks like a pamphlet and nobody will buy it. Most Gladwell books could easily be 30 pages, but nobody will pay $14.99 for that.

You can't shame them into buying books they can't sell.

How much electricity does it take an LLM to summarize a book? I'm sure the carbon emissions involved are trivial, and if they aren't, I've always been of the belief that (like eating meat) people are going to do what they want to do regardless of the environmental cost, so it's better to focus your ire on reducing the environmental cost. The problem here isn't using an LLM to summarize a book, it's that we've got a power grid fueled mainly by fossil fuels. (That is a problem that will fix itself in no time anyway now that renewables are cheaper and the gap is widening.)


> They do it because a short book looks like a pamphlet and nobody will buy it. Most Gladwell books could easily be 30 pages, but nobody will pay $14.99 for that.

This applies equally to Sci-Fi and fantasy doorstopper novels. At least those have interesting filler—sometimes even better than the main story.


Except that a lot of people read these books for the entertainment value of the anecdotes, and a lot of people enjoy feeling self-important for having read long books.


Not just self-important, but the anecdotes are designed to be bite-sized and easily retellable, like the 2016 NYT Abraham Wald / beware-survivorship-bias piece on graphing bullet holes on WWII airplanes to see which locations were more damaging.

So you get a decade of "business nonfiction" bookstand fodder consisting of claims with high shock value but often not replicable, mixed with anecdotes, to signal the reader's self-declared learnedness.

Combine this with the long-tailed nature of the statistic "the average American only reads one book a year" and it's frightening. I think Scott Galloway said sales of his e-book are >10x more than his paper book.

And during Covid lots of people got exposed for having fake bookshelf backdrops.


There was one Covid Zoom podcast (about economics?) with a middle-aged man sitting in front of a large set of bookshelves, housing row-upon-row of gnarly old books. The only recently-used book was lying flat on top of the other books.

Can you read the title?

https://pasteboard.co/imwZw8fcSpLy.png


The latter is already happening, with more and more people finding ways to summarize books.


You can just leverage the “second brain” crowd — for every vaguely well-known non-fiction book someone has written up a summary for themselves and posted it on their blog.


Sure, but be sure to check they're vaguely objective, not just shills, compensated reviewers or affiliates.


Well there's a reason for that - consider it a form of spaced-repetition learning.

The author's goal is to convey an idea to the reader. He breaks it up into small overlapping chunks and gradually doles them out over the course of the book, sometimes backtracking and repeating an idea with a different example, all accompanied by a compelling narrative.

If he does his job well then the reader doesn't notice that spaced-repetition learning is happening because the supporting examples are entertaining enough to continue reading. In the worst case, the author gets the exact criticism that you are leveling.

Honestly, if you had Mathematics books written like Thinking, Fast and Slow or Freakonomics you'd have a lot more students passing calculus.[1]

So, here's a challenge for you - in your area of expertise (whatever that is), write down the chapters of a hypothetical book you would write to explain one or two foundational principles to an outsider (to that area). It's pretty hard to do. Then compare with best-selling non-fiction aimed at outsiders like Freakonomics, etc.

I did this (chapter overview thing) and realised pretty quickly that I had planned a really boring book.

[1] I read Thinking, Fast and Slow around 2011, and I read The Mom Test last year. Almost all of the sub-themes of the former are still in my memory. The only thing I remember of The Mom Test is that people will lie to you to protect your feelings.


> I sometimes wish that more books were like "The Mom Test" - just long enough to say what they need, even if that makes for a short book.

Most non-fiction could be well summarized as a lengthy blog post.


There's a quote I love from someone I can't remember:

> Most books should be blog posts. Most blog posts should be tweets. Most tweets shouldn't exist.


I'd go further: many non-fiction books could be losslessly compressed into a tweet.

(Looking at you, The Checklist Manifesto)


Reading a book, say 10 hours, is like a meditation on an idea: you get numerous examples of it and a variety ways of thinking about it.

Our brains learn best when they encounter something often across time (spaced repetition).

Reading a single tweet may summarize the book, but the chances of you recalling the idea in an appropriate situation are much lower than if you had spent hours on it.


I agree, and there are many books which are well worth the time it takes to read them. All I'm saying is that there are many other books which aren't.


HAHAHA. :@D Challenge accepted.

The right checklist organizes uniformity, success, and safety in almost every human endeavor. - @SomeGuyOnTheInterwebs


I'd go further:

"Use checklists." - @AtulGawande

There, now you don't need to read the book.


> 3rd third of the book: ??? I usually give up at this point

History of science books thankfully stave off that final third until at least 80%. However, their final chapter or two universally manages to be a letdown. It's either wild optimistic speculation, hype for a theory that's debunked 5 years after publication, or a focus that accidentally happened to predict the course of science post-publication. The story is told in a tonally jarring manner compared to the tight narrative in the rest of the book.

My #1 suspect for this disease is a desire to connect the content of the book to real life. Such attempts miss more often than they drive the point home, even if they're factually correct.


Counterexample: The Selfish Gene.

The last 20% shifts in tone and goes into speculating on a general framework for genetic selection outside of biological systems. Very speculative, interesting, and it birthed the term "meme".


I haven't read that one, but your description makes it sound to be the exception that proves the rule.


Mainly a business and self-help “airport book” problem. Sometimes pop-science.


Perhaps nonfiction works could be ordered neatly so that the thesis and supporting material aren't buried, but arranged in a tiered, detailed manner conducive to a "Choose Your Own Adventure"-like skipping of material. Short-form and long-form must find a way to coexist and be useful without critical compromise. It's not like everyone has or should need CliffsNotes or getAbstract.


Counterexample: Gilles Deleuze's "Empiricism and Subjectivity".


As a sample of 1, it seems to me that this is particularly an issue with American non-fiction.


Yes, most nonfiction books should be pamphlets instead of books.


It's all about that value-per-word ;)


I know Daniel Kahneman only through reading him. Like you, I found Thinking Fast & Slow incredibly useful, first reading it perhaps 10 years ago. Definitely 2014, and I can't believe that's 10 years ago.

I must admit this headline shocked me for the simple reason that... I straight up had no idea that he was so old.

Thank you, Daniel, for the way you've influenced my (and our) thinking in ways that are still impacting us today, both in work and in the rest of life. Rest in peace.


To me the general idea felt as trite and blindingly obvious as any book by Taleb or Gladwell, and about as self-congratulatory. The only thing that surprised me was that he picked the singularly unmemorable names "System 1" and "System 2", instead of something like "intuition" and "reason".


> it covered some generally profound ideas that are still as relevant as ever and not widely understood

I've tried to read this book over and over again to understand what everyone is talking about but never found the insights that useful in practice. Like, what have you been able to apply these insights to? What good is it to know that we have a slow mode of thinking and a fast one? Genuine question.


When to trust your instincts/intuition (e.g. when few facts are known, there are no critical central deciding factors, but it's important to make a decision and move forward) and when to stop trusting your instincts and reflect a little (e.g. when someone is trying to rush you into a buying decision).

When it's likely that you're biased, and how to try to work around that (highly related to the above). (E.g. don't make critical decisions when you're sleep deprived.)

How you can utilize other people lacking this ability (e.g. utilize it in sales processes).


> If you don't have time for that, I'm sure ChatGPT can give you a taste of the main premises and you can probe deeper from there.

From gpt4:

The Two Systems: Kahneman introduces the concept of two distinct systems that govern our thoughts. System 1 operates automatically and quickly, with little or no effort and no sense of voluntary control. System 2 allocates attention to the effortful mental activities that demand it, including complex computations. The operations of System 2 are often associated with the subjective experience of agency, choice, and concentration.

Heuristics and Biases: The book explains how the fast, intuitive thinking of System 1 leads to the use of heuristics—a kind of mental shortcut that facilitates quick judgments but can also lead to systematic biases and errors in thinking. Kahneman discusses several of these biases, such as the availability heuristic, where people judge the probability of events by how easily examples come to mind, and the anchoring effect, where people rely too heavily on the first piece of information they encounter.

Overconfidence: One of the themes of the book is the confidence people place in their own beliefs and judgments. Kahneman shows that people tend to overestimate their knowledge and their ability to predict outcomes, leading to a greater confidence in their judgments than is warranted. This overconfidence can contribute to risky decision-making and failure to consider alternative viewpoints.

Prospect Theory: Kahneman, along with Amos Tversky, developed Prospect Theory, which challenges the classical economic theory that humans are rational actors who always make decisions in their best interest. Prospect Theory suggests that people value gains and losses differently, leading to decisions that can seem illogical or irrational. It highlights the asymmetry between the perception of gains and losses, where losses are felt more acutely than gains are enjoyed.

Happiness and Well-being: The book also delves into the determinants of happiness and well-being, distinguishing between the experiencing self (which lives in the present) and the remembering self (which keeps score and makes decisions). Kahneman explores how our happiness is influenced more by how life events are remembered than by the actual experience. This leads to some counterintuitive findings, such as people being happier with experiences that end on a high note, regardless of the overall quality or duration of the experience.


It’s the only one of those books I still don’t regret telling people “it’s good” a decade later (with a couple of caveats).


I have a physical copy and only read about 2 chapters, after having it for years. I need to take it on a flight to finally get around to reading it. It seems like one of those books you have to own and read at least once.


The Undoing Project is a solid read about his life and work too.


An interesting Michael Lewis book.

(A lot less controversial than Going Infinite.)


I guess I'm one of the rare folks who started reading TFAS and left without any big revelations or takeaways. I got bored half way through and stopped. Shrug...


Had the exact same experience. Maybe someone can enlighten us with why this is -supposedly- such a revelatory book?


It certainly has interesting ideas in small doses, but knowing about your heuristics and biases doesn't mean you don't still fall victim to them.

The real issue I have with the book is that the audiobook is read in such a boring way by Patrick Egan. I would love to hear a different voice actor take a shot at it, as it is not the most exciting book on its own anyway. Read in such a boring way, it reminds me of the worst college classes I took.

I also don't think it is the type of book that was ever meant to gain the popularity it has. It isn't even that easy to find Judgment Under Uncertainty by clicking on Kahneman's name on Amazon. Kahneman and Tversky wrote that 30 years before Thinking Fast and Slow and it is a great book.

Because of its popularity I would say Thinking, Fast and Slow might be the most overrated book I can think of.

A big reason for that, I would say, is that it is the greatest marketing of a book by Random House that I can think of, given the subject and how dry the book is.


> Because of its popularity I would say Thinking, Fast and Slow might be the most overrated book I can think of.

The Art of War is the most overrated book, imo. "If the enemy is more powerful than you, don't attack them!"

So profound. -_-


This is like "Seinfeld isn't funny" in that once enough shows copy it, it looks cliche.


Assuming this is a genuine question and not just intellectual posturing, the obvious answer, if you've ever spoken to... most people... is that a lot of the (admittedly somewhat simple) concepts in the book are, for whatever reason, not part of the general public psyche.


Kahneman's book was praised a lot, but I found it more questionable than useful.

It has its good parts, like elaborating on System 1 and System 2, but my favorite concept was regression to the mean. It might be obvious in some cases, but the book made me realize that it applies nearly everywhere.

The bad parts include priming (e.g. the Florida effect), which as others mentioned could not be replicated. He sometimes praises himself for even trivial observations. But my biggest gripe is that he dismisses Bernoulli's hypothesis in favor of his loss aversion (I still think humans apply a mix of both), while also framing loss aversion as irrational. That is, humans should always only maximize the expected outcome (in terms of money). The reasoning is that during life we will encounter a continuous stream of decisions, and maximizing the expected value in each decision will (according to the law of large numbers) maximize the overall income.

It's not (always) irrational. Imagine you have a million dollars. Someone offers you a gamble on a fair coin flip: win and you gain 2 million dollars, lose and you pay one million dollars. With a million USD in your bank account you had a quite comfortable life, and it could get more comfortable with 3 million in your account. But if you lose you are ruined. According to Kahneman you should take that gamble. Also consider that before the invention of money, those decisions were typically whether to hunt that mammoth or something less aggressive.

The German version of "Who Wants to Be a Millionaire" has a particularity: your win jumps from € 125.000 to € 500.000 at the 14th question (a consequence of the conversion from Deutsche Mark to Euros). Assume you have no idea what the answer is. According to Kahneman you should always pick one at random. If you pick right you get € 500.000. If you pick wrong you will still win € 32.000, or € 500 if you took the 4th lifeline like most contestants do. This makes an expected win of 3/4 × € 500 + 1/4 × € 500.000 = € 125.375, compared to € 125.000 when you don't answer. Would you do it?
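A quick check of that arithmetic (a minimal sketch in Python; it just restates the numbers above):

    # Expected winnings if you guess at random on the 14th question.
    # Assumes the wrong-answer fallback of EUR 500 (the 4th-lifeline case described above).
    p_right = 0.25
    ev_guess = p_right * 500_000 + (1 - p_right) * 500
    print(ev_guess)   # 125375.0, slightly above the 125000 you keep by not answering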


Some questions along these lines are not so easy. For example, there are iterated versions that take bankruptcy (game over) into account by using the geometric mean instead of the arithmetic mean; but even that might not always be the optimal strategy [https://www.pnas.org/doi/10.1073/pnas.68.10.2493].
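A rough sketch of the arithmetic-vs-geometric distinction, applied to the parent's coin flip and treating it as a repeated all-in bet (my own illustration, not the linked paper's model):

    # Win: 1M becomes 3M (multiplier 3.0); lose: 1M becomes 0 (multiplier 0.0, ruin).
    multipliers = [3.0, 0.0]

    arithmetic_mean = sum(multipliers) / len(multipliers)      # 1.5: positive expected value per round
    geometric_mean = (multipliers[0] * multipliers[1]) ** 0.5  # 0.0: repeated play ends in ruin

    print(arithmetic_mean, geometric_mean)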


More meta is that, when people in aggregate are stressed and not all that wise or informed, they tend to look for convenience and expediency rather than effort and mastery. Unfortunately, this can also happen when people are apathetic or not stressed and slack off on reasonable skepticism or fail to dig into the details.

Leadership, pride, excellence, empathy, and fairness must not fall into the decay of jingoist buzzwords; they must remain values backed by intent and actions that remain unwavering.

The greatest danger is dishonesty: when words stop having ordinary meanings, when people stop talking to each other, or when there's a lack of agreement on the obvious intersection of a shared reality.


[flagged]


No, the less cynical take is it says "the second half of the book drags on and you can skip most of those chapters." Which has been generically true of much business nonfiction for decades now. How people choose to apply that knowledge predates AI: skipping it, reading any of the summary reviews available, listening to the audiobook while in gym/car/shopping, asking a friend, dipping into it selectively, etc. There are lots of low-tech alternatives.


[flagged]


Aye well, there's your problem, the clouds.

The sea temperature is already literally off the charts and we do not know why, but let's just add a couple more countries' worth of energy waste in datacenters producing nothing. What could possibly go wrong?



