> But they don't apply formal or "formalist" approaches, they invoke the names of formal methods but then extract from them just a "vibe".
I broadly agree with this criticism, but I also think it's kind of low-hanging. At least speaking for myself (a former member of those circles), I do indeed sit down and write quantitative models when I want to estimate things rigorously, and I can't be the only one who does.
> Indeed, as well as just ignoring that uncertainties about the state of the world or the model of interaction utterly dominate any "calculation" that you could hope to do.
This, on the other hand, I don't think is a valid criticism, nor is it correct taken in isolation.
You can absolutely make meaningful predictions about the world despite uncertainties. A good model can tell you that a hurricane might hit Tampa but won't hit New Orleans, even though weather is the textbook example of a medium-term chaotic system. A good model can tell you when a bridge needs to be inspected, even though there are numerous reasons for failure that you cannot account for. A good model can tell you whether a growth is likely to become cancerous, even though oncogenesis is stochastic.
Maybe a bit more precisely, even if logic cannot tell you what sets of beliefs are correct, it can tell you what sets of beliefs are inconsistent with one another. For example, if you think event X has probability 50%, and you think event Y has probability 20% conditional on X, it would be inconsistent for you to believe event Y has a probability of less than 10%.
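To make that concrete, here's a minimal sketch of the consistency check, using the numbers from the example above (the function name is just mine, for illustration):

```python
# By the law of total probability, P(Y) >= P(Y and X) = P(Y|X) * P(X),
# so any stated P(Y) below that bound is internally inconsistent.

def min_consistent_p_y(p_x: float, p_y_given_x: float) -> float:
    """Lower bound on P(Y) implied by stated P(X) and P(Y|X)."""
    return p_x * p_y_given_x

p_x, p_y_given_x = 0.50, 0.20
bound = min_consistent_p_y(p_x, p_y_given_x)
print(f"Believing P(Y) < {bound:.2f} is inconsistent with the other two beliefs.")
# -> Believing P(Y) < 0.10 is inconsistent with the other two beliefs.
```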
> The world at large does not spend all its time in lesswrongian ritual multiplication or whatever... but this is not because they're educated stupid
When I thought about founding my company last January, one of the first things I did was sit down and make a toy model to estimate whether the unit economics would be viable. It said they would be, so I started the company. It is now profitable with wide operating margins, just as that model predicted it would be, because I did the math and my competitors in a crowded space did not.
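For the curious, that kind of toy model can be a few lines of Python. This is purely an illustrative sketch; every number and variable name here is hypothetical, not my actual model:

```python
# Hedged sketch of a toy unit-economics check. All inputs hypothetical.

def monthly_margin(price, cogs, support_cost, churn_rate, cac):
    """Contribution per customer per month, net of amortized acquisition cost."""
    lifetime_months = 1.0 / churn_rate     # expected customer lifetime
    amortized_cac = cac / lifetime_months  # spread acquisition cost over lifetime
    return price - cogs - support_cost - amortized_cac

# Hypothetical inputs: $50/mo price, $12 COGS, $8 support,
# 4% monthly churn, $300 to acquire a customer.
m = monthly_margin(price=50, cogs=12, support_cost=8, churn_rate=0.04, cac=300)
print(f"Net margin per customer-month: ${m:.2f}")  # -> $18.00
```

If a sketch like this comes out deeply negative under pessimistic inputs, that's a strong signal before you ever incorporate.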
Yeah, it's possible to be overconfident, but let's not forget where we are: startups win because people do things in dumb inefficient ways all the time. Sometimes everyone is wrong and you are right, it's just that that usually happens in areas where you have singularly deep expertise, not where you were just a Really Smart Dude and thought super hard about philosophy.
What you describe (doing basic market analysis) is pretty much unrelated to 'rationality.'
'Rationality' hasn't really made any meaningful contributions to human knowledge or thinking. The things you describe are all products of scientists and statisticians, etc...
Bayesian statistics is not rationality. It is just Bayesian statistics... And it is mathematicians who should get the credit, not LessWrong!!
The rationalist movement, if it is anything, is a movement defined by the desire of its members to perceive the world accurately and without bias. In that sense, using a variety of different tools from different academic and intellectual disciplines (philosophy, economics, mathematics, etc) should be expected. I don't think any of the major rationalist figures (Yudkowsky, Julia Galef, Zvi Mowshowitz) would claim any credit for developing these ideas; they would simply say they've used and helped popularize a suite of tools for making good decisions.
Perhaps I overemphasized it, but a personal experience on that front was key to realizing that the lesswrong community was in aggregate a bunch of bullshit sophistic larpers.
In short, some real-world system had me asking a simply posed probability question. I eventually solved it. I learned two things as a result: one (which I kinda knew, but didn't 'know' before) is that the formal answer to even a very simple question can be extremely complicated (e.g. asking for the inverse of a one-line formula turned into half a page of extremely dense math), and two, that many prominent members of the lesswrong community were completely clueless about the practice of the tools they advocate, not even knowing the most basic search keywords or realizing that there was little hope of most of their fans ever applying these tools to all but the absolute simplest questions.
> You can absolutely make meaningful predictions about the world despite uncertainties. A good model can tell you that a hurricane might
Thanks for the example though-- reasoning about hurricanes is the result of decades of research by thousands of people. The inputs involve data from thousands of weather stations including floating buoys, multiple satellites, and aircraft that fly through the storms to get data. The calculations include numerous empirically derived constants that provide averages for unmeasurable quantities the models need as inputs, plus ad hoc corrections to fit model outputs to previously observed behavior.
And the results, while extremely useful, are vague and not particularly precise-- there are many questions they can't answer.
While it is a calculation, it is very much an example of empiricism being primary over reason.
And if someone is thinking that our success with hurricane modeling tells them anything about their ability to 'reason things out' in their own life, without decades of experience, data collection, satellite monitoring, and teams of PhDs, then they're just mistaken. It's just not comparable.
Reasoning things out, with or without the aid of data, can absolutely be of use. But that utility is bounded by the quality of our data, our understanding of the world, errors in our reasoning process, etc. And people do engage in that level of reasoning all the time. The reason it isn't more primary is precisely these significant and serious limitations.
I suspect that the effort required to calculate things out also comes with a big risk of overconfidence. Like, stick your thumb in the air, make some rough cash flow calculations, etc. That's a good call and probably captures the vast majority of predictive power for some new business. But if instead you make some complicated multi-agent computational model of the business, it might have only a little more predictive power but a lot more risk of following it off a cliff when experience is suggesting the predictions were wrong.
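To make that concrete, here's a back-of-envelope sketch (all numbers hypothetical): even with a one-line cash-flow formula, plausible error in a single input swings the answer more than any extra modeling sophistication could correct for.

```python
# Hedged sketch: input uncertainty dominates model sophistication.
# All numbers are hypothetical.

def annual_cash_flow(customers, revenue_per_customer, fixed_costs):
    return customers * revenue_per_customer - fixed_costs

base = annual_cash_flow(1_000, 500, 450_000)  # ->  50,000
# A +/-20% error in the customer estimate (very normal for a
# new business) swings the result from a loss to 3x the base case:
low  = annual_cash_flow(800, 500, 450_000)    # -> -50,000
high = annual_cash_flow(1_200, 500, 450_000)  # -> 150,000
print(base, low, high)
```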
> people do things in dumb inefficient ways all the time
Or, even more often, they're optimizing for a goal different than yours, one that might not even be legible to you!
> just as that model predicted it would be, because I did the math and my competitors in a crowded space did not.
or so you think! Often organizations fail to do "obvious" things because there are considerations that just aren't visible or relevant to outsiders, rather than any failure of reasoning.
For example, I've been part of an org that could have pivoted to a different product and made more money... but doing so would have meant laying off a bunch of people that everyone really liked working with. The extra money wasn't worth it. Whoever eventually scooped up that business might have thought they were smart for seeing it where we didn't, but if so they'd be wrong about why we didn't do it. We saw the opportunity and just had different objectives.
I wouldn't for a moment argue that collections of people don't do stupid things, they do-- but there is a lot less stupid than you might assume on first analysis.
> it's just that that usually happens in areas where you have singularly deep expertise, not where you were just a Really Smart Dude and thought super hard about philosophy
We agree completely there-- but it's really about the data and expertise. Sure, you have to do the thinking to connect the dots, and then have the courage and conviction (or hunger) to execute on it. You may need all three of data, expertise, and fancy calculations. But the third is sometimes optional, while the first two are almost never optional and usually can only be replaced by luck, not 'reasoning'.
This is an excellent explanation of the flaws inherent in the rationalist philosophy. I’m not deeply involved in the community but it seems like there’s very little appreciation of the limits of first-principles reasoning. To put it simply, there are ideas and phenomena that are strictly inaccessible to pure logical deduction. This also infects their thoughts about AI doom. There’s very little mention of how the AI will collect data or marshal physical resources. Instead it just reads physics textbooks and then infers an infallible plan to amass infinite power. It’s worth noting that the AGI god’s abilities seem to align pretty well with Yudkowsky’s conception of why he is a superior person.
> A good model can tell you that a hurricane might hit Tampa but won't hit New Orleans
On the other hand, we have no model that can predict that hurricane a year in advance and tell us which city it’ll hit.
Yet these people believe they can rationalise about far more unpredictable events far further in the future.
That is, I agree that they completely ignore the point at which uncertainties utterly dominate any calculation you might try to do and yet continue to calculate to a point of absurdity.