Hard to pigeonhole. Long, sometimes rambling, well written (in my view; opinions vary), generally thoughtful, fairly centrist, often left-of-centre (enough so to deeply annoy certain breeds of conservatives), but with enough libertarian or classical liberal ideas sprinkled through it to deeply annoy a certain type of leftist.
Some of his consistent themes are a gentle scepticism about what we think we know, a refusal to attribute malice to those who disagree with him, and a desire to be pragmatic about how we can achieve our shared goals.
...obviously this means there's a vocal faction on social media who believe he is the modern equivalent of a grand wizard of the KKK, and who have repeatedly said so in exactly those words.
If you're the kind of person who'd like Scott's writing, you'll probably like it a lot, and you'll reach that conclusion quite quickly. You'll also likely find it inexplicable that anyone might disagree. If you're the kind of person who does not like his writing, you'll probably hate it, and find it confusing that anyone else might not hate it. For reasons I don't remotely understand, he (and his writing) is oddly polarizing.
Your last paragraph reminds me of one of my favorite short stories from him, about "Scissor Statements". Very Black Mirror vibes, and I mean that as a sincere compliment.
Yes, I really like that one too. Fun and enjoyable to read, but once I got to the end, I found it quite thought-provoking.
It hadn't occurred to me before to think about its relevance to Scott himself; there's some interesting irony there. Particularly, I think, because I rather imagine that's the opposite of what Scott has ever intended with his writing.
> For the uninitiated, what sort of content is Slate Star Codex known for?
IIRC, it's a blog that's very influential in the "rationalist" community, which I think spun out of https://en.wikipedia.org/wiki/LessWrong. So very long-winded posts on miscellaneous topics.
It's because a rationality committed to a narrow definition of what rationality means typically leads to irrationality at some terminal point.
When people describe themselves as "rationalist" they typically mean "instrumental reason", a form of reasoning with its own biases and baked-in values (e.g. modern Western values: often supremely meritocratic, technocratic, utilitarian) that differ from the values powering the methods and forms of reasoning of other cultures and other historical periods. There's a tendency for a superiority complex to creep in, where one who takes undue pride in one's "reasoning" sees only one mode of reasoning as the one true reasoning (one true value system).
Ancient cultures had value systems entirely different from ours; this doesn't make our scientific reasoning "better", it makes our modes of reasoning incommensurable until you commit to some system of shared values as the one true system of values (e.g. self-preservation, but nothing says that's what humanity should value, and in fact one could argue that one of the important facets of being human is the capability to reject this value).
> Ancient cultures had value systems entirely different from ours; this doesn't make our scientific reasoning "better"
It's better at coming to the correct conclusions about objective aspects of the world (better – not perfect, mind you). This is useful if your value system prioritises knowledge, though knowledge is pretty useful for all sorts, so I think science is good unless your value system penalises knowledge-generation.
Probably because self-ascribed 'rationalism' can come across as a bit self-aggrandizing. Like calling oneself part of the 'patriot' party (which implies that members of other political groups aren't patriotic).
Not that I have a problem with SSC, I think it's a cool blog.
At this point it's pretty common for self-identifying lesswrong-style rationalists to use quote marks, or add a bunch of qualifiers. Weird nerdy people get a lot of hate, there are whole subreddits devoted to hating on "rationalists" for a variety of reasons. Adding quotes tends to calm those people down, and make them a bit less hateful.
“Democrat” is the singular noun form, “Democrats” is the plural noun form, “Democratic” is the adjective form, and “Democrat”-as-adjective is the “signal that the speaker is a hostile partisan” form. They are all useful distinctions.
> While some people get upset about it, "Democrat" is useful to differentiate the two verbally. Obviously capitalization isn't possible.
Not really. "Democrat Party" is literally just a shibboleth used by opposing partisans who are upset that "democratic" is an adjective with positive associations. Everything else is just a post-hoc rationalization.
You think that because a) you don't know the origin of the intentionally incorrect usage of "Democrat" as an adjective, and b) you're confused about the difference between an adjective and a noun.
The usage in that domain name is as a noun. "Joe is a Democrat" is fine. That's a noun. "Democrat politicians" is not fine. That's an adjective.
I would note the pattern is not unique to "Democrat/Democratic"; it is used to turn other descriptors into slurs. Cf. "Jewish" vs. "Jew" as adjectives.
Yes, and I'd certainly try to distinguish whether I meant the specific party or the general concept (either by putting it in quotes or using a capital letter).
Some members of the LessWrong online community call themselves "the rationalist community," but there are plenty of people who self-ascribe as rationalist who don't affiliate with that community. Here, the term means the LessWrong rationalists, not all rationalists everywhere.
I don't know about them, but I've found the community around it both intelligent and accepting. That's not what I've come to expect from the word rationalist.
I remember at the peak of the 2016 post-election meltdown the SSC subreddit hosted a sort of "ask Trump supporters anything" thread. Despite the absolute insanity of the time and only a minuscule fraction of the sub's readership supporting Trump, the conversation was interesting, calm and uniformly civil.
There were a few Social Justice types that used to comment there too; again, all very civil, although there was sometimes the sense that they were regarded as an interesting zoo exhibit.
The quotes might signal opposition, but they also might just signal fuzziness. E.g., this is a word that you're probably familiar with, used in a specific jargony way that you cannot deduce merely from the standard definition.
Would you agree that each of those words can be used in "a specific jargony way" and that putting quotation marks around either very likely signals opposition?
I'm not familiar with a distinction between a more common usage and a more jargony usage of those two words. Is there one? Actually, I don't think I really understand any usage of the word "humanist" very well.
With no context, humanism sounds like it's just something pertaining to humans. Also, at least to my perception, the word has a built-in positive valence. But, it has a much more specific meaning: https://en.m.wikipedia.org/wiki/Humanism
ah, but here "rationalist" isn't being used in that particular way. One of the reasons it is sometimes put in quotes, I think, is to distinguish it from the meaning that you've linked.
A rationalist in that sense is someone who holds the philosophical positions described in that article.
A rationalist-in-this-other-sense is someone who, uh, generally has beliefs in some other collection of philosophical positions, and is involved in a certain community/social circle.
It is an unfortunate overloading of a term.
Some have given a definition of rationalist (or rationalist-adjacent) as: Eliezer Yudkowsky is a rationalist, and anyone who spends a lot of time arguing with rationalists is a rationalist.
This is quite a different thing than the sense of the word described in the Wikipedia article.
Personally, I'm rather fond of the group, but there are still cases where I find myself using quote marks when describing it.
Humans aren't rational. Anyone who claims they are doesn't understand enough about the human condition to offer you any advice that you should care about.
I think that's part of the problem. The stated goal is to understand our biases and flaws, and to become "less wrong." But it's easy to fall into a trap where one claims to have gotten better at correcting such biases, therefore making oneself more rational than people who haven't, therefore making one's position superior - hence reinforcing one's biases.
A lot of the public declarations coming from "rationalist" communities remind me of public declarations of sin coming from certain religious groups. Though it presents itself as self-effacing, it ends up being affirming. You rarely see the thought extend to "therefore, outgroups that I've been deriding perhaps know better than my ingroup."
Particularly interesting when there are biases that have almost become dogma in certain "rationalist" circles, such as the preoccupation with godlike artificial superintelligence.
Lots of very in-depth book reviews. Scott is a psychiatrist, so lots of posts on the replication crisis in the soft sciences. And plenty of "rationalist" posts, which are just deep thoughts on culture and society. I'm still not entirely sure why he's controversial. He's definitely very, very smart. Unfortunately the rationalist community has a few people who will start arguing that IQ is at least partially genetic, therefore IQ is also tied to race somehow... I think? So watch out for those landmines.
Scott is very intentional about not getting into the landmine issues. IMO part of his service to the community is helping define those boundaries for the less... socially aware members of the community (ie, the things you Do Not Discuss).
There should never exist a rubric known as "things you do not discuss" in any society that takes open debate and free expression seriously. Discussion is not violence and it should always be defended as a freedom regardless of how boorish or controversial its subjects. The very idea of such a category is grotesque and cowardly, to start.
When I finally took the time to learn how the proof of the Halting Problem worked, I was fascinated by just how "edge-case" it actually is: it's less a general principle than a proof that a black-hat exploit can always exist that will break any given halting-prediction algorithm.
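To make that "exploit" framing concrete, here's a minimal Python sketch of the standard diagonal construction (my own illustration; halts() is the hypothetical predictor the proof assumes exists, not a real function):

    def make_adversary(halts):
        # halts(program) is assumed to return True iff program() halts.
        # No such predictor can exist; the point is that if one did,
        # this construction would defeat it.
        def adversary():
            if halts(adversary):
                while True:  # predicted to halt, so loop forever
                    pass
            # predicted to loop forever, so halt immediately
        return adversary

Whichever answer halts() gives about adversary, it's wrong - that's the entire proof, and it's exactly the black-hat-exploit shape described above.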
With the caveat that I'm a Chomskyan free-speech absolutist, and I generally agree with you: it's not hard to apply the same Halting Problem concept to free expression. Such an exploit can manifest many ways: QAnon, Red Guards, literal Nazis, Woke "Neo-Marxists", etc; but regardless, scale the intersection of extremist ideology and human social behavior until they approach infinity, and it's easy to see how they can (and historically have) resulted in a systemic collapse of free expression and free thought. (This was what Popper was trying to capture in his now-oft-quoted Paradox of Tolerance, which is now ironically over-applied as a lever of pre-emptive intolerance of challenges to orthodoxy!)
I do agree that it's both a categorical and strategic mistake to succumb to an epidemiological model of memetic extremism. Even to the extent that model applies, extremist ideas only spread under social preconditions of susceptibility (as described by Hoffer [0]), and I don't think pre-emptive idea suppression is either right, or wise, or helpful. Extremist ideological infection is more a symptom than a root cause.
And yet: we can still recognize that certain taboos might exist for a reason, such as the one against openly voicing "maybe we should just kill the people who disagree with us". James Lindsay (perhaps the second-most infamous opponent of postmodernist thought) has described postmodernism as a "universal solvent", capable of taking apart any idea. It's not that you never use such a cognitive tool; rather, one uses it cautiously and judiciously, when one has the wisdom to wield it properly. Similarly, we need safe spaces for dangerous thoughts, even of the Popperian or Halting Problem variety; yet it may be appropriate to hold social taboos against those ideas being used casually in polite society and the public discourse, lest they dissolve polite society and public discourse themselves.
That's all well and good, but I'd still rather not see a bunch of autistics fired and unpersoned for unwittingly questioning modern dogma. We live in an increasingly religious time; better to go into questioning the orthodoxy with your eyes open.
> IMO part of his service to the community is helping define those boundaries for the less... socially aware members of the community (ie, the things you Do Not Discuss).
I wish more people cited Frances Yates on Bruno - his heliocentrism was a side note even before he attacked Copernicus, but he had some fantastic mnemonic techniques.
The quality of the comments on Slate Star Codex was part of what really made it great. (I assume Yates came to mind because of the comment on that Bruno post. That and many of the other comments on that post did a great job criticizing the argument that curious, intelligent people such as scientists must be irrepressible contrarians.) I hope that continues on Substack.
… And now I realise why people don't like SlateStarCodex. (I still like it.)
I'm no historian, but the stuff he wrote about Catholicism doesn't actually seem right. Argued better than I can in [0].
And some of the comments are not just atrocious, but are not argued against[1]:
> I think there are commonly-known models in all four quadrants. For example:
> a. Widely accepted and good fit for reality:
> (Law of supply and demand)
• “Supply and demand” is a good first-order model for certain market dynamics, but it only explains… at a guess, ⅓ of the economics I personally interact with.
> b. Socially unacceptable and good fit for reality:
> (IQ tests as a good proxy for mental ability)
• IQ tests are a reasonable proxy for certain, specific axes of mental ability within a subpopulation; the general “IQ tests are a good proxy for mental ability” claim is blatantly absurd.
> c. Widely accepted and bad fit for reality:
> (Sexism as main cause of gender wage gap)
• “Sexism is not the main cause of the gender wage gap”… I'm less certain that this is wrong, but I have actually done quite a bit of research on it (including reading what Scott Alexander wrote on the topic!) and this is splitting hairs, to be charitable; I think labelling it “rhetoric” is more accurate. No, several of the systemic injustices aren't due to individual people going “aha! I know what I'm going to do today: not pay my female underlings!”, you're right! But everybody already knows this,[2] and that's not what they mean when they say it's due to sexism. Things like a culture of “you only get a raise if you push for it” (not sure how widespread this is) can contribute to this, and that is, when considered in combination with the rest of everything (e.g. men may be “forthright” and “assertive”, but women are “bossy”), a sexist aspect of the culture.
> d. Socially unacceptable and bad fit for reality
> (Vaccines cause autism)
(The rest of the comment is mostly okay, apart from those three examples above; I'm cherry-picking to make a point, but I think the point's valid.)
These are just trotted out as “obvious if you're one of us”, when they're probably not even correct. You don't go into a room where people say sensible things and think that your association with those people somehow makes what you say sensible, and you certainly don't sit in the audience of an entry-level lecture and assume that your fellow audience-members are all experts in the field, so why assume that you should take as blind truth things you read in the comments of a blog‽
Also, I'm not certain I agree with the message Scott's trying to convey. This uncertainty correlates with my uncertainty about the historical accuracy of his examples: when he has a point he wants to make and finds some evidence after the fact, it's generally obvious that he's doing so (probably because that skill doesn't get much use).
NYT was writing a piece about rationality, SSC, lesswrong, the East Bay, maybe AGI, etc.
Evidence suggests it wasn't a hit piece (at least initially) and was just about the rationality community's Bay Area influence, since a lot of people don't really know about lesswrong.
As part of it NYT said they had to reveal his real name and he asked them not to (details described in that blog post). This created controversy.
A lot of people know what this is and it bothers them?
I’m still lost, so I looked at that older blog post, and it’s also not explained there, and the linked subreddit is the “non-political one”
So, reading the room here, there is a political context, those terms above are political, and I should google something about “lesswrong politics”
I’ll maybe check out that particular rabbit hole, but can you enlighten me further? I still have no idea what you’re talking about.
They're explicitly not political. Lesswrong is a website/community, and rationality is about trying to think better by being aware of normal cognitive biases and correcting for them. It's also about trying to make better predictions and understand things better by applying Bayes' theorem when possible to account for new evidence: https://en.wikipedia.org/wiki/Bayes%27_theorem (and being willing to change your mind when the evidence changes).
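As a toy illustration of that kind of Bayesian update (my own sketch with made-up numbers, not anything from lesswrong):

    # Bayes' theorem: P(H|E) = P(E|H) * P(H) / P(E)
    def bayes_update(prior, p_e_given_h, p_e_given_not_h):
        p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
        return p_e_given_h * prior / p_e

    # Start 90% confident in a hypothesis, then observe evidence that is
    # twice as likely if the hypothesis is false (0.3 vs 0.6).
    print(bayes_update(0.9, 0.3, 0.6))  # ~0.818

The "changing your mind when the evidence changes" part is just feeding each new piece of evidence through an update like this instead of ignoring it.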
It's about trying to understand and accept what's true no matter what political tribe it could potentially align with. See: https://www.lesswrong.com/rationality
The reason the groups overlap a lot with AGI is that Eliezer Yudkowsky started lesswrong and founded MIRI (the Machine Intelligence Research Institute). He's also formalized a lot of the thinking around the goal-alignment problem and the existential risk of discovering how to create an AGI that can improve itself without first figuring out how to align it with human goals.
Great, yeah, that sounds like something I wish I'd known existed.
It's been very hard to find people able to separate their emotions from an accurate description of reality, even when that description sounds like it comes from a different political tribe; or, more so, people are very willing to assume you're part of a political tribe if some of your words don't match their own tribe's description of reality, even if what was said was the most accurate.
I found the community around 2012 and I remember wishing I had known it existed too.
In that list, the less wrong posts are probably what I'd read first since they're generally short (Scott Alexander's are usually long) and you'll get a feel for the writing.
As an aside about the emotions bit, it's not so much separating them as recognizing when they're aligned with the truth and when they're not: https://www.lesswrong.com/tag/emotions
Good... content. I don't really have a good summary except that he digs into random issues (and thought experiments) with an intellectual honesty and wit + clarity that I haven't found elsewhere.
I see he kind of addresses it in his initial post. That description seemed pretty broad and open-ended, though.