
For the uninitiated, what sort of content is Slate Star Codex known for?

I see he kind of addresses it in his initial post. That description seemed pretty broad and open-ended, though.




Hard to pigeonhole. Long, sometimes rambling, well written (in my view; opinions vary), generally thoughtful, fairly centrist, often left-of-centre (enough so to deeply annoy certain breeds of conservatives), but with enough libertarian or classical liberal ideas sprinkled through it to deeply annoy a certain type of leftist.

Some of his consistent themes are a gentle scepticism about what we think we know, a refusal to attribute malice to those who disagree with him, and a desire to be pragmatic about how we can achieve our shared goals.

...obviously this means there's a vocal faction on social media who believe he is the modern equivalent of a grand wizard of the KKK, and who have said so in exactly so many words repeatedly.

If you're the kind of person who'd like Scott's writing, you'll probably like it a lot, and you will reach this conclusion quite quickly. You'll also likely find it inexplicable anyone might disagree. If you're the kind of person who does not like his writing, you'll probably hate it, and probably find it confusing that anyone else might not hate it. For reasons I don't remotely understand, he (and his writing) is oddly polarizing.


Your last paragraph reminds me of one of my favorite short stories from him, about "Scissor Statements". Very Black Mirror vibes, and I mean that as a sincere compliment.

https://slatestarcodex.com/2018/10/30/sort-by-controversial/


Yes, I really like that one too. Fun and enjoyable to read, but once I got to the end, I found it quite thought provoking.

It hadn't occurred to me before to think about its relevance to Scott himself; there's some interesting irony there. Particularly, I think, because I rather imagine that's the opposite of what Scott has ever intended with his writing.


> For the uninitiated, what sort of content is Slate Star Codex known for?

IIRC, it's a blog that's very influential in the "rationalist" community, which I think spun out of https://en.wikipedia.org/wiki/LessWrong. So very long-winded posts on miscellaneous topics.


Why put the word rationalist in quotes?


https://plato.stanford.edu/entries/rationality-instrumental/

It's because rationality that's committed to a narrow definition of what rationality means typically leads to irrationality at some terminal point.

When people describe themselves as "rationalist", they typically mean "instrumental reason", a form of reasoning that has its own biases and baked-in values (e.g. modern Western values: often supremely meritocratic, technocratic, utilitarian values), which are different from the values powering the methods and forms of reasoning of other cultures and other historical periods. There's a tendency for a superiority complex to creep in, where someone who takes undue pride in their "reasoning" sees only one mode of reasoning as the one true reasoning (one true value system).

Ancient cultures had value systems that were entirely different from ours; this doesn't make our scientific reasoning "better", it makes our reasonings incommensurable until you commit to some system of shared values as the one true system of values (e.g. self-preservation, but there's nothing saying that's what humanity should value, and in fact one could argue that one of the important facets of being human is the capability to reject this value).


> Ancient cultures had value systems that were entirely different than ours, this doesn't make our scientific reasoning "better"

It's better at coming to the correct conclusions about objective aspects of the world (better – not perfect, mind you). This is useful if your value system prioritises knowledge, though knowledge is pretty useful for all sorts, so I think science is good unless your value system penalises knowledge-generation.


Probably because self-ascribed 'rationalism' can come across as a bit self-aggrandizing. Like calling oneself part of the 'patriot' party (which implies that members of other political groups aren't patriotic).

Not that I have a problem with SSC, I think it's a cool blog.


Do you have a problem with the “democrat” party implying that other parties aren’t democratic?


At this point it's pretty common for self-identifying lesswrong-style rationalists to use quote marks, or add a bunch of qualifiers. Weird nerdy people get a lot of hate, there are whole subreddits devoted to hating on "rationalists" for a variety of reasons. Adding quotes tends to calm those people down, and make them a bit less hateful.

Really it's not derisive, it's self defense.


> Do you have a problem with the “democrat” party

It's called the “Democratic Party”, and it is a proper noun, which distinguishes it visually from the common use of “democratic”.


While some people get upset about it, "Democrat" is useful to differentiate the two verbally. Obviously capitalization isn't possible.

Obama's team launched https://democrats.org/ in 2009 or so. I really think people shouldn't get upset over "Democrat".


“Democrat” is the singular noun form, “Democrats” is the plural noun form, “Democratic” is the adjective form, and “Democrat”-as-adjective is the “signal that the speaker is a hostile partisan" form. They are all useful distinctions.


> While some people get upset about it, "Democrat" is useful to differentiate the two verbally. Obviously capitalization isn't possible.

Not really. "Democrat Party" is literally just a shibboleth used by opposing partisans who are upset that "democratic" is an adjective with positive associations. Everything else is just a post-hoc rationalization.


You think that because a) you don't know the origin of the intentionally incorrect usage of "Democrat" as an adjective, and b) you're confused about the difference between an adjective and a noun.

The usage in that domain name is as a noun. "Joe is a Democrat" is fine. That's a noun. "Democrat politicians" is not fine. That's an adjective.


I would note the pattern is not unique to "Democrat/Democratic"; it is used to turn other descriptors into slurs. Cf. "Jewish" vs. "Jew" as adjectives.


Yes, and I'd certainly try to distinguish whether I meant the specific party or the general concept (either by putting it in quotes or using a capital letter).


Some members of the LessWrong online community call themselves "the rationalist community," but there are plenty of people who self-ascribe as rationalist who don't affiliate with that community. Here, the term means the LessWrong rationalists, not all rationalists everywhere.


I don't know about them, but I've found the community around it both intelligent and accepting. That's not what I've come to expect from the word rationalist.


Agreed. Much less of a Mensa syndrome going on.

I remember at the peak of the 2016 post-election meltdown the SSC subreddit hosted a sort of "ask Trump supporters anything" thread. Despite the absolute insanity of the time and only a minuscule fraction of the sub's readership supporting Trump, the conversation was interesting, calm and uniformly civil.

There were a few Social Justice types that used to comment there too; again, all very civil, although there was sometimes the sense that they were regarded as an interesting zoo exhibit.


Because the word has multiple meanings, some of them subtextual. The quotes don't need to be derisive.


Because it’s a label rather than a descriptor.


They call themselves "rationalists". "Rationalizers" might be more accurate.


Likely because ardy42 isn't a fan of them and the word would otherwise have positive connotations.

In general, the quotes in this context signal an ideological opposition.


The quotes might signal opposition, but they also might just signal fuzziness. E.g. This is a word that you're probably familiar with used in a specific jargony way that you cannot deduce merely from the standard definition.


Much like the words humanist or intellectual?

Would you agree that each of those words can be used in "a specific jargony way" and that putting quotation marks around either very likely signals opposition?


I'm not familiar with a distinction between a more common usage and a more jargony usage of those two words. Is there one? Actually, I don't think I really understand any usage of the word "humanist" very well.


With no context, humanism sounds like it's just something pertaining to humans. Also, at least to my perception, the word has a built-in positive valence. But, it has a much more specific meaning: https://en.m.wikipedia.org/wiki/Humanism

There's a similar dynamic with rationalism: https://en.m.wikipedia.org/wiki/Rationalism


Ah, but here "rationalist" isn't being used in that particular way. One of the reasons it is sometimes put in quotes, I think, is to distinguish it from the meaning that you've linked. A rationalist in that sense is someone who holds the philosophical positions described in that article. A rationalist-in-this-other-sense is someone who, uh, generally holds some other collection of philosophical positions, and is involved in a certain community/social circle. It is an unfortunate overloading of a term.

Some have given a definition of rationalist (or rationalist-adjacent) as: Eliezer Yudkowsky is a rationalist, and anyone who spends a lot of time arguing with rationalists is a rationalist.

This is quite a different thing from the sense of the word described in the Wikipedia article.

Personally, I'm rather fond of the group, but there are still cases where I find myself using quote marks when describing it.


Humans aren't rational. Anyone who claims they are, doesn't understand enough about the human condition to offer you any advice that you should care about.


The whole point of rationalism is to understand our biases and cognitive flaws, not to pretend they don't affect us.


I think that's part of the problem. The stated goal is to understand our biases and flaws, and to become "less wrong." But it's easy to fall into a trap where one claims to have gotten better at correcting such biases, therefore to be more rational than people who haven't, therefore to hold the superior position - hence reinforcing one's biases.

A lot of the public declarations coming from "rationalist" communities remind me of public declarations of sin coming from certain religious groups. Though it presents itself as self-effacing, it ends up being affirming. You rarely see the thought extend to "therefore, outgroups that I've been deriding perhaps know better than my ingroup."

Particularly interesting when there are biases that have almost become dogma in certain "rationalist" circles, such as the preoccupation with godlike artificial superintelligence.


Lots of very in-depth book reviews. Scott is a psychiatrist, so lots of posts on the replication crisis in the soft sciences. And plenty of "rationalist" posts, which are just deep thoughts on culture and society. I'm still not entirely sure why he's controversial. He's definitely very, very smart. Unfortunately the rationalist community has a few people who will start arguing that IQ is at least partially genetic, therefore IQ is also tied to race somehow... I think? So watch out for those landmines.


Scott is very intentional about not getting into the landmine issues. IMO part of his service to the community is helping define those boundaries for the less... socially aware members of the community (ie, the things you Do Not Discuss).


His article about Hungarian mathematicians and the followup about Israeli Nobel Prize winners definitely hits those landmines.

Link: https://slatestarcodex.com/2017/05/26/the-atomic-bomb-consid...


There should never exist a rubric known as "things you do not discuss" in any society that takes open debate and free expression seriously. Discussion is not violence and it should always be defended as a freedom regardless of how boorish or controversial its subjects. The very idea of such a category is grotesque and cowardly, to start.


When I finally took the time to learn how the proof of The Halting Problem worked, I was fascinated by just how "edge-case" it actually is: it's less a general principle than a proof that a black-hat exploit can always exist that will break any given halting-prediction algorithm.
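(For anyone who hasn't seen it, the construction is tiny. Here's a rough Python-flavored sketch of the standard diagonal argument - the names are made up for illustration, not part of any particular textbook presentation:)

    # Sketch of the "exploit": given any claimed halting predictor
    #   halts(program, input) -> True if program(input) halts, else False
    # we can build a program that makes it wrong at least once.

    def make_adversary(halts):
        def adversary(program):
            # Ask the predictor what adversary(adversary) will do,
            # then do the opposite.
            if halts(program, program):
                while True:      # predictor said "halts" -> loop forever
                    pass
            else:
                return           # predictor said "loops" -> halt immediately
        return adversary

    # Running adversary on itself forces halts() to answer incorrectly,
    # so no total, always-correct halts() can exist.

Which is exactly the "edge-case" flavor: the proof doesn't show prediction is hopeless in general, only that one adversarial case always exists.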

With the caveat that I'm a Chomskyan free-speech absolutist, and I generally agree with you: it's not hard to apply the same Halting Problem concept to free expression. Such an exploit can manifest many ways: QAnon, Red Guards, literal Nazis, Woke "Neo-Marxists", etc; but regardless, scale the intersection of extremist ideology and human social behavior until they approach infinity, and it's easy to see how they can (and historically have) resulted in a systemic collapse of free expression and free thought. (This was what Popper was trying to capture in his now-oft-quoted Paradox of Tolerance, which is now ironically over-applied as a lever of pre-emptive intolerance of challenges to orthodoxy!)

I do agree that it's both a categorical and strategic mistake to succumb to an epidemiological model of memetic extremism. Even to the extent that model applies, extremist ideas only spread under social preconditions of susceptibility (as described by Hoffer [0]), and I don't think pre-emptive idea suppression is either right, or wise, or helpful. Extremist ideological infection is more a symptom than a root cause.

And yet: we can still recognize that certain taboos might exist for a reason, such as the one against openly voicing "maybe we should just kill the people who disagree with us". James Lindsay (perhaps the second-most infamous opponent of postmodernist thought) has described postmodernism as a "universal solvent", capable of taking apart any idea. It's not that you never use such a cognitive tool; rather, one uses it cautiously and judiciously, when one has the wisdom to wield it properly. Similarly, we need safe spaces for dangerous thoughts, even of the Popperian or Halting Problem variety; yet it may be appropriate to hold social taboos against those ideas being used casually in polite society and the public discourse, lest they dissolve polite society and public discourse themselves.

[0] https://samzdat.com/2017/06/28/without-belief-in-a-god-but-n...


That's all well and good, but I'd still rather not see a bunch of autistics fired and unpersoned for unwittingly questioning modern dogma. We live in an increasingly religious time; better to go into questioning the orthodoxy with your eyes open.


> IMO part of his service to the community is helping define those boundaries for the less... socially aware members of the community (ie, the things you Do Not Discuss).

Are you talking about stuff like this? https://slatestarcodex.com/2013/10/20/the-anti-reactionary-f...



I wish more people cited Frances Yates on Bruno - his heliocentrism was a side note even before he attacked Copernicus, but he had some fantastic mnemonic techniques.


The quality of the comments on Slate Star Codex was part of what really made it great. (I assume Yates came to mind because of the comment to that Bruno post. That and many of the other comments to that post did a great job criticizing the argument that curious, intelligent people such as scientists must be irrepressible contrarians.) I hope that continues on Substack.


No, actually I didn’t get that far in the comments - I happen to have a copy of her book The Art of Memory


I just wanted to say I did read and appreciate your followup.


… And now I realise why people don't like SlateStarCodex. (I still like it.)

I'm no historian, but the stuff he wrote about Catholicism doesn't actually seem right. Argued better than I can in [0].

And some of the comments are not just atrocious, but are not argued against[1]:

> I think there are commonly-known models in all four quadrants. For example:

> a. Widely accepted and good fit for reality: (Law of supply and demand)

• “Supply and demand” is a good first-order model for certain market dynamics, but it only explains… at a guess, ⅓ of the economics I personally interact with.

> b. Socially unacceptable and good fit for reality: (IQ tests as a good proxy for mental ability)

• IQ tests are a reasonable proxy for certain, specific axes of mental ability within a subpopulation; the general “IQ tests are a good proxy for mental ability” claim is blatantly absurd.

> c. Widely accepted and bad fit for reality: (Sexism as main cause of gender wage gap)

• “Sexism is not the main cause of the gender wage gap”… I'm less certain that this is wrong, but I have actually done quite a bit of research on it (including reading what Scott Alexander wrote on the topic!) and this is splitting hairs, to be charitable; I think labelling it “rhetoric” is more accurate. No, several of the systemic injustices aren't due to individual people going “aha! I know what I'm going to do today: not pay my female underlings!”, you're right! But everybody already knows this,[2] and that's not what they mean when they say it's due to sexism. Things like a culture of “you only get a raise if you push for it” (not sure how widespread this is) can contribute to this, and that is, when considered in combination with the rest of everything (e.g. men may be “forthright” and “assertive”, but women are “bossy”), a sexist aspect of the culture.

> d. Socially unacceptable and bad fit for reality (Vaccines cause autism)

(The rest of the comment is mostly okay, apart from those three examples above; I'm cherry-picking to make a point, but I think the point's valid.)

These are just trotted out as “obvious if you're one of us”, when they're probably not even correct. You don't go into a room where people say sensible things and think that your association with those people somehow makes what you say sensible, and you certainly don't sit in the audience of an entry-level lecture and assume that your fellow audience-members are all experts in the field, so why assume that you should take as blind truth things you read in the comments of a blog‽

Also, I'm not certain I agree with the message Scott's trying to convey. This uncertainty correlates with my uncertainty about the historical accuracy of his examples: when he has a point he wants to make and finds some evidence after the fact, it's generally obvious that he's doing so (probably because that skill doesn't get much use).

[0]:https://slatestarcodex.com/2017/10/23/kolmogorov-complicity-...

[1]: https://slatestarcodex.com/2017/10/23/kolmogorov-complicity-...

[2]: https://blog.jaibot.com/


Scott has written an intro post about this on the new blog: https://astralcodexten.substack.com/p/youre-probably-wonderi...

The about page also has some links to more popular articles on the old blog: https://astralcodexten.substack.com/about


Here are some of my favorites that I'd recommend:

The Toxoplasma of Rage: An essay about how more controversial examples tend to get elevated and thinking about why that happens. https://slatestarcodex.com/2014/12/17/the-toxoplasma-of-rage...

Meditations On Moloch: An essay about how incentives and coordination problems cause systemic societal issues: https://slatestarcodex.com/2014/07/30/meditations-on-moloch/

Who By Very Slow Decay: An essay about death in medicine. https://slatestarcodex.com/2013/07/17/who-by-very-slow-decay...

I Can Tolerate Anything Except The Outgroup: An essay about tribalism https://slatestarcodex.com/2014/09/30/i-can-tolerate-anythin...

He's a good writer, as other replies mention it came out of lesswrong (see: https://www.lesswrong.com/tag/sequences).

Some lesswrong favorites (mostly Eliezer Yudkowsky):

Policy Debates Should Not Appear One-Sided: https://www.lesswrong.com/posts/PeSzc9JTBxhaYRp9b/policy-deb...

A Fable of Science and Politics: https://www.lesswrong.com/posts/6hfGNLf4Hg5DXqJCF/a-fable-of...

Pretending to be Wise: https://www.lesswrong.com/posts/jeyvzALDbjdjjv5RW/pretending...

Local Validity as a Key to Sanity and Civilization: https://www.lesswrong.com/posts/WQFioaudEH8R7fyhm/local-vali...

The Bottom Line: https://www.lesswrong.com/posts/34XxbRFe54FycoCDw/the-bottom...

---

For a fun creative fiction one from Scott: https://slatestarcodex.com/2015/06/02/and-i-show-you-how-dee...

From EY: http://www.hpmor.com/


That's a good list. I'm also very partial to

Fearful Symmetry: https://slatestarcodex.com/2015/06/14/fearful-symmetry/

although I seem to be in a minority on that one. I find it immensely calming when the horrors of the Culture War get too much. Also

G.K. Chesterton on AI Risk: https://slatestarcodex.com/2017/04/01/g-k-chesterton-on-ai-r...

for a giggle, although it'll probably only appeal to Chesterton fans.

Oh, and to anyone working through this list, please take the trigger warning on Who By Very Slow Decay seriously. He's not kidding.


Thanks - I’d probably also add this one to my list: https://www.lesswrong.com/posts/7X2j8HAkWdmMoS8PE/disputing-...

I swear 90% of arguments are basically that blog post.


Not medical advice, but I enjoyed reading https://slatestarcodex.com/2018/07/10/melatonin-much-more-th..., actually just in general all of the "much more than you wanted to know" posts are great.


Here is a decent selection of top articles: https://medium.com/handwaving-freakoutery/top-slate-star-cod...

I didn't write this, but I second most of the selection.


Also for the uninitiated, what is this post about?

Why were people trying to cancel him, cancel the Times, and more?


Summary: https://slatestarcodex.com/2020/06/22/nyt-is-threatening-my-...

NYT was writing a piece about rationality, SSC, lesswrong, east bay, maybe AGI, etc.

Evidence suggests it wasn't a hit piece (at least initially) and was just about the rationality community/bay area influence since a lot of people don't really know about lesswrong.

As part of it NYT said they had to reveal his real name and he asked them not to (details described in that blog post). This created controversy.


I’m sorry, rationality, lesswrong, AGI?? Adjusted gross income?

Et Cetera?

A lot of people know what this is and it bothers them?

I’m still lost, so I looked at that older blog post and it's also not explained there, and the linked subreddit is the “non political one”

so, reading the room here, there is a political context and those above terms are political and I should google something about “lesswrong politics”

I’ll maybe check out that particular rabbit hole in synthesizing but can you enlighten me further because I still have no idea what you’re talking about


AGI = Artificial General Intelligence. Watch this for the main idea around the goal alignment problem: https://www.youtube.com/watch?v=EUjc1WuyPT8

They're explicitly not political. LessWrong is a website/community, and rationality is about trying to think better by being aware of normal cognitive biases and correcting for them. Also trying to make better predictions and understand things better by applying Bayes' theorem when possible to account for new evidence: https://en.wikipedia.org/wiki/Bayes%27_theorem (and being willing to change your mind when the evidence changes).
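To make the Bayes' theorem part concrete, here's a tiny worked example in Python - the numbers are invented purely for illustration:

    # Bayes' theorem: P(H|E) = P(E|H) * P(H) / P(E)
    # Toy question: a test for a rare condition comes back positive.
    # How much should that move your belief?

    prior = 0.01                    # P(H): 1% base rate
    p_pos_given_h = 0.90            # P(E|H): test catches 90% of real cases
    p_pos_given_not_h = 0.05        # P(E|not H): 5% false-positive rate

    p_pos = p_pos_given_h * prior + p_pos_given_not_h * (1 - prior)
    posterior = p_pos_given_h * prior / p_pos

    print(round(posterior, 3))      # 0.154

So a single positive result moves the estimate from 1% to about 15%, not to 90% - the evidence shifts the belief, but the prior still matters.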

It's about trying to understand and accept what's true no matter what political tribe it could potentially align with. See: https://www.lesswrong.com/rationality

For more reading about AGI:

Books:

- Superintelligence (I find his writing style somewhat tedious, but this is one of the original sources for a lot of the ideas): https://www.amazon.com/Superintelligence-Dangers-Strategies-...

- Human Compatible: https://www.amazon.com/Human-Compatible-Artificial-Intellige...

- Life 3.0: A lot of the same ideas, but written at the other extreme of style from Superintelligence, which makes it more accessible: https://www.amazon.com/Life-3-0-Being-Artificial-Intelligenc...

Blog Posts:

- https://intelligence.org/2017/10/13/fire-alarm/

- https://www.lesswrong.com/tag/artificial-general-intelligenc...

- https://www.alexirpan.com/2020/08/18/ai-timelines.html

The reason the groups overlap a lot with AGI is that Eliezer Yudkowsky started LessWrong and founded MIRI (the Machine Intelligence Research Institute). He's also formalized a lot of the thinking around the goal alignment problem and the existential risk of discovering how to create an AGI that can improve itself without first figuring out how to align it with human goals.

For an example of why this is hard: https://www.lesswrong.com/posts/4ARaTpNX62uaL86j6/the-hidden... and probably the most famous example is the paperclip maximizer: https://www.lesswrong.com/tag/paperclip-maximizer


Great yeah that sounds like something I wish I knew existed

It's been very hard to find people able to separate their emotions from an accurate description of reality even if it sounds like a different political tribe, or more so that people are more willing to assume you are part of a political tribe if some words don't match their political tribe’s description of reality, even if what was said was most accurate

I’m curious what I will see in these communities


I recommended some of my favorites in another comment: https://news.ycombinator.com/item?id=25866701

I found the community around 2012 and I remember wishing I had known it existed too.

In that list, the less wrong posts are probably what I'd read first since they're generally short (Scott Alexander's are usually long) and you'll get a feel for the writing.

Specifically this is a good one for the political tribe bit: https://www.lesswrong.com/posts/6hfGNLf4Hg5DXqJCF/a-fable-of...

As an aside about the emotions bit, it’s not so much separating them but recognizing when they’re aligned with the truth and when they’re not: https://www.lesswrong.com/tag/emotions


Good... content. I don't really have a good summary except that he digs into random issues (and thought experiments) with an intellectual honesty and wit + clarity that I haven't found elsewhere.


Popsci about "nootropics" and IQ.





