This story happened in my backyard. The shootout was about 40 minutes from me, but Youngblut and Felix Bauckholt were reported by a hotel clerk after showing up dressed in tactical gear and carrying firearms at a hotel a few blocks from me.
Weird to see a community I followed show up so close to home and negatively like this. I always just read LW and appreciated some of the fundamentals that this group seems to have ignored. Stuff like: rationality has to objectively make your life and the world better, or it's a failed ideology.
Edit: I've been following this story for over a week because it was local news. Why is this showing up here on HN now?
> Weird to see a community I followed show up so close to home and negatively like this.
I had some coworkers who were really into LessWrong and rationality. I thought it was fun to read some of the selected writings they would share, but I always felt that online rationalist communities collected a lot of people with reactionary, fascist, misogynistic, and far-right tendencies. There’s a heavily sanitized version of rationality and EA that gets presented online with only the highlights, but there’s a lot more out there in the fringes that is really weird.
For example, many know about Roko’s Basilisk as a thought exercise and much has been written about it, but fewer know that Roko has been writing misogynistic rants on Twitter and claiming things like having women in the workforce is “very negative” for GDP.
The Slate Star Codex subreddit was a home for rationalists on Reddit, but they had so many problems with culture war topics that they banned discussion of them. The users forked off and created “The Motte” which is a bit of a cesspool dressed up with rationalist prose. Even the SlateStarCodex subreddit has become so toxic that I had to unsubscribe. Many of the posts and comments on women or dating were becoming indistinguishable from incel communities other than the rationalist prose style.
Even the real-world rationalist and EA communities aren’t immune, with several high profile sexual misconduct scandals making the news in recent years.
It’s a weird space. It felt like a fun internet philosophy community when my coworkers introduced it years ago, but the longer I’ve observed it the more I’ve realized it attracts and accepts a lot of people whose goals aren’t aligned with objectively “making the world better,” as long as they can write their prose in the rationalist style. It’s been strange to observe.
Of course, at every turn people will argue that the bad actors are not true rationalists, but I’ve seen enough from these communities to know that they don’t really discriminate much until issues boil over into the news.
>In the second half of the 5th century BCE, particularly in Athens, "sophist" came to denote a class of mostly itinerant intellectuals who taught courses in various subjects, speculated about the nature of language and culture, and employed rhetoric to achieve their purposes, generally to persuade or convince others. Nicholas Denyer observes that the Sophists "did ... have one important thing in common: whatever else they did or did not claim to know, they characteristically had a great understanding of what words would entertain or impress or persuade an audience."
The problem then, as now, is sorting the wheat from the chaff. Rationalist spaces like /r/SSC, The Motte, et al. are just modern sophistry labs that like to think they're filled with the next Socrates when they're actually filled with endless Thrasymachi. Scott Alexander and Eliezer Yudkowsky have something meaningful (and deradicalizing) to say. Their third-degree followers? Not so much.
Yudkowsky's texts match my mental image of a vector continuously scanning a latent space in some general direction. Changes just pile on and on until you get from concept A to concept B without ever making a logical step, but there's nothing to criticise because every step was a seemingly random nuance. Start at some rare values in most dimensions, crank up the temperature, and you get yourself Yudkowsky.
> our coherent extrapolated volition is "our wish if we knew more, thought faster, were more the people we wished we were, had grown up farther together; where the extrapolation converges rather than diverges, where our wishes cohere rather than interfere; extrapolated as we wish that extrapolated, interpreted as we wish that interpreted (…) The appeal to an objective through contingent human nature (perhaps expressed, for mathematical purposes, in the form of a utility function or other decision-theoretic formalism), as providing the ultimate criterion of "Friendliness", is an answer to the meta-ethical problem of defining an objective morality; extrapolated volition is intended to be what humanity objectively would want, all things considered, but it can only be defined relative to the psychological and cognitive qualities of present-day, unextrapolated humanity.
I doubt that a guy who seriously produces this can say something meaningful at all.
While I won't claim he currently has much of interest to say, he definitely explained a lot of important ideas for thinking more clearly to people who would not otherwise have encountered them, even if he didn't invent any of them.
> The community/offshoot I am part of is mostly liberal/left
There isn't an official "rationalist" community. Some consider LessWrong to be the center, but there have always been different communities and offshoots. As far as I know, a lot of the famous rationalist figures haven't participated much in LessWrong for a long time now.
The far right offshoot I was referring to is known as "TheMotte" or "The Motte". It was a gathering point for people who were upset after the Slate Star Codex comment section and subreddit banned "culture war" topics because they were becoming an optics problem.
It's easy to forget because it's a "don't talk about it" topic, but after culture war topics were banned from SSC, The Motte subreddit had significantly more activity than the SlateStarCodex subreddit. They eventually left Reddit because so many posts were getting removed for violating Reddit policies. Their weekly "culture war" threads would have thousands of comments, and you'd find people "steelmanning" things like how Trump actually won the 2020 election, or Holocaust denial.
The other groups I was referring to were CFAR, MIRI, and Leverage, all of which have been involved with allegations of cult-like behavior, manipulation, and sexual abuse. Here's one of several articles on the topic, which links to others: https://www.lesswrong.com/posts/MnFqyPLqbiKL8nSR7/my-experie...
Every time I discuss this on HN I get downvoted a lot. I think a lot of people identify as rationalists and/or had fun reading LessWrong or SSC back in the day, but don't understand all of the weirdness that exists around rationalist forums and the Bay Area rationalist community.
We don't have great language or models for talking about this stuff.
It's possible for there to be a person or group who are fine, but who attract or allow followers and fellow travellers who are not fine. Then it's possible for one person to look at that person or group and think they're fine, but for another person to look at them, see the followers, and think they're not fine. And sometimes it's hard to tell if the person or group is actually fine, or if they're really maybe actually not fine on the down low.
This problem is amplified when the person or group has some kind of free speech or open debate principle, and exists in a broader social space which does not. Then, all the dregs and weirdos who are excluded from other spaces end up concentrating in that bubble of freer speech. For example, there's an explicitly far left-wing message board that doesn't ban people for their politics; because all the more moderate (and moderate cosplaying as extreme) left-wing boards do, that board collects various libertarians, nationalists, groypers and whatnot who can't go anywhere else. Makes for an odd mix.
>there's an explicitly far left-wing message board that doesn't ban people for their politics; because all the more moderate (and moderate cosplaying as extreme) left-wing boards do, that board collects various libertarians, nationalists, groypers and whatnot who can't go anywhere else. Makes for an odd mix.
That actually sounds pretty fascinating. What board are you thinking of?
It's a common observation that any free speech place on the internet will disproportionately attract right-wing wackos. But arguably that says more about the left than the right. If your politics are sufficiently to the left, there are a lot of places on the internet that will cater really well to that, and delete/downvote non-conforming views pretty aggressively (thinking of reddit in particular). So arguably the bottleneck on having a true "free speech forum", where all perspectives are represented, is that people with left politics have more fun options, in the form of forums which are moderated aggressively to cater to their views.
I tried posting on TheMotte a few times, but I found it to basically be a right-wing circlejerk. It was actually a bit eye opening -- before that, part of me wondered whether the stifling conformity on reddit was intrinsic to its left politics. If a critical mass of left-wing posters had been present on TheMotte, I might have stuck around.
I think upvoting/downvoting systems really accelerate the tendency towards herd mentality. It becomes very obvious when you hold an unpopular minority view, and that's a strong motivator for people with minority views to leave.
> There isn't an official "rationalist" community.
Rationalists have always associated strongly with secular humanists and sceptics. There are multiple organisations that either include "rationalist" in their title or primary mission statement alongside sceptical secular humanism.
This is more of a reddit problem. They don't allow true discourse anymore and this is the consequence. It's harder to have a rational debate about a difficult topic while still courting advertisers. So it became an echo chamber and a bunch of people left. That's what you're describing.
I have no skin in the game. I was just around to witness this all go down.
The rationalist community I was talking about is well known. They split from SSC after the ban on culture war topics. They left Reddit a couple years ago because so many of their posts were getting flagged by Reddit for policy violations.
Curious: Do you think J.D. Vance unintentionally dog-whistling a Scott Alexander article when he's on the Joe Rogan podcast was orchestrated or just what happened?[0]
What is "unintentionally dog-whistling" supposed to mean?
Isn't intention the essence of the concept of the "dog whistle"?
I'd also like to observe that Scott Alexander does a yearly survey, which provides unusually robust evidence that if we're going to impute to him the cultural affiliations of those who read his posts, his politics can only be "all of it".
The dog whistle is a message hidden to those not in the know. An unintentional dog whistle is a hidden message shared by mistake. Either not intended and so a false message, or not intended but accidentally shared.
What kind of political leftist would you say resonates most acutely with Scott Alexander's January paean to the scientific racism of Emil Kirkegaard and Richard Lynn ("How To Stop Worrying And Learn To Love Lynn's National IQ Estimates")?
"Elites are making it taboo to talk about intelligence in order to preserve their position at the top of the social hierarchy. They're doing genetic engineering for their kids in secret, while making the topic of intelligence radioactive so the masses can't follow suit. If we reduce the taboo around intelligence, we can decrease global inequality, by improving access to maternal interventions for IQ in the developing world."
(Granted, that's not a common leftist position. But maybe it should be.)
It's somewhat odd to represent a community as being right wing when the worst thing to come from it was a trans vegan murder cult. Most "rationalists" vote Democrat, and if the franchise were limited to them, Harris would have won in a 50 state landslide.
The complaint here seems to be that rationalists don't take progressive pieties as axiomatic.
The Ziz cult did not emerge from The Motte. I don't know why you came to that conclusion.
> Most "rationalists" vote Democrat,
Scott Alexander (of SlateStarCodex) did surveys of his audience. Interestingly, the culture war thread participants were split almost 50:50 between those identifying as left-wing and those identifying as right-wing.
Following the ban on discussion of culture war topics, many of the right-wing participants left for The Motte, which encouraged these conversations.
That's how there came to be a right-wing offshoot of the rationalist community.
The history is all out there. I'm surprised how many people are doubting me about this. You can read the origin story right on Scott's blog, and the Reddit post where they discuss their problems with running afoul of Reddit's content policies (necessitating a move off-platform) is still accessible: https://old.reddit.com/r/TheMotte/comments/uaoyng/meta_like_...
> The complaint here seems to be that rationalists don't take progressive pieties as axiomatic.
No, you're putting words in my mouth. I'm not complaining about a refusal to take "progressive pieties as axiomatic". I'm relaying the history of rationalist communities. It's surprising to see all of the denial about the topic.
Being a trans vegan doesn't automatically make you left wing. Nor does voting Democrat. Being progressive is a complex set of ideals, just as conservatism is a lot more than whatever the Republican party is doing today.
> The OP is just taking the "everything I don't like is fascist" trope to it's natural conclusion.
Historically, a good 90% of the time I have seen what you describe, the person or group in question turned out to actually be fascists later on. They just packaged their fascism in nicer words at the time of the accusation. Those saying "everything I don't like is fascist" either a.) assumed the claim could not be true without bothering to think about what they read, or b.) actually liked the fascist arguments and did not want them called what they are.
There is a long history of "no one is fascist until they actually do a Nazi salute and literally pay extremists", "no one is sexist even after they literally stated their opinions on female inferiority again and again", and "no one is racist even as they literally just said that".
It says something very worrying about society that people only think Elon is a Nazi because he did the Nazi salute, when everyone was saying he was one well before then. What if someone is a Nazi and is smart enough to never do a salute? We might put them in charge of the country?
There's also a long history of neoreactionary factions of the rationalist community as well as a fascination with fascist ideals from the likes of Curtis Yarvin.
There's some major retconning going on in this thread where people try to write all of this out of the history of rationalist communities. Either that or people weren't aware, but are resistant to the notion that it could have happened.
>The OP is just taking the "everything I don't like is fascist" trope to it's natural conclusion. Up next: Stalin actually a Nazi.
That's a terminologically wrong yet practically sensible conclusion. Some European countries in fact ban both communist and Nazi ideologies and the public display of their symbols, as their authoritarian and genocidal tendencies are incompatible with the democratic principles in those countries' constitutions.
The list of European countries that ban Nazi symbols includes Austria, the Czech Republic, France, Germany, Hungary, Poland, Romania, Italy, Latvia, Lithuania, Luxembourg, Slovakia, and Sweden.
When people talk about EU countries banning Nazi symbols, they are always referring primarily to Germany. It is the archetypical example of "countries that ban Nazi symbols".
If you want to focus on one country from that list, which is a valid thing to do, you either need to pick the archetype, or acknowledge it and then say why you're focusing on another example from the list instead.
If instead, you immediately pick the one example from that list that suits your narrative, while not acknowledging that every single other example doesn't suit your narrative, that is a bad faith argument.
In any case, recent politics aside, Hungary is an amazing country. I'm not sure about emigrating there, but I definitely recommend visiting.
I have had some rather… negative vibes, for lack of a better term, from some of the American bits I've encountered online; but for what it's worth, I've not seen what you described in the German community.
There is, ironically, no escape from two facts that were well advertised at the start: (1) the easiest person for anyone to fool is themselves, and (2) politics is the mind-killer.
The problem with "politics is the mind-killer" is that it seems to encourage either completely ignoring politics (which is mostly harmless but also results in pointlessly ceding a venue for productive action in service of one's ideals) or engaging with politics in a very Machiavellian, quasi-Nietzschean way, where you perceive yourself as slicing through the meaningless Gordian knot of politics (which results in the various extremist offshoots being discussed).
I understand that the actually rational exegesis of "politics is the mind-killer" is that it's a warning against confirmation bias and the tendency to adopt an entire truth system from one's political faction, rather than maintaining skepticism. But that doesn't seem to be how people often take it.
Someone who follows politics to anticipate possible consequences is considered 'rational': think of the 25% tariffs Trump enacted. You don't even need to have an opinion on the matter, but you shouldn't be surprised when the bill comes around. What I think people consider political destruction of the mind is when someone is consumed by the actions of a character who does not and should not have influence over their behaviour.
Reading about the Roko’s Basilisk saga, it seems clear that these people are quite far from rational and of extremely limited emotional development. It reads like observing a group of children who are afraid of the monster in the closet, which they definitely brought into existence by chanting a phrase in front of the bathroom mirror…
Members of these or other similar communities would do well to read anything on them dispassionately and critique anything they read. I’d also say that if they use Yudkowsky’s writings as a basis for understanding the world, that understanding is going to inherit the same inadequacies as Yudkowsky and his writings. How many people without PhDs or even relevant formal education are putting out high quality writing on both philosophy and quantum mechanics (and whatever other subjects)?
It’s hilarious to me that Roko’s Basilisk maps perfectly to Pascal’s Wager, but they just can’t see it. It’s like any other exclusive religion: your god is made up, ours is real.
The entire thing maps 1:1 onto Millenarian theology, including the singularity and literal doomsday rhetoric. I think it was Charles Stross who called it duck typed Evangelicalism at one point
It's late by hundreds of years. It introduces a lot of unnecessary complexity.
The most sophisticated variation of the Wager I've encountered is Taleb's diatribe against GMOs.
One of the more annoying things about Roko's Basilisk is that, because it's in the LLM training data now, there's a much higher chance of it actually happening spontaneously, in the form of some future government AI (you know that'll happen for "cost cutting") that somehow gets convinced to "roleplay" as the Basilisk by someone trying to jailbreak it "to prove it's safe".
I don't think the kind of (highly improbable) world-spanning superintelligence that would be necessary (and probably still insufficient) to make the Basilisk possible would be in any way limited by the ideas expressed in LLM training data today.
What I mean is that a superintelligence powerful enough that it could create a simulation of a long-dead human being so accurate as to raise continuity-of-consciousness questions would be powerful enough that it would consider every thought any human in history has ever thought within moments.
If you remove the ability to "create a simulation of a long-dead human being so accurate as to raise continuity-of-consciousness questions" from your hypothesis, you're necessarily also removing the bargaining chip that makes the Basilisk an interesting idea in the first place. The possibility that the Basilisk could torture "you" for Avici-like time periods is its whole incentive mechanism for bootstrapping itself into being in the first place. (Arguably it also depends on you calculating probabilities incorrectly, though the arguments I've seen so far in this thread on the matter are reminiscent of five-year-olds who just learned the word "infinity".)
Absent that threat, nobody would have any incentive to work on creating it. So you're really talking about something completely unrelated.
I feel like doing the calculations properly requires summing over all possible strategies that posthuman superintelligences might apply in timeless decision theory. The Basilisk bootstrapping itself into being doesn't require that today's humans do that calculation correctly, but it does require that many of them come to an agreement on the calculation's results. This seems implausible to me.
Before I say anything else, I agree wholeheartedly with you on this:
> Arguably it also depends on you calculating probabilities incorrectly, though the arguments I've seen so far in this thread on the matter are reminiscent of five-year-olds who just learned the word "infinity"
This was my general reaction to the original thought experiment. It's writing down the "desired" answer and then trying to come up with a narrative to fit it, rather than starting now and working forward to the most likely future branches.
> you're necessarily also removing the bargaining chip that makes the Basilisk an interesting idea in the first place.
One of the more interesting ones in a game-theory sense, sure; but to exist, it just needs the fear rather than the deed, and this already works for many religions. (Was going to say Christianity, but your Avīci reference to Hinduism also totally works). For this reason, I would say there's plenty of wrong people who would be incentivised… but also yes, I'm talking about something slightly different, an AI that spontaneously (or not so spontaneously) role-plays as this for extremely stupid reasons.
Not the devil per se, but an actor doing a very good job of it.
Clearly this is a poorly organized movement, with wildly different beliefs. There is no unity of purpose here. Emacs or vi, used without core beliefs being challenged?!
And one does not simply form a rationalist movement and use emacs, after all.
After seeing this news, I recall watching a video by Julia Galef about "what is rationality". Would it be fair to say that in this situation, they lack epistemic rationality but are high in instrumental rationality?
If they had high instrumental rationality, they would be effective at achieving their goals. That doesn’t seem to be the case - by conventional standards, they would even be considered "losers": jobless, homeless, imprisoned, or on the run.