One of the things that shocked me most about the futurist/AI/techbro sphere is that folks take people like Eliezer Yudkowsky seriously. Lex Fridman asked him what his advice for young people was, and Eliezer answered, "don't expect to live long." He has advocated for airstrikes on datacenters that try to advance AI. I don't know what to say; if these are the intellectual references of our brightest minds, it terrifies me that they're all mentally children.
> I don't know what to say; if these are the intellectual references of our brightest minds, it terrifies me that they're all mentally children.
That is part of the fear: human intelligence is extremely limited. That attitude could easily be representative (though, to be fair, it probably isn't).
Reasoning about the future is really hard, and almost everyone who makes predictions turns out to be wrong. So far we've come through pretty much every challenge fine. That said, "we'll survive this" is a statement that is only false once, and this one looks pretty serious. We might be dealing with a world where human intelligence is economically uncompetitive for the first time in recorded history.
Yudkowsky sticks out because he's weird and his position is extreme, but his p(doom) is by no means representative of the increasingly many worried researchers.
I find it tricky to think about cases like Yudkowsky's (full disclosure: I used to read LessWrong a lot), because if he has sufficient credibility, then loudly staking out an extreme position can indeed move the Overton window.
Yudkowsky comes across as the archetypal fedora-wearing nerd. His constant self-flattery is socially obtuse, his relevant credentials are entirely lacking, and his doomerism is an instant turnoff to a large segment of the public. He's a perfect example of the OP's "outside argument".
If Yudkowsky is moving the Overton window, it's in a direction opposite from what he intends.
There is a quote from Eliezer in the linked article that I hope he is remembered by, because it's actually quite beautiful:
"Coherent Extrapolated Volition (CEV) is our wish if we knew more, thought faster, were more the people we wished we were, had grown up farther together; where the extrapolation converges rather than diverges, where our wishes cohere rather than interfere; extrapolated as we wish that extrapolated, interpreted as we wish that interpreted."
We should, as a society, offer help to people who are caught in the cynicism trap, because we could still have all those things Eliezer sees as our better selves. We just have to spend more time focusing on being our better selves and less time giving up on each other.
Hmm, if it's spreading so much, maybe it's not a cult?
Even if you still take the cynical view that it's all wrong: Christianity took over, and nobody calls it a cult anymore. At least give it the dignity of "religion".
Totally agree. There do seem to be a bunch of accomplished people who have acquired "mind bugs" from people whose job is to write and be popular, not to produce very much.
Yudkowsky is a prime example of the latter group. I think I became aware of him through the Roko's Basilisk meme many years ago, and I thought, "wow, so this is what happens when you have a lot to think about and nothing to do." It's basically nonsense.
Two other people like that are Nick Bostrom and Will MacAskill.
I heard of "Superintelligence" by Bostrom probably through an Elon Musk recommendation in 2015 or so. That's when he started OpenAI, and I was working on AI at the time. (Remember Elon had a completely different reputation then; he was respected as a person who built things, and wasn't associated with any political ideas.)
So I got the book, and to its credit, the beginning acknowledges that the rest of the book may be complete bullshit. It's extremely low-confidence extrapolation. And I pretty much agreed with that preface -- most of the book was nonsense. However, many people seemed to take it seriously.
(I'm not saying AI is going to be great, or catastrophic either. I was just looking for some kind of high quality analysis from someone with domain knowledge, and found none of it in that book.)
----
I think I became aware of MacAskill through a NYTimes article puffing his book last year. There was something "off" about it.
And a few months later, it turned out that he was conveniently unaware that the only reason his projects were funded was that his friend was committing a huge crime.
So yeah, obviously I can't take seriously the moral opinions of somebody whose morals fall down immediately and spectacularly when faced with a whiff of the real world.
It seems like his main job is to create and advocate ideas that are intellectually appealing to billionaires and in their interest. It came out in the FTX aftermath that he was telling Elon that SBF could help him buy Twitter.
BTW, Elon also recommended the MacAskill book, which is another thing that has lowered my opinion of Elon. People are flattering him with ideas, and he's taking the bait.
----
Anyway, normally I wouldn't have any awareness of the kind of writing that these 3 people produce. Much of it is low quality, which I would expect given their lack of experience and expertise in the domain of AI. And 75% of it couldn't have any effect on the world -- even in principle.
But like you, I'm puzzled that some people I respect seem to take them seriously. I mean I do respect Fridman, since I've learned a bunch of things from his podcasts with other guests. (I understand why a lot of people don't like him, but I prefer to focus on a person's best work and not their worst.)
But yeah, the best work of Yudkowsky, Bostrom, and MacAskill doesn't seem noteworthy.
It seems designed to generate attention, and that's mostly it.
The extremely, extremely obvious thing they're missing (or just fail to emphasize because it won't get them attention) is that you should be scared of corporations and governments with AI (first), not scared of autonomous AI with its own will.
That is, the "superintelligence" and "longterm-ism" ideas are basically deflections from the real issues with AI.
----
So yeah, I was puzzled about this whole thing, and someone on HN recommended this substack, and I recommend it too. It's pretty interesting, and it draws a line from the '90s extropians@ mailing list through Yudkowsky, Bostrom, and multiple founders of Bitcoin.
It also covers a lot of dark stuff like suicides, mental illness, and abuse in the rationalist movement.
What also seems to have happened in this era is that Yudkowsky and MIRI, flush with newly donated millions, decided to try to evolve from theory and community-building to actual practical AI research. It did not go well.
This was quite a claim: essentially, that this research would, if published, meaningfully accelerate progress on perhaps the most extraordinarily difficult challenge in the history of human innovation. It is a claim that remains 100% unsubstantiated.
He isn't basically running a cult; he is literally running a cult, where he teaches people a new philosophy of thought that causes them to realize they should logically move into a group home in Berkeley and join a polycule, i.e., a high-engagement alternative sexual practice that's extremely difficult to leave.
This shouldn't be surprising, though (it should "conform to your priors"), because that's just what people in Berkeley do: they start hippie sex cults. When they're not inventing nuclear weapons and blocking new housing projects.
It usually comes in the form of biographies of young tech workers, with an aside about how they logicked their girlfriends into open relationships and then got depressed when they got broken up with. There's at least one of these in The New Yorker.