He has no formal education. He has never produced anything in the actual field of AI, except very general musings (first that AI would come, then about alignment and doomsday scenarios).
He isn't an AI researcher except in the sense that he founded an institution that says he is one, kind of as if I created a club and declared myself president of that club.
He has no credentials (that aren't made up), isn't acknowledged by real AI researchers or scientists, and shows no accomplishments in the field.
His actual verifiable accomplishments seem to be having written Harry Potter fan fiction that was well received online, plus some (dodgy) explanations of Bayes' theorem, a topic he is bizarrely obsessed with. Apparently learning Bayes in a statistics class, where normal people learn it, isn't enough -- he had to make something mystical out of it.
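For the record, the theorem he mystifies fits on one line (my own gloss, not his):

    P(H|E) = P(E|H) * P(H) / P(E)

Plug in some numbers, which I'm inventing purely for illustration: a disease with 1% prevalence and a test with a 90% true-positive rate and a 5% false-positive rate gives P(disease | positive) = (0.9 * 0.01) / (0.9 * 0.01 + 0.05 * 0.99), which is about 15%. That's the whole trick, and any intro statistics course covers it.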
Why does anyone care what EY has to say? He's just an internet celebrity for nerds.
It is true that he has no academic credentials, but people with academic credentials have been employed on the research program he leads: Andrew Critch, for example, who has a PhD in math from UC Berkeley, and Jesse Liptrap, who also has a math PhD from a prestigious department, though I cannot recall which one.
It's not only that he has no academic credentials; he also has no accomplishments in the field. He has no relevant peer-reviewed publications in mainstream venues (of course he publishes things under his own institution, but I don't consider those peer reviewed). Even if you're skeptical about academia and only care about practical achievements... Yudkowsky is also not a businessman or engineer who built something. He doesn't actually work with AI, he hasn't built anything tangible; he just talks about alignment in the vaguest terms possible.
At best -- if one is feeling generous -- you could say he is a "philosopher of AI"... and not a very good one, but that's just my opinion.
Eliezer looks to me like a scifi fan who theorizes a lot, not a scientist. So why do (some) people give any credence to his opinions on AI? He's not a subject matter expert!
Ok, but hundreds of thousands of people have worked for Google without being experts on AI. Anyone who employs one doesn't automatically become more credible. If you believe that, then I want you to know that this comment was written by an ex-Google employee and thus must be authoritative ;)
Good point! If I could write the comment over again, I'd probably leave out the ex-Googlers. But I thought of another math PhD who was happy to work for Eliezer's institute, Scott Garrabrant. I could probably find more if I did a search of the web.
If you believed (as Eliezer has since about 2003) that AI research is a potent danger, you would not do anything to help AI researchers. You would not, for example, publish any insights you had that might advance the state of the art in AI.
Your comment is like dismissing someone who is opposed to human cloning on the grounds that he hasn't published any papers that advance the enterprise of human cloning and hasn't worked in a cloning lab.
> [...] remember the point I was responding to, namely, Eliezer should be ignored because he has no academic credentials.
That's not the full claim you were responding to.
You were responding to me, and I was arguing that Yudkowsky has no academic credentials, but also that he has no background in the field he claims to be an expert on, that he self-publishes and is not peer reviewed by mainstream AI researchers or the scientific community, and that he has no practical AI achievements either.
So it's not just the lack of academic credentials; there is also the lack of achievements in the field he claims to research. Together, those two facts paint a damning picture of Yudkowsky.
To be honest, he seems like a scifi author who took himself too seriously. He writes scifi; he's not a scientist.
OK, but other scientists treat him as a scientist or an expert on AI. Stephen Wolfram, for example, recently sat down for a four-hour interview about AI with Eliezer, during which Wolfram refers to a previous (in-person) conversation the two had and says he hopes the two can have another (in-person) conversation in the future:
His book _Rationality: A-Z_ is widely admired, including by people you would concede are machine-learning researchers: https://www.lesswrong.com/rationality
Anyway, this thread began as an answer to a question about the community of tens of thousands of people that has no better name than "the rationalists". I didn't want to get into a long conversation about Eliezer, though I'm willing to keep conversing about the rationalists or about the proposition that AI is a potent extinction risk, a proposition taken seriously by many people besides Eliezer.
He has received a salary for working on AI since 2000 (with the title "research fellow"). In contrast, he didn't start publishing his Harry Potter fan fiction until 2010. I seem to recall his publishing a few sci-fi short stories before then, but his public non-fiction written output greatly exceeded his fiction output until a few years ago, when he became semi-retired due to chronic health problems.
>He’s basically a PR person for OpenAI and Anthropic
How in the world did you arrive at that belief? If it were up to him, OpenAI and Anthropic would be shut down tomorrow and their assets returned to shareholders.
Since 2004 or so, he has been of the view that most research in AI is dangerous and counterproductive, and he has not been shy about saying so at length in public, e.g., getting a piece published in Time magazine a few years ago opining that the US government should shut down all AI labs and start pressuring China and other countries to shut down the labs there.
> He has received a salary for working on AI since 2000 (having the title "research fellow")
He is a "research fellow" in an institution he created, MIRI, outside the actual AI research community (or any scientific community, for that matter). This is like creating a club and calling yourself the president. I mean, as an accomplishment it's very suspect.
As for his publications, most are self-published and very "soft" (on alignment, the ethics of AI, etc.). What are his bona fide AI works? What makes him a "researcher"? What did he actually research, how and when was it reviewed by peers (non-MIRI-adjacent peers), and how is it different from just publishing blog posts on the internet?
On what does he base his AI doomsday predictions? Which models, which assumptions? What makes him different from any scifi geek who has read and watched scifi about apocalyptic scenarios?