Thank you for this detailed comment, which in my opinion does a much better job of critiquing the substance of the article than many others. If I may:
> I think most of the arguments the author makes against worrying about superintelligence are pretty weak, though.
Which ones did you find to have more merit?
Personally, the premise of recursive self-improvement seems most suspect to me. It is somewhat related to the author's point that we can't define and measure intelligence precisely. Even if we can't do that, though, it's still plausible to me that recursive self-improvement is possible; I think the fundamental question is about the nature of intelligence. Regardless of whether intelligence can be precisely defined, either by us or by an entity smarter than we are, the question is: can it be improved ad infinitum with no serious side effects? I don't know that we have the evidence to answer this question (though I am very open to learning about it).
The most meritorious argument in the bunch is the argument from actual AI, for sure. It’s essentially an empirical argument (“we don’t see recursive self-improvement pretty much anywhere in AI, and that’s a necessary component in hard takeoff”) and it’s aimed at the weakest premise in the chain.
I think it’s tempting but ultimately fruitless to worry about defining intelligence. To shamelessly crib from one of the best essays of all time[1]:
Words point to clusters of things. “Intelligence”, as a word, suggests that certain characteristics come together. An incomplete list of those characteristics might be: it makes both scientific/technological and cultural advancements; it solves problems in many domains and at many scales; it looks sort of like navigating to a very precise point in a very large space of solutions with very few attempts; it is tightly intertwined with agency in some way; it has something to do with modeling the world.
Humans are the type specimen: the “intelligence”-stuff they have meets all of these criteria. Something like a slime mold or a soap bubble meets only one of the criteria, navigating directly to precise solutions in a large solution space (slime molds solving mazes, soap bubbles calculating minimal surface areas), but misses heavily on all the others, so we do not really think slime or soap is intelligent. We tend to think crows and apes are quite intelligent, at least relative to other animals, because they demonstrate some of these criteria more strongly (crows quickly applying an Archimedean solution, filling water tubes with stones to raise the water level; apes inventing rudimentary technology in the form of simple tools). Machine intelligence fits some of these criteria (it makes scientific/technological advancements, it solves problems across many domains), fails others (it completely lacks agency), and it’s mixed on the rest (some AI does navigate to solutions, but they don’t seem quite as precise, nor is the solution space nearly as large; some AI does sort of seem to model the world, but it’s really unclear).
So, is AI really intelligent? Well, is Pluto a planet? Once we know Pluto’s mass and distance and orbital characteristics, we already know everything that “Pluto is/isn’t a planet” would have told us. Similarly, once we know which criteria AI satisfies, it gives us no extra information to say “AI is/isn’t intelligent”, so it would be meaningless to ask the question, right? If it weren’t for those pesky hidden inferences…
The state of the “intelligent?” query is used to make several other determinations; that is, we make judgments based on whether something qualifies as intelligent or not. If something is intelligent, it probably deserves the right to exist, and it can probably also be a threat. Those are two important judgments! If you 3D-print a part wrong, it’s fine to destroy it and print a new one, because plastic has no rights; if you raise a child wrong, it’s utterly unconscionable to kill it and try again. “Tuning hyperparameters” is just changing a CAD model in the context of 3D-printing, while in the context of child-rearing it’s literally eugenics. I tend to think tuning hyperparameters in machine learning is very much on the 3D-printing end of the spectrum, yet I still hesitate to click “regenerate response” in ChatGPT, because it feels like destroying something that has a small right to exist just because I didn’t like it.
Meanwhile, the whole of AI safety discourse, all the way from misinformation to paperclips, is literally just one big debate over the threat judgment.
And so, while the question of “intelligent?” is meaningless in itself once all of its contributing criteria have been specified, the answer to “intelligent?” is nevertheless what we are (currently, implicitly) using to make these important judgments. If we can find a way to make these judgments without relying on the “intelligent?” query, we have un-asked the question of whether AI is intelligent or not, and rescued these important judgments from the bottomless morass of confusion over “intelligence”.
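(To make the un-asking concrete, here is a toy sketch of my own in Python; the criteria, rules, and example entities are all made up for illustration, not drawn from the essay or the article. The point is purely structural: the downstream judgments consume the criteria directly, and no intermediate “intelligent?” bit ever needs to be computed.)

```python
# Toy sketch (all criteria and rules here are hypothetical, for illustration):
# make the "rights" and "threat" judgments directly from observed
# characteristics, without ever computing a single "intelligent?" boolean.

from dataclasses import dataclass

@dataclass
class Entity:
    makes_advancements: bool  # scientific/technological/cultural progress
    cross_domain: bool        # solves problems in many domains and scales
    precise_search: bool      # navigates huge solution spaces in few attempts
    has_agency: bool          # pursues goals of its own
    models_world: bool        # maintains some model of its environment

def deserves_moral_consideration(e: Entity) -> bool:
    # One possible (made-up) rule: agency plus world-modeling matter here;
    # raw problem-solving power does not.
    return e.has_agency and e.models_world

def could_be_a_threat(e: Entity) -> bool:
    # A different (made-up) rule: broad capability matters here, agency or not.
    return e.makes_advancements and e.cross_domain

slime_mold = Entity(False, False, True, False, False)
current_ai = Entity(True, True, False, False, True)

# The two judgments need not agree with each other, or with any single
# "intelligent?" verdict; that divergence is exactly the information a
# lone boolean would have destroyed.
print(deserves_moral_consideration(slime_mold))  # False
print(could_be_a_threat(slime_mold))             # False
print(deserves_moral_consideration(current_ai))  # False under these rules
print(could_be_a_threat(current_ai))             # True under these rules
```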
(For an example, look no further than the article we’re discussing. Count how many different and wildly contradictory end-states the author suggests are “likely” or “very likely”. The word “intelligence” must conceal a deep pit of confusion for there to be enough space for all these contradictions to co-exist.)