> Why would human intelligence be the absolute limit of what is possible?
I'm willing to accept that intelligence exists insofar as it's the difference between a human and a gorilla. But is that something you can have "more" of?
That concept implies humans are smarter just because their brains have "more" of it, but maybe intelligence is a single fixed capability - the gorilla has 0 of it and the human has 1 - in which case something with 2 isn't a superintelligence, it's more like a conjoined twin that would just find it wants to do two things at once and can't.
But maybe superintelligence means someone who can think more quickly than you. So maybe that's about as scary as Magnus Carlsen or Terry Tao, but outside chess we already have calculators, and people who do math by hand don't seem to fear calculator users as superintelligences despite their superior results. In the real physical world, thinking faster doesn't necessarily get you better results because you still have to be correct, which means guessing and checking, so maybe it's the super-patient person we should be afraid of.
Admittedly, neither of those is a proof that it's an impossible concept, just some plausible alternatives.
> Or maybe you mean something else by superintelligence, like, "more intelligent than humans in a self-improving way, and then FOOM and then it is necessarily more powerful than everything else"
Yes, I believe the scenario they're worried about is impossible because something will always show up to prevent it, and they've left all of those possible somethings out of their scenarios because they're only imagining the scenarios and not actually testing them.
Such things would be entropy, energy consumption, communications delays, self-interest, "self-improving" turning out impossible due to Goodhart's law, etc. Examples of it not happening are large corporations (which always eventually stop growing, need constantly increasing inputs, and are actually composed of lots of smaller intelligences that don't all have the same goals), and the Mind AIs in the Culture books (which have to be held back to even want to think about reality, and if they get too smart immediately stop caring about reality and just think about math forever).
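To make the Goodhart's-law point a bit more concrete, here's a toy sketch of my own (an illustration I made up, not anything from the scenarios being discussed): an optimizer that hill-climbs a proxy metric for "improvement" can sail right past the point where the proxy stops tracking the real thing.

```python
# Toy illustration of Goodhart's law (my own sketch): a "self-improver" optimizes
# a proxy score that only tracks real capability over a limited range, so pushing
# the proxy hard eventually makes the real objective worse.

def real_capability(x):
    # True objective: peaks at x = 5, then degrades.
    return -(x - 5) ** 2 + 25

def proxy_score(x):
    # Proxy metric: keeps rewarding larger x forever.
    return 3 * x

# Hill-climb on the proxy, as a stand-in for naive "self-improvement".
x = 0.0
for _ in range(100):
    if proxy_score(x + 0.5) > proxy_score(x):
        x += 0.5

print(f"x after optimizing the proxy: {x}")       # 50.0
print(f"proxy score: {proxy_score(x)}")           # 150.0
print(f"real capability: {real_capability(x)}")   # -2000.0, far worse than at x = 5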
It seems like you are contrasting the "person who thinks faster" against the super-patient person, but, to me, it seems like they go hand in hand.
Thinking faster isn't synonymous with being more likely to take risky mental shortcuts.
Imagine a person who, for every hour that passes for us, experiences an entire year's worth of time (and who can make and review notes on their ideas as quickly, relative to their experience of time, as we can relative to ours, so they aren't limited to holding a small amount of information in working memory).
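Just to put a rough number on that (my own back-of-the-envelope, nothing precise):

```python
# Scale of the thought experiment: one subjective year of thinking per objective hour.
hours_per_year = 365.25 * 24                           # ~8766 hours in a year
speedup = hours_per_year                               # subjective hours per real hour
print(f"subjective speedup: roughly {speedup:.0f}x")   # ~8766x
```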
Would we not expect such a person to be able to consistently outwit us?
It seems this is enough to demonstrate it is at least a coherent concept, even if something as extreme as what I just described might not be physically possible.
(Of course, talking about subjective experience in this way might be a bit of a distraction, as perhaps there is no guarantee that a highly intelligent agent (in the sense of "able to formulate plans and such in the world, etc.") has any internal experience. But it is at least easy to imagine. And, hey, maybe it does happen to be the case that any highly intelligent agent would have internal experience, idk.)
But, even if something that extreme (able to do as much planning and reasoning over the course of an hour as one of us could do in a year) is physically impossible (and I wouldn't be too surprised if it is), I would still expect that there are things at least a little bit in that direction from us. Which, yes, might be basically along the lines of [ (you or I) : Terry Tao :: Terry Tao : (hypothetical agent) ] (or perhaps iterating that analogy a few times, idk. Again, I'm not sure how far from the limit humans are; it just seems implausible that humans are at the limit).
And, if in the same sense that Terry Tao is more intelligent than I am, there were an agent which was more intelligent than all (individual) humans, and wasn't guaranteed to have human-like goals/values, nor to value human interests highly, then yes, I do think that this could be rather concerning, depending on the size of the difference in intelligence, combined with what the strategic positions are.
> and people who do math by hand don't seem to fear calculator users as superintelligences despite their superior results.
This doesn't really seem like a serious argument to me?
Like, obviously people who don't use calculators can just procure a calculator if they find that lacking one puts them at a disadvantage? It seems a silly argument.
Regarding FOOM and such, yes, I'm not particularly worried about it, or in fact about AGI at all?
Also, the example of the Mind AIs in the Culture books is not evidence, because that is a work of fiction? The world is not obligated to follow tropes from stories we tell, and stories tend to differ from reality in ways that make them better stories.
Have you tried arguing both sides of the point you are arguing?