He originally made the argument about gender, not intelligence. I think he was arguing for a whole class of properties for which there's no difference between authenticity and convincing fakery.
I think the point is less that there is a truth and we're too dumb to figure it out, and more that in certain circumstances we'll just have to accept a lower bar for evidence about whether those properties apply.
It reminds me of how no class of computer can solve the halting problem for itself. No matter how intelligent you are, there will be holes like this in your certainty about some things.
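To make that concrete, here's a minimal sketch of the classic diagonalization argument behind the halting problem. The `halts` oracle is hypothetical; it exists only to be contradicted:

```python
def halts(f, x):
    """Hypothetical oracle: True iff f(x) halts. Cannot actually exist."""
    raise NotImplementedError

def troublemaker(f):
    # Do the opposite of whatever the oracle predicts f does
    # when fed its own source.
    if halts(f, f):
        while True:   # oracle says "halts", so loop forever
            pass
    else:
        return        # oracle says "loops", so halt immediately

# Does troublemaker(troublemaker) halt? If the oracle answers True,
# it loops; if it answers False, it halts. Either way the oracle is
# wrong, so no machine of the same class can implement halts() for
# machines like itself.
```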
Or, to put it another way: it's not a binary problem, it's a probability continuum.
Even the definition of 'human intelligence' is a continuum, from the smartest to the dumbest of us, and it doesn't stop there: it descends through all animal life.
I did some research to prove you wrong, because I don't think 'continuum' is the right concept, but it turns out that Turing seems to agree with you. Quoting him in "Computing Machinery and Intelligence":
> In short, then, there might be men cleverer than any given machine, but then again there might be other machines cleverer again, and so on.
So now I think you're both wrong :) In particular, I take issue with the assumption that the "cleverer" relation is a total order, i.e. that any two intelligences can be compared at all. We've only really studied a few relations in this space:
- pushdown automata are cleverer than finite state machines (see the sketch after this list)
- Turing machines are cleverer than pushdown automata
- humans are cleverer than Turing machines (I'd argue for this, but others would disagree)
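As a concrete instance of the first rung, here's a sketch in Python: balanced parentheses need unbounded counting, which a stack provides (degenerating to a counter for this particular language) and which no finite set of states can:

```python
def balanced(s: str) -> bool:
    """Recognize balanced parentheses: within a PDA's power,
    beyond any FSM's."""
    depth = 0
    for ch in s:
        if ch == '(':
            depth += 1      # push
        elif ch == ')':
            depth -= 1      # pop
            if depth < 0:   # a ')' with nothing to match
                return False
    return depth == 0

assert balanced("(()())")
assert not balanced("(()")
# An FSM with k states can't tell "(" * k from "(" * (k + 1), so for
# any fixed k it must misclassify some string; unbounded counting is
# exactly what the stack buys you.
```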
Presumably there are other points which we have overlooked or not yet discovered. For instance, maybe something which has the "memory" quality of a pushdown automaton but lacks the "state tracking" property of a finite state machine. Compared with an FSM, such a thing would be neither more nor less clever; it would be clever in an orthogonal way.
I strongly suspect that two intelligences (of greater power than the theoretical machines we have so far) could meet and discover that they each have a capability the other lacks. That situation couldn't be mapped onto a continuum; you'd need something with branches, a tree or a DAG or a topological space: something on which the two intelligences could be considered cousins, neither possessing more capabilities than the other, but each possessing different ones. (Unlike the FSM example, they would have to share some capabilities, otherwise they couldn't recognize each other as intelligent.)
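One way to make "cousins" concrete is to model each intelligence as a set of capabilities and compare them by set inclusion, which gives a partial order rather than a total one. A toy sketch (the capability names are invented for illustration):

```python
# Toy model: each intelligence is just its set of capabilities,
# ordered by set inclusion. The capability names are made up.
a = frozenset({"language", "arithmetic", "echolocation"})
b = frozenset({"language", "arithmetic", "long-term planning"})

cleverer = a > b    # proper superset: a can do all b can, and more
dumber = a < b      # proper subset: b can do all a can, and more
comparable = cleverer or dumber or a == b

print(comparable)   # False: a and b are cousins, not rungs on a ladder
print(a & b)        # the shared capabilities that let them recognize
                    # each other as intelligent at all
```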
Further, I suspect that in order to adequately classify both intelligences as cousins, you'd have to be cleverer than both. Each of the cousin-intelligences would be able to prove, among themselves, that theirs is the superior kind, but they'd also have to doubt those proofs, because the unfamiliar intelligence would be capable of spooky things which the familiar intelligence is not.
I mean, an evolutionary tree where intelligence features are added along some branches makes sense.
I guess part of what I was trying to address is that we like to think of intelligence as what people do, with humans as its pinnacle, and to discount anything that is not covered by that.
I definitely agree that defining intelligence as what humans do is a problematic practice. I guess I just wanted to nitpick a little.
There's definitely a lot of "it's not real intelligence because it's not human intelligence" going around these days. Doesn't seem like it's going anywhere useful though.