I may have been unclear about how I am defining this, but it is not unclear to me and I don't see how "going down that path" automatically makes it unclear.
Humans have a bad habit of imposing third-party points of view on behavior and reasoning. Until we get better at understanding reasoning from a first-person point of view for humans, we are going to have trouble figuring out how to write effective algorithms for AI.
I think you are contradicting yourself: you say both
>humans don't really do intelligence all that well themselves
>[(the definition of) intelligence] is not unclear to me
because knowing the definition of something and knowing that thing are largely the same. So you claim to know intelligence, yet you also say humans in general don't.
In your previous reply you used the word "intelligence" in a manner which had assumptions in it (this is, after all, how humans communicate). "AI" uses the same word with an overlapping but different set of assumptions.
Not that your reply answers my actual question, but I would be interested in knowing what you believe my assumptions were and how these differ from those used in AI.
I try to avoid making assumptions, including what your assumptions in your definitions of your two uses of the word "intelligence" were. If you used them coherently then that's wonderful but their definitions are not distinct outside your head. Their general (ie. dictionary/scientific) definitions are not absolute.
I'm an intelligent person. I know lots of intelligent people who are stupid and I know lots of stupid people who are intelligent. And I'm one of them, and I don't even know which one.
Well, that's a rather weasel-y non-answer, but it sounds to me like you think I am calling people "stupid", and that isn't what I am doing. I do know something about the background of intelligence testing and whatnot for humans. That definition of intelligence is inherently problematic.
Again: my point is that people far too often frame things from a third-party point of view. This inherently causes problems in decision-making. Sometimes humans can muddle through anyway, in spite of that default standard. But AI is much less likely to muddle through when coded that way.
If you (or anyone) would like to engage that point, awesome! Otherwise, I think I am done here.