I'm okay with this: humans likely make hilariously bad guesses about things that are obvious and easily accessible to machines, and therefore the reverse is also true.
Guessing "what are trousers" for king of Egypt isn't in itself an indicator the whole Watson system is flawed. Although you're right: it's an indicator the intelligence is non human-like.
Just like, from Watson's perspective, a human named John making hilariously bad guesses related to coin flips isn't in itself an indicator that John isn't intelligent either.
Just that there are some categories of knowledge, or applications of that knowledge, that some systems are bad at handling.
> I'm okay with this: humans likely make hilariously bad guesses about things that are obvious and easily accessible to machines, and therefore the reverse is also true.
Yes, it's almost perfectly dual: the things we do easily, without thinking, are hard for machines. Many things that we can only do with years of training, machines do effortlessly.
I think technology like Watson has a bright future when applied in the right way, but it's counterproductive to wrap it in anthropomorphic marketing, and especially to give it these direct natural-language interfaces, because that makes people misunderstand what it is.
That's really a choice. I mean, most machine learning models wind up outputting a confidence distribution over the possible outputs, so it's up to the user to decide how to extract an answer from that distribution. They can and do report low confidence when they aren't sure.
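To make that concrete: a classifier's softmax output is exactly such a distribution, and whether the system answers or abstains is a policy layered on top of it. Here's a minimal Python sketch; the candidate answers, the scores, and the 0.5 threshold are all made-up illustrations, not anything from Watson:

```python
import numpy as np

def softmax(logits):
    """Turn raw model scores into a confidence distribution over outputs."""
    shifted = logits - np.max(logits)  # subtract the max for numerical stability
    exp = np.exp(shifted)
    return exp / exp.sum()

def extract_answer(logits, labels, threshold=0.5):
    """Answer only when the top candidate clears a confidence bar.

    The threshold is a user-side policy decision, not a property of the
    model: the same distribution supports an always-guess strategy or a
    cautious abstain-when-unsure one.
    """
    probs = softmax(np.asarray(logits, dtype=float))
    best = int(np.argmax(probs))
    if probs[best] < threshold:
        return None, float(probs[best])  # abstain: the model isn't sure
    return labels[best], float(probs[best])

# Hypothetical candidate answers and raw scores, purely for illustration.
labels = ["Tutankhamun", "pyramids", "trousers"]
answer, confidence = extract_answer([2.1, 0.3, -1.0], labels)
print(answer, f"({confidence:.2f})")  # -> Tutankhamun (0.83)
```

As I recall, Watson itself worked this way on Jeopardy: it only buzzed in when its top answer's confidence cleared a threshold, and its strangest guesses tended to surface in situations where the rules forced it to answer regardless.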
Guessing "what are trousers" for king of Egypt isn't in itself an indicator the whole Watson system is flawed. Although you're right: it's an indicator the intelligence is non human-like.
Just like, from Watson's perspective, a human named John making hilariously bad guesses related to coin flips isn't in itself an indicator that John isn't intelligent either.
Just that there are some categories of knowledge or application of that knowledge that some systems are bad at handling.