> My gut feeling is that it is hubris to think that we are going to "figure out" intelligence with increasingly sophisticated mathematical models anytime soon.
We did it already. Computers understand language, translate it, and react to it. They can recognize items in a picture. Is there a task left that can't be done by computers better and faster than by humans?
>Almost by definition, if we can analytically understand it, it's not going to be interesting enough.
I think current ML is magic. I understand the math behind it. But still, I'm amazed every time when the training is over and it actually works as intended. Everything which is big enough is more than the sum of its parts.
>I'm amazed every time when the training is over and it actually works as intended. Everything which is big enough is more than the sum of its parts.
What about when it doesn't work as intended and fails ridiculously, even though it usually works perfectly well?
Humans do the same thing: they don't work as intended some of the time. It's just that the failure modes for ML are different, so we see them as ridiculous.
Well, if those different failure modes mean that a machine can't tell the difference between something that obviously resembles a turtle and a rifle, or a cat and guacamole, then anyone who watched the video is better at this than the machine. You can call it a different failure mode, but these are things no human would misclassify unless seriously ill, and being able to classify simple objects is central to our day-to-day lives.
Imagine some sort of Robocop deciding to neutralize someone for holding a turtle toy.
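For what it's worth, the turtle/rifle confusion isn't random flakiness; it comes from adversarial perturbations, where a tiny, targeted change to the input flips the model's prediction. Here's a minimal sketch of the idea on a made-up linear classifier (not the actual turtle/rifle attack, and all numbers are invented for illustration): because the model is linear, stepping along the sign of the gradient is guaranteed to cross the decision boundary once the step exceeds the margin.

```python
# Sketch: why a tiny perturbation can flip an ML classifier's prediction
# (the mechanism behind turtle->rifle attacks), on a toy linear model.
import numpy as np

rng = np.random.default_rng(0)

# A fixed "trained" 2-class linear classifier over 1000-dim inputs.
W = rng.normal(size=(2, 1000))

def predict(x):
    return int(np.argmax(W @ x))

# A random input; make sure its starting prediction is class 0
# (the logits are linear in x, so negating x negates the logit gap).
x = rng.normal(size=1000)
gap = (W[1] - W[0]) @ x          # logit_1 - logit_0; negative => class 0
if gap > 0:
    x, gap = -x, -gap

# Gradient of the logit gap w.r.t. x is exactly W[1] - W[0] here.
grad = W[1] - W[0]

# Per-coordinate step just big enough to cross the decision boundary:
# stepping eps * sign(grad) raises the gap by eps * sum(|grad|).
eps = 1.1 * abs(gap) / np.abs(grad).sum()
x_adv = x + eps * np.sign(grad)

print(predict(x), predict(x_adv))  # prints: 0 1  -- the label flips
print(eps)  # the per-coordinate nudge is tiny relative to |x_i| ~ 1
```

The point of the sketch: the required per-coordinate change shrinks as the input dimension grows, which is why on megapixel images the perturbation can be invisible to a human while completely changing the model's answer. Humans don't share this failure mode because we don't classify by thresholding one linear score over raw pixels.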
It would probably be better to give a task example like being a mother, a father, a grandparent, an uncle, an aunt, a mentor, a friend, or anything else involving human interaction.
Show this leopard sofa to 100 humans for 1 second each and see how many will make the same mistake. We are 99% there and you point to the 1% to prove how badly we failed. Sure, the error rate will go down even further in the coming years. But what we have is already more than good enough to be used in commercial products. It's not like we are trying to win a contest of man vs. machine (which we will win, or already have).
That's a poor example, since self-checkout scanners have existed for a while now. But notice how they aren't used exclusively. The bigger orders still require the manned scanners, and the self-checkout always has someone on duty.
A better example is plumbing. How would you go about automating a human plumber who handles all sorts of piping and crawl spaces in a large variety of settings?
It was an example of a job which can be automated (and as you pointed out, already is), but which is still dominated by human workers. You suggested that the jobs still done by humans can't be automated. My response is: most (or at least some) can be, but aren't.
Sure, plumbing is a much more challenging example. But that is an economic problem, not a technical one: the cost of automating plumbing is much higher than the utility of doing so.
1. We haven't 'figured out' intelligence, far from it; we don't even know what we don't know.
Your comment sounds like Lord Kelvin proclaiming that physics was "over" a couple of years before people figured out there were huge holes in the theory, holes which eventually led to quantum mechanics. Our understanding of intelligence is probably less complete than our understanding of physics was _back then_.
2. ML is really underwhelming if you measure it against actually intelligent behavior. Figuring out cool regression mechanisms is neat and all, but that's all it is, and it has nothing to do with intelligence in the general sense. Much like expert systems had nothing to do with actual domain knowledge, these are just some of the most primitive models, low-hanging fruit that we could exploit.