I'm asking why it's so impressive that it learns by watching a person. Specifically, why we think Terry Winograd would be impressed that 50 years of AI research led to this.
Didn't Deep Blue do the same thing as part of its chess training in the 1990s? Social/observational/modeling learning has been studied in psychological research for over 50 years: https://en.wikipedia.org/wiki/Social_learning_theory Isn't basically every recommender system, all of the vision systems, and the machine translation we all use... isn't all of that just observational learning?
I'm not saying it's not interesting. It's a cool toy. But Elon Musk gets an idea for a neat machine learning toy, using an idea that predates AI as a field, and Terry Winograd is supposed to be impressed by this?
The point is, AI has only been narrowly successful. Narrow success is rad. I'm not hating. But none of the promises of broad intelligence have really progressed. Siri is not a particularly meaningful advance beyond SHRDLU. Instead of just stacking blocks, she stacks text on Messages.app and pushes buttons in Weather.app.
What's interesting about this particular paper is that it can learn from a single demonstration, instead of needing thousands of examples before it can do anything. Even cooler, it learns how to learn: nobody hand-coded the learning procedure, it figured it out through training. On top of that, it uses machine vision to learn to see all on its own, without being hand-coded with information about where things are.
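For intuition, here's a minimal sketch of what "conditioning on a single demonstration" looks like. To be clear, this is a toy of my own, not the paper's actual architecture: the policy network takes both the current observation and an embedding of one demo trajectory, and because it's trained across many tasks, the demo encoder ends up doing the "learning" at test time without any gradient updates.

```python
import torch
import torch.nn as nn

class DemoConditionedPolicy(nn.Module):
    """Toy one-shot imitation policy (illustrative only, not the paper's model).

    The policy sees (a) the current observation and (b) an embedding of a
    single demonstration trajectory. "Learning to learn" happens because
    training spans many tasks: gradient descent shapes the demo encoder so
    that, at test time, one demo of an unseen task is enough to act.
    """

    def __init__(self, obs_dim=8, act_dim=2, hidden=64):
        super().__init__()
        # Encode a whole demo (sequence of obs-action pairs) into one vector.
        self.demo_encoder = nn.GRU(obs_dim + act_dim, hidden, batch_first=True)
        # The policy conditions on current obs + demo embedding.
        self.policy = nn.Sequential(
            nn.Linear(obs_dim + hidden, hidden),
            nn.ReLU(),
            nn.Linear(hidden, act_dim),
        )

    def forward(self, obs, demo):
        # demo: (batch, T, obs_dim + act_dim) -- one trajectory per task
        _, h = self.demo_encoder(demo)   # h: (1, batch, hidden)
        ctx = h.squeeze(0)               # (batch, hidden)
        return self.policy(torch.cat([obs, ctx], dim=-1))

# Meta-training: behavioral cloning across many tasks, one demo per task.
policy = DemoConditionedPolicy()
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)
obs = torch.randn(32, 8)            # current observations (batch of tasks)
demo = torch.randn(32, 10, 8 + 2)   # one 10-step demo per task
target_act = torch.randn(32, 2)     # expert actions to imitate
loss = ((policy(obs, demo) - target_act) ** 2).mean()
opt.zero_grad(); loss.backward(); opt.step()
```

The upshot is that "learning a new task" becomes a single forward pass through the demo encoder; all the slow gradient-descent learning happened once, at training time, across many tasks.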
More generally, it uses a deep neural network, which is very different from the older approaches you mention: it can learn much more sophisticated functions, and that has enabled a lot of results that would have been unimaginable previously.
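As a concrete (if well-worn) illustration of "more sophisticated functions": XOR is the textbook function a single linear layer provably cannot represent, while a small network with one hidden layer learns it in seconds. This is my own toy example, not anything from the paper.

```python
import torch
import torch.nn as nn

# XOR: a linear model can't represent it, a tiny deep network learns it.
X = torch.tensor([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = torch.tensor([[0.], [1.], [1.], [0.]])

net = nn.Sequential(nn.Linear(2, 8), nn.Tanh(), nn.Linear(8, 1))
opt = torch.optim.Adam(net.parameters(), lr=0.05)
for _ in range(2000):
    loss = nn.functional.mse_loss(net(X), y)
    opt.zero_grad(); loss.backward(); opt.step()

print(net(X).round().flatten())  # ~ [0, 1, 1, 0]
```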
As for the early AI researchers? They were insanely optimistic about how easy AI would be. It didn't seem like a hard problem: they famously thought they could assign a bunch of graduate students to solve machine vision over a summer. It seemed simple enough. We see perfectly well without even thinking about it, so how hard could it be?
But after sinking their teeth into it a bit, I'm sure they would appreciate our modern methods, even if they aren't as elegant as they'd hoped. They wanted "top down" solutions to AI: simple algorithms built on symbolic manipulation, logic, and reasoning. Such an algorithm probably doesn't exist, and an enormous amount of the history of AI was wasted searching for it.
And even if they did discover our modern methods much earlier, they wouldn't have been able to use them. It's only recently that computers have gotten fast enough to do anything interesting. It's like they were trying to go to the moon with nothing but hand tools. Sometimes you just need to wait for the tech tree to unlock the prerequisite branches first.