I spent a lot of time in the 1980s experimenting with Roger Schank's and Chris Riesbeck's Conceptual Dependency Theory, a theory that isn't much thought of anymore, but at the time I thought it was a good notation for encoding knowledge, along with related ideas like case-based reasoning.
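For anyone who never ran into CD, here is a minimal sketch in Python of the flavor of the notation; the class and slot names are my own illustration, not any particular implementation, but the primitive acts (ATRANS for transfer of possession, PTRANS for transfer of location, MTRANS for transfer of information) are Schank's:

    # Minimal sketch of a Conceptual Dependency structure (illustrative names).
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class CD:
        act: str                            # primitive act: ATRANS, PTRANS, MTRANS, ...
        actor: Optional[str] = None
        obj: Optional[str] = None
        source: Optional[str] = None        # the "from" slot
        destination: Optional[str] = None   # the "to" slot

    # "John gave Mary a book" -> a transfer of possession (ATRANS)
    gave = CD(act="ATRANS", actor="John", obj="book",
              source="John", destination="Mary")

    # "John sold Mary a book" would add a reciprocal ATRANS of money in the
    # other direction -- the kind of shared structure CD was meant to expose.
    print(gave)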
Having helped a friend use Watson a year ago, I sort of agree with Schank's opinion in this article. IBM Watson is sound technology, but I think it is hyped in the wrong direction, and overhyped. This seems like a case of business-driven rather than science-driven descriptions of IBM Watson. Kudos to the development team, but perhaps not to the marketers.
Really off topic, but as much as I love the advances in many-layer neural networks, I'm sorry not to also see tons of resources aimed at what we used to call 'symbolic AI.'
Back in the '80s, there were two AI camps: syntax and semantics. My favorite for syntax was Terry Winograd; for semantics, Roger Schank. Back then, I was trying to map AI onto biology. Syntax was easier; you could readily map the edges onto a McCulloch-Pitts neural model. Semantic nets were harder; the edges were more of a way of modeling symbolic relationships. So I couldn't wrap my head around Schank. Wish I had; it felt like I was missing the point.
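To make the contrast concrete, a toy sketch (my own illustration, not anyone's published model): a McCulloch-Pitts unit is just a thresholded sum over numerically weighted edges, which maps naturally onto neurons, while an edge in a semantic net carries a symbolic label with no obvious numeric interpretation:

    # Toy McCulloch-Pitts unit: fires iff the weighted sum of binary inputs
    # reaches a threshold. The edges are plain numbers, so they map naturally
    # onto a neural substrate.
    def mp_neuron(inputs, weights, threshold):
        return 1 if sum(i * w for i, w in zip(inputs, weights)) >= threshold else 0

    print(mp_neuron([1, 1, 0], [1, 1, 1], threshold=2))   # 1: behaves like AND of the first two inputs

    # An edge in a semantic net, by contrast, is a labelled relation between
    # symbols; there is no obvious weight or activation to attach to it.
    semantic_net = {
        ("canary", "isa"):   "bird",
        ("bird",   "can"):   "fly",
        ("canary", "color"): "yellow",
    }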
Fortunately, there is still a good amount of money being spread across different AI techniques. Not from every funding source, but the traditional ones are still following their usual model of hedging their bets somewhat conservatively. It's mostly industry, and closely industry-aligned nonprofits like OpenAI, that are going 100% all-in on nothing but deep networks. But those kinds of groups have always been a bit short-term driven and susceptible to hype; in the '80s they were putting all their money into expert systems.
If you look at which AI projects the National Science Foundation is funding (or in Europe, the Horizon 2020 program), it's a lot more diverse. Even just in machine learning they're not putting all their eggs into the deep neural nets basket, with considerable funding going to the other major areas of ML (e.g. Bayesian methods). Symbolic methods have a decent amount of funding too, including some explicitly "cognitive systems" grants. Some other symbolic techniques are still funded but not as much by "AI" bodies, e.g. the logic-based branch of AI now gets a decent amount of its funding from the software engineering community, because verification is a big application of solver / theorem-prover techniques.
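As a concrete illustration of that verification angle, here is a tiny example using the Python bindings of the Z3 SMT solver (assuming the z3-solver package is installed); asking the solver for a counterexample and getting "unsat" back is the bread and butter of these tools:

    # Ask Z3 for a counterexample to "x > 0 and y > 0 implies x + y > 0".
    # "unsat" means no counterexample exists, i.e. the property holds.
    from z3 import Ints, Solver, And, Not, Implies

    x, y = Ints('x y')
    claim = Implies(And(x > 0, y > 0), x + y > 0)

    s = Solver()
    s.add(Not(claim))
    print(s.check())   # unsat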
Attending this past year's AAAI in Phoenix was kind of funny in that respect. Maybe 95% of the consultants and recruiters there were solely interested in hiring people to tweak deep networks. But the scientific side of the conference didn't look quite like that.
Do you still think it has merit? I would be curious to know what parts of it you think are worth adopting (or have been) into modern theories.
Personally I'd love to see more research go into forms of symbolic AI that can deal with uncertainty, probability, partiality, etc. The Research VP of Cycorp told me he wanted to go in that direction with Cyc, but couldn't find anyone with a promising and rigorous proposal. I think those sorts of considerations often lead either to pure theory with no thought to implementation, or to the worst kind of ad hoc "fuzzy logic" (scare quotes because fuzzy logic itself is a proper mathematical discipline).
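To illustrate the kind of thing I mean (a toy sketch of my own, not anything Cycorp has proposed), one direction is to attach probabilities to symbolic rules and combine them with a principled rule like noisy-OR, rather than bolting on ad hoc fuzziness:

    # Toy example: symbolic rules carrying probabilities, combined with
    # noisy-OR instead of an ad hoc fuzzy min/max.
    rules = [
        ("has_feathers(X) -> bird(X)", 0.9),
        ("lays_eggs(X)    -> bird(X)", 0.6),
    ]

    def noisy_or(probs):
        p_none = 1.0
        for p in probs:
            p_none *= (1.0 - p)
        return 1.0 - p_none

    # If both antecedents hold for some X, the combined belief in bird(X):
    print(noisy_or([p for _, p in rules]))   # 0.96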