>>I can't be the only one that considers all these AI articles just smoke-and-mirrors puff pieces to prop up these companies' value by capitalizing on the hype (hysteria?)
You're not.
So far I have not seen/heard anything remotely resembling AI. Neural nets are just weighted graphs.
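The "weighted graphs" point can be made concrete: a feed-forward net really is just nodes, weighted edges, and a nonlinearity applied at each node. A minimal sketch (the weights below are hypothetical, chosen only for illustration):

```python
# "Neural nets are just weighted graphs": a one-hidden-layer network is
# nothing but weighted sums along edges plus a nonlinearity at each node.
import math

def forward(x, w1, w2):
    """Weighted sums into the hidden layer, sigmoid at each node,
    then a weighted sum into a single output node."""
    hidden = [1 / (1 + math.exp(-sum(wi * xi for wi, xi in zip(row, x))))
              for row in w1]
    return sum(wo * h for wo, h in zip(w2, hidden))

# Hypothetical edge weights, purely for illustration.
print(forward([1.0, 0.5], [[0.2, -0.1], [0.4, 0.3]], [0.7, -0.2]))
```

Everything "learning" adds on top is a procedure for adjusting those edge weights, which is why the graph view alone understates what these systems do.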
Is anyone arguing that machine learning is not AI? The gist of the article is that Google is leaning toward ML/DL and away from the rules engines/Knowledge Graph. The headline is a shorthand, which, although it could be more precise, is not inaccurate.
Interesting that you feel that. The article mentions nothing about Google's Knowledge Graph. I don't have any privileged insight into Google, just the same surface data as the rest of you all - but I would say that, if anything, Google's Knowledge Graph can fit with _both_ a rules engine strategy and a machine learning one.
How is Google going to "organise the world's information" unless it has a model of how all the facts in the world line up? That model is the Knowledge Graph. How does Google intend to map queries that it has never seen before to pages in its vast index? With the help of the Knowledge Graph and natural language processing and machine learning.
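The kind of model being described can be caricatured as a store of subject-predicate-object triples that queries get matched against. A toy sketch (nothing like Google's actual implementation, and the facts below are just illustrative):

```python
# Toy knowledge graph: facts as (subject, predicate, object) triples,
# queried by simple pattern matching with None as a wildcard.
triples = {
    ("Paris", "capital_of", "France"),
    ("France", "located_in", "Europe"),
    ("Paris", "population", "2.1M"),
}

def query(subject=None, predicate=None, obj=None):
    """Return every triple matching the given fields (None = match anything)."""
    return [(s, p, o) for (s, p, o) in triples
            if (subject is None or s == subject)
            and (predicate is None or p == predicate)
            and (obj is None or o == obj)]

print(query(predicate="capital_of"))  # -> [('Paris', 'capital_of', 'France')]
```

The brittleness argument in this thread is about where those triples come from: humans have to assert them, and the store only knows what it has been told.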
I'm going to try to articulate something here that I've not fully worked out but that I'm sort of intuiting so cut me some slack for the next paragraph :)
People keep banging on about machine learning and the impact that it is having. This is undeniable. But we can see even from AlphaGo that a hybrid approach that combines artificial neural nets with some sort of symbolic system outperforms neural nets on their own. For AlphaGo that symbolic system is tied to the mechanics of the game of Go. For internet search that symbolic system is a generalised knowledge graph.
Do you get what I mean? I'd love to hear what others think …
The article doesn't mention Google's Knowledge Graph by name. But that is what the reporter is referring to in sentences such as these, which mention "a strict set of rules set by humans":
> But for a time, some say, he [Singhal] represented a steadfast resistance to the use of machine learning inside Google Search. In the past, Google relied mostly on algorithms that followed a strict set of rules set by humans.
I know because I spoke with Metz at length and was quoted in the article.
The Knowledge Graph was, by definition, a rules engine. It was GOFAI in the tradition of Minsky, the semantic web and all the brittleness and human intervention that entailed.
What he's saying here is that Google has relied on machine learning in the form of RankBrain to figure out which results to serve when it's never seen a query before. And the news, in this case, is that statistical methods like RankBrain will take a larger and larger role, and symbolic scaffolding like the Knowledge Graph will take a smaller one.
You are right that the most powerful, recent demonstrations of AI combine neural nets with other algorithms. In the case of AlphaGo, NNs were combined with reinforcement learning and Monte Carlo Tree Search. I don't think a rules engine (the symbolic system you refer to) was involved at all there. Nor is it necessary, if by studying the world our algorithms can intuit its statistical structure and correlations without having them hard-coded by humans beforehand. It turns out they do OK learning from scratch, given enough data.
So in many cases we don't need the massive data entry of a rules engine created painstakingly by humans, which is great, because those are brittle and adapt poorly to the world if left to themselves.
The Knowledge Graph is just a way of encoding the world's structure. The world may reveal its structures to our neural networks, given enough time, data and processing power.
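For what it's worth, the Monte Carlo Tree Search component mentioned above is not a rules engine either: the heart of it is a statistical selection rule (UCT) that trades off exploiting a branch's observed value against exploring it less-visited ones. A minimal sketch of just that selection step, independent of any game:

```python
# UCT selection, the core of Monte Carlo Tree Search: pick the child
# maximizing mean value plus an exploration bonus.
import math

def uct_select(children, c=1.4):
    """children: list of dicts with 'visits' and 'value' (total reward).
    Returns the child with the highest UCT score."""
    total = sum(ch["visits"] for ch in children)
    def score(ch):
        if ch["visits"] == 0:
            return float("inf")  # always try unvisited children first
        exploit = ch["value"] / ch["visits"]
        explore = c * math.sqrt(math.log(total) / ch["visits"])
        return exploit + explore
    return max(children, key=score)

children = [{"visits": 10, "value": 6},
            {"visits": 3, "value": 2},
            {"visits": 0, "value": 0}]
print(uct_select(children))  # the unvisited child wins on the exploration bonus
```

No human-authored world knowledge appears anywhere in it, which is the point being made: the "symbolic" part of AlphaGo is search machinery, not an encoded model of facts.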
Hmm, are you sure? Doesn't "a strict set of rules set by humans" refer to the PageRank algo alongside rules for spammy content, and rules like whether meta keywords are set, and so on, all the little rules that feed into deciding where a matching page ranks in the result set? That's why it's tweakable by engineers, no?
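PageRank itself is a good example of a hand-set rule with a simple mechanical core: rank flows along links, damped by a human-chosen constant. A bare-bones power-iteration sketch on a tiny hypothetical link graph:

```python
# Bare-bones PageRank power iteration on a toy link graph.
def pagerank(links, damping=0.85, iters=50):
    """links: dict mapping each page to the list of pages it links to."""
    nodes = list(links)
    n = len(nodes)
    rank = {v: 1.0 / n for v in nodes}
    for _ in range(iters):
        new = {v: (1 - damping) / n for v in nodes}
        for v, outs in links.items():
            if not outs:  # dangling page: spread its rank evenly
                for u in nodes:
                    new[u] += damping * rank[v] / n
            else:
                for u in outs:
                    new[u] += damping * rank[v] / len(outs)
        rank = new
    return rank

ranks = pagerank({"a": ["b", "c"], "b": ["c"], "c": ["a"]})
print(max(ranks, key=ranks.get))  # "c" collects the most inbound weight
```

The damping factor and every anti-spam adjustment layered on top are exactly the sort of engineer-tweakable knobs the quoted sentence could be describing.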
"The Knowledge Graph is just a way of encoding the world's structure." Precisely. Very well said. "The world may reveal its structures to our neural networks, given enough time, data and processing power." But that's the point, NNs don't have to perform this uncovering because we do the hard work for them in the form of Wikidata and Freebase and what have you. I don't get what you think is brittle about this.
I was referring to the very recent article[1] by Gary Marcus, I need to quote a good chunk:
"""To anyone who knows their history of cognitive science, two people ought to be really pleased by this result: Steven Pinker, and myself. Pinker and I spent the 1990’s lobbying — against enormous hostility from the field — for hybrid systems, modular systems that combined associative networks (forerunners of today’s deep learning) with classical symbolic systems. This was the central thesis of Pinker’s book Words and Rules and the work that was at the core of my 1993 dissertation. Dozens of academics bitterly contested our claims, arguing that single, undifferentiated neural networks would suffice. Two of the leading advocates of neural networks famously argued that the classical symbol-manipulating systems that Pinker and I lobbied for were not “of the essence of human computation.”""
For Marcus the symbolic system in AlphaGo _is_ Monte Carlo Tree Search. I'm saying that for the so-called Semantic Web the symbolic system is the Knowledge Graph. This Steven Levy article[2] from Jan. 2015 put the queries that evoke it at 25% back then. I figure it's more now and growing slowly, alongside the ML of RankBrain.