I think the mere fact that the OP feels the need to state that (paraphrasing) "additional techniques besides deep learning will likely be necessary to reach AGI" reveals just how deeply the hype has infected the research community. This overblown self-delusion infects reporting on self-driving cars, automatic translation, facial recognition, content generation, and any number of other tasks that have reached the sort-of-works-but-not-really point with deep learning methods. However rapid recent progress has been, these things won't be "solved" anytime soon, and we keep falling into the trap of believing the hype based on toy results. It would be better for researchers, investors, and society to be a little more skeptical of the claim that "computers can solve everything, we're 80% of the way there, just give us more time and money, and don't try to solve the problems any other way while you wait!"
Agreed. The hype surrounding machine learning is quite disproportionate to what's actually going on. But it's always been that way with machine learning -- maybe because it captures the public's imagination like few other fields do.
And there are definitely researchers, top ones no less, who play along with the hype -- very likely to secure more funding, and more attention for themselves and the field. Which has turned out to be quite an effective strategy, if you think about it.
The other upside of this hype is that it ends up attracting a lot of really smart people to work in this field, because of the money involved. So each hype cycle leads to greater progress.
The crash afterwards might slow things down a bit, particularly in the private sector. But the quantum of government funding available changes much more slowly, and could well last until the next hype cycle starts.
>> The other upside of this hype is that it ends up attracting a lot of really smart people to work in this field, because of the money involved.
The hype certainly attracts people who are "smart" in the sense that they know how to profit from it, but that doesn't mean they can actually do useful research. The result is, as the other poster says, a huge number of papers that claim to have solved really hard problems which of course remain far from solved; in other words, so much useless noise.
It's what you can expect when you see everyone and their little sister jumping on a bandwagon when the money starts pouring in. Greed is great for making money, but not so much for making progress.
> The result is, like the other poster says, a huge number of papers that claim to have solved really hard problems, which of course remain far from solved; in other words, so much useless noise.
Could the answer be holding these papers to a stricter standard during peer review?
Ah. To give a more controversial answer to your comment: you are asking, very reasonably, "isn't the solution to a deficit of scientific rigour to increase scientific rigour?"
Unfortunately, while machine learning is a very active research field that has contributed much technology, certainly to industry but also occasionally to the sciences, it has been a long time since anyone has successfully accused it of science. There is not so much a deficit of scientific rigour as a complete and utter disregard for it.
Machine learning isn't science. It's a bunch of grown-up scientists banging their toy blocks together and gloating for having made the tallest tower.
Machine learning researchers publish most of their work on arXiv first (and often, only there), so peer review will not stop wild claims from being publicised and overhyped. The popular press helps with that, as do blogs and YouTube channels that present the latest splashy paper for a lay audience (without, of course, any attempt at critical analysis).
As for traditional publications in the field, these have often been criticised for their preference for work reporting high performance. In fact, showing improved performance over some previous work is pretty much a requirement for publication in the most prestigious machine learning conferences and journals. This strongly motivates researchers to focus on one-off solutions to narrow problems, so that they can typeset one of those classic comparison tables with the best results highlighted and claim a new record on some benchmark.
This has now become the norm, and it's difficult to see how it will change any time soon. Most probably the field will need to go through a serious crisis (an AI winter, or something of that magnitude) before things really change.