Is OP implying we are entering another AI Winter after this most recent hype cycle for general AI?
Considering we use neural networks to encode audio (in Opus), transcribe our speech, secure our homes and much more, this most recent AI wave has been quite productive.
All AI waves have been very productive. The previous ones gave us huge advances in programming language design, operating systems, networking, and HCI, and spawned a few programming paradigms.
None of that really translates into progress toward the final objective, though.
Early AI research coincided directly with the development of early computer systems. Lisp and friends (and the small but important ideas that came with them, ranging from string interning to garbage collection to the macro system to the fundamental notion of compilers and interpreters) were largely developed by people involved in Symbolic AI research.
Early programming-language parsing research was a direct product of work on natural language processing; in fact, the grammar formalism behind BNF grew out of work on natural-language grammars and was later adapted and improved for programming languages.
The idea of logic programming, with Prolog and friends, comes directly from AI research.
Most of the search algorithms we use, often unknowingly, in various machines have their origins in the first AI wave.
Human-computer interaction research was directly tied to the development of fundamental ideas in speech synthesis, graphical user interfaces, and computer graphics.
All in all, applied AI and computing developed together, and many of the early ideas spearheaded by AI transferred into general computing, ideas so trivial that we do not even think about them now. But they were not so trivial when they were developed, specifically for AI.
I did not post it to make any prediction; I just found it curious that the term “AI Winter” was coined so long ago. The article also highlights important historical moments in AI development and how investors appear to overreact to AI results, both good and bad.
The first AI winter came from people making grandiose claims about real AI being 'just around the corner' as soon as we had figured out whether to call the main loop 'Ego' or 'Frank'.
That, of course, went nowhere. The current AI revolution has produced a lot of tangible results but is, as far as I understand it, not much closer to AGI than the first wave was. And some are, again, overpromising and under-delivering, which risks a second AI winter, though for less good reasons.
All in all, it would be nice if people would stop making these claims; it isn't helping at all.
AI hype is pretty excessive right now, but there’s a lot of fuel to keep the fire going: the internet makes acquiring data much easier than in the 1980s, accelerators are ubiquitous (and even present in mobile devices), and most importantly, current methods (e.g. deep nets and very large linear models) have shown excellent generalization accuracy. The internet also helps with dissemination. The hype isn’t going anywhere, but there are probably at least another five years of explosive growth left for the field.