AI Winter (wikipedia.org)
55 points by leonidasv on Nov 25, 2019 | 41 comments



Really, what we are in the midst of is AI clickbait. People are predicting the next "AI Winter" the way major publications predict the next recession.


Journalists have successfully predicted 11 of the last 6 AI winters.


It's a new thing every year. It was blockchain. Now it's AI and machine learning. Soon there will be a different buzzword to generate clicks and investor funds.


Well, IoT has come back a bit too, with "edge processing". Buzzword trends kill me, mostly because of non-technical managers asking about using them with no real understanding of the what, how, and why, rather than because of the public buzz they generate. Blockchain is the king of that, but I also think calling ML "AI" was a ridiculous development, and I hold steadfast in my use of "ML".

You are right too, of course, about the investor situation. That one is really bad because nothing says "employ a technology you don't consider a good fit for your business" like $1M...

I once had to switch prematurely to microservices due to investor pressure; the company wasn't ready, and you can imagine the rest.


And when one goes through enough cycles, one learns to wait for the dust to settle, or to sell shovels, instead of jumping into the race.


This is as good a place as any to ask this I suppose.

Google results being as manipulated as they are these days, what is the best course or book for self-teaching artificial intelligence? I'd like to spend a few months familiarizing myself with the concepts and writing programs with what I learn, to see if anything useful comes of it.

edit: Found this, which answers it perfectly: https://news.ycombinator.com/item?id=15689399


I'd suggest a university statistics book and a book on data analysis or data mining.


For a non-traditional hands-on approach, I gathered some resources here: https://news.ycombinator.com/item?id=21241092


I used this book in school: http://aima.cs.berkeley.edu


fast.ai


Fast.ai is a great resource, but it is narrowly focused on deep learning, and especially on transfer learning.


GP wants to self-study and has only a few months; I'm not aware of a better resource for that purpose.


Russell and Norvig [0]. The best introduction IMO to "real" AI, as opposed to the ML stuff that currently passes for AI.

[0] http://aima.cs.berkeley.edu/


I find this vaguely derogatory. What the linked book looks like to me is old-school, mostly symbolic AI, which most experts would agree hit something of a brick wall decades ago.


I'm an expert and I don't agree. The successes of symbolic AI have been many; they're just not called AI any more. The failures of ML are many; they're just ignored. How many layers do you need and why? How many training cases do you need and why? What has the network learned and how do you know that? What important things has the network not learned? When will it fail?

Until you can answer these questions, you're not doing science. You're just throwing data at Tensorflow or Keras until you guess it's good enough. And then someone dies in a self-driving car accident because the network didn't learn that a transverse truck or a pedestrian was an obstacle.

ML does impressive stuff, but as several recent HN articles have pointed out, it's basically just curve fitting. And voodoo curve fitting at that.
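
(For concreteness, here is a minimal sketch of the kind of "throw data at Keras until it looks good enough" workflow being criticized. The toy data, layer sizes, and epoch count are arbitrary choices of mine, which is rather the point.)

    import numpy as np
    import tensorflow as tf

    # Toy data: a noisy sine curve.
    rng = np.random.default_rng(0)
    x = np.linspace(-3.0, 3.0, 500).reshape(-1, 1)
    y = np.sin(x) + 0.1 * rng.normal(size=x.shape)

    # How many layers? How many units? There is no principled answer here;
    # these numbers are guesses, which is exactly the criticism above.
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(64, activation="relu", input_shape=(1,)),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(1),
    ])
    model.compile(optimizer="adam", loss="mse")

    # Train until the loss "looks good enough".
    model.fit(x, y, epochs=200, verbose=0)
    print("final MSE:", model.evaluate(x, y, verbose=0))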


And in how many recent applications has this "old" symbolic "AI" surpassed ML? I agree with the parent commenter.

>"hit something of a brick wall decades ago"

That much is true. Why do you disagree? What improvements has "old-school, mostly symbolic AI" brought to the current field of research?

Sure, ML has failures, but those failures are in applications and fields where old-school symbolic AI can't even reasonably be applied. We have to start somewhere, and symbolic AI alone falls far short of the requirements we have today.

>"How many layers do you need and why? How many training cases do you need and why? What has the network learned and how do you know that? What important things has the network not learned? When will it fail?"

A lot of these issues have been addressed in recent papers, many of which focus specifically on understandable/explainable machine learning, an overarching topic that covers all of your questions.

>"Until you can answer these questions, you're not doing science."

So you are essentially saying a large part of CS academia is not doing "science". I'm not sure what kind of "science" you have done to make such comments, but I'm pretty sure plenty of researchers who are far more expert in this field than you are would wholly disagree with you.


I think there is definitely a path towards making neural nets more of a science. The trouble is, it's just so easy to get results quickly and publish without much deep understanding. The meme is that neural nets work only in practice, not in theory. I do not think we should dismiss the field, though, because people are working hard to bring the theory up to speed.

Neural nets are also showing a lot of promise for symbolic tasks like automated theorem proving. So I would predict that, in time, they become a standard technique in what would otherwise be the domain of older techniques.


Is OP implying we are entering another AI winter after this most recent hype cycle for general AI?

Considering we use neural networks to encode audio (in Opus), transcribe our speech, secure our homes and much more, this most recent AI wave has been quite productive.


All AI waves have been very productive. The last ones gave us huge advancements in programming language design, operating systems, networking, and HCI, and spawned a few programming paradigms...

It does not really translate to the final objective though.


What is it that AI has done for all those fields that you mentioned?


AI research directly coincided with the development of early computer systems. Lisp and friends (along with small but important ideas that came with it, ranging from string interning to garbage collection to the macro system to the fundamental ideas of compilers and interpreters) were largely contributed by people involved in symbolic AI research.

Early programming-language parsing research is a direct product of work on natural language processing; in fact, the grammar formalism behind BNF was developed for natural languages and later adapted and improved for programming languages.

The idea of logic programming, with Prolog and friends, comes directly from AI research.

Most of the search algorithms we use unknowingly in various machines have origins in the first AI wave.

Human-computer interaction research directly dealt with the development of fundamental ideas in speech synthesis, graphical user interfaces, and computer graphics.

All in all, the fields of applied AI and computing developed together, and many early ideas spearheaded by AI transferred into fundamental, general computing: ideas so trivial we do not even think about them now. But they were not so trivial when they were developed, specifically for AI.


I did not post it to make any prediction; I just found it curious that the term "AI Winter" was coined so long ago. The article also highlights important historical moments in AI development and how investors appear to overreact to AI results, both good and bad.


The first AI winter followed people making grandiose claims about real AI being 'just around the corner' as soon as we had figured out whether to call the main loop 'Ego' or 'Frank'.

That of course went nowhere. The current AI revolution has produced a lot of tangible results but is - as far as I understand it - not much closer to AGI than the first one was. And some people are - again - overpromising and under-delivering, which risks a second AI winter, though for less good reasons.

All in all, it would be nice if people would stop making these claims; it isn't helping at all.


AI hype is pretty excessive right now, but there's a lot of fuel to keep the fire going: the internet makes acquiring data much easier than in the 1980s, accelerators are ubiquitous (and even present in mobile devices), and, most importantly, current methods (e.g. deep nets and very large linear models) have shown relatively excellent generalization accuracy. The internet also helps with dissemination. The hype isn't going anywhere, but there's probably still at least another 5 years of explosive growth left for the field.


Won't we enter another AI winter unless we can solve things like finding something better than backpropagation, or jumping straight to the student model in a distillation teacher-student setup? These gaps tell us there have to be better methods out there that we haven't found yet. Though I could be completely wrong, and Hinton could come out tomorrow with a groundbreaking paper that fuels another 5 to 10 years of iterative improvements.
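
(For anyone who hasn't seen the teacher-student setup mentioned above, here is a minimal sketch of the usual distillation loss, in the spirit of Hinton et al.'s knowledge-distillation work. The temperature T and mixing weight alpha are illustrative hyperparameters I picked, not values from any particular paper.)

    import tensorflow as tf

    def distillation_loss(teacher_logits, student_logits, labels, T=4.0, alpha=0.5):
        # Soft targets: the teacher's predictive distribution, softened by temperature T.
        soft_teacher = tf.nn.softmax(teacher_logits / T)
        soft_student = tf.nn.softmax(student_logits / T)

        # Pull the student toward the teacher's softened distribution.
        kd = tf.keras.losses.KLDivergence()(soft_teacher, soft_student) * (T ** 2)

        # Ordinary cross-entropy on the hard labels keeps the student tied to the data.
        ce = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)(labels, student_logits)

        # The parent's point: we only get the small student this way because
        # we trained the big teacher first.
        return alpha * kd + (1.0 - alpha) * ce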


Hinton already came out with a “groundbreaking” paper not too long ago - capsule networks.


The capsule papers don't strike me as particularly groundbreaking. Still convolving over every single location in the input image, still using backprop. Vectors instead of scalars for activations and a novel way to pass signals between layers? Sure, why not. But if it hadn't come from Hinton, no one would've noticed. The initial results were underwhelming, and there has been zero progress even 2 years later.


In this 1988 video, Teuvo Kohonen tells us that future AIs will be "separate units you can attach to your regular computer". This sounds right; methinks real AIs will be treated like human personalities with certain skillz and experience: "This AI has 3 years of experience driving on congested roads in India, 300€". They will be separate units because the hardware will be more brain-like, and the architecture is constructed or grown as it gains knowledge and skills. https://youtu.be/Qy3h7kT3P5I?t=2479


Even with just incrementalism, there's still enough value to extract from current deep learning methods to propel a small country into a golden age. Leaps and bounds are needed to get to AGI, but winter comes when there's no more money on the table, not because we can't reach our sci-fi fantasies.


AGI = Artificial General Intelligence?


Yes!


Can’t learn to ski without snow...

Also, the first line of the wiki mentions that this entry needs to be updated.

The level of funding is as hot as it's ever been. NOAA predicts a winter with temperatures above normal. So I would go with that.


With Elon's latest truck, we've just crawled out of the cyber winter. And that was a cold one.


Yeah, it’s winter time and we have AI.


I'm rather confused by the misuse of the term "AI", to the point that I no longer know what is true. I thought true AI needed to pass some sort of test demonstrating real intelligence? So, did the Singularity already happen and I didn't notice?


>In computer science, artificial intelligence (AI), sometimes called machine intelligence, is intelligence demonstrated by machines

The term AI doesn't actually imply human-level intelligence.


Missing "is coming" in the title.


We're in an AI Summer if anything.


Idk, summer is typically something you enjoy.


I don't see any signs of funding or enthusiasm letting up, though. These types of posts are more reactionary goalpost-moving ("well, it's not really 'strong' AI because X"), never mind that lots of people are finding real applications for ML. Processing power to support it is increasing and funding is still flowing.


We are currently in the AI winter. What we have now is not AI; it's curve fitting. Don't be fooled: the computer does not "understand", it's doing blind curve fitting.



