Two generations ago we were calling chess programs AI. Last generation it was face detection in images. Each generation makes a new clever thing, calls it AI for a while, puts it to good use, and then forgets about it in favor of the next shiny thing called AI 'for real this time.'
Don’t get me wrong: the new large models are amazing, and will be useful. Not dissing them. Just observing the history of the term ‘AI’ in popular usage.
This is an irony I've noticed too. Some engineers tell me their managers are in a state of panic about how to "best make use of AI" (translated: how to ride the current hype wave to get nice PR points). The correct answer, of course, is "we already use AI and have been doing so since the 80s".
Considering it is ridiculously impractical to use ChatGPT in many areas where 1980s AI techniques are in use (think metaheuristics, chess, etc.), I suspect this situation is rather common.
There is an old quote about this that I'm too lazy to find at the moment (from the 80s or 90s?). The gist was that we call the new thing AI until we really understand it, and then we start calling it machine learning.
It would be nice to have a table where, say, column A has "linear regression" and column B lists something the business already uses it for.
Sure, we can also include bin packing for supply chain and logistics, inventory forecasting, and existing warehouse automation, including computer vision and such.
But once you get to NLP, deep learning, and LLMs, doesn't it kind of go off the rails?
Two generations ago we were calling chess programs AI...
I think this observation held up well until AlphaGo hit the scene. Then it started to sound a bit less insightful. At this point, it's just whistling past the proverbial graveyard.