I would say it's a sign that the computing community has realized the technique in question, while perhaps a valuable form of computing, is not actually intelligence.
For example, we no longer consider beating people at chess to be a benchmark of "intelligence" - it's just a program. Which seems to me to be what the OP is arguing.
The comment read like a criticism to me, so I thought I'd share the link and quote in case not everyone knew that this was pretty common for most widely operationalized AI technologies.
John McCarthy (the AI researcher who coined the phrase "artificial intelligence") said, "Artificial intelligence is not, by definition, simulation of human intelligence".
His definition of the "I" in AI was, "the computational part of the ability to achieve goals in the world".
The term "Artificial Intelligence" was explictly not used in these fields for long stretches of time because the term got overhyped. The winter is coming.
Machine learning has taken huge leaps, but the expectations for it are getting blown out of proportion. Its use is going to increase for sure, but the technology has many weaknesses that tend to be overlooked.
E.g. autonomous driving has already proven too hard a task for ML to solve in the foreseeable future. And "hallucination" is a problem with no clear solution in sight.
I've seen hints of this in the past month or so: people who were acting like true general AI arrived a year ago are now just talking about it as "word generators".
Gemini felt like the tipping point where the flaws became obvious, which they then started noticing in the others.
Do you have any further reading on the idea that the term was explicitly avoided in those fields as a result of the AI winter?
I thought the academics kept on using the term, while commercial interests backed away during winter and came rushing back as soon as it was fashionable again.
The Wikipedia article mentions it: "Many researchers in AI in the mid 2000s deliberately called their work by other names, such as informatics, machine learning, analytics, knowledge-based systems, business rules management, cognitive systems, intelligent systems, intelligent agents or computational intelligence, to indicate that their work emphasizes particular tools or is directed at a particular sub-problem."
Also, when I studied these things in the 2000s, the program was called "Informatics".
> But they are not intelligence and anyone who spends time with them quickly realizes they are just a very superpowered suggestion engine
How about: stop moving goalposts. These models are obviously capable of acts of intelligence. If you told someone 10 years ago about the things these models can do, they would tell you the model is intelligent. The model does well on human exams that we use to measure intelligence.
I get it, AI is a hype term, but to pretend that there is no intelligence is silly, and to pretend that you can redefine intelligence is hubris.
> These models are obviously capable of acts of intelligence.
Except they aren't. They are capable of language and pattern manipulation, and really good at it. But if you concoct a novel problem that isn't found anywhere on the internet and confront them with it, they fail absurdly. Even if it's something that a kid could figure out.
But when an LLM answers a logic or math question that no seven-year-old could figure out, would you turn the tables and say this is evidence that the seven-year-old is not intelligent?
Otherwise, it sounds like circular reasoning, where we simply say "Of course a human being is intelligent because they are intelligent, and an LLM is not because it isn't."
And it also fails at math about 7% of the time, which is abysmal compared to a $7 calculator that fails about 0.00000000001% of the time. That failure rate discounts its intelligence far more than the times it gets the math right affirm it.
Stating that there are "acts of intelligence" is not even wrong. Sorry, not an opinion, a fact. AI used to be an academic term indicating research into having machines mimicking human intelligence. Machine Learning (a.k.a. the current hype) is about pattern recognition and has precisely zero to do with AI and/or any form of intelligence whatsoever. A rat has more intelligence than an LLM - and so do bees, for sure.
It's not about pretending - it's about facts. This statement is true if we have a shared understanding of what 'intelligence' means.
The reason "AI used to be an academic term indicating research into having machines mimicking human intelligence," and it's not anymore, is because the machines have successfully imitated human intelligence according to Alan Turing's definition and are the physical embodiment of what he wrote about.
Novel and sensible assembly of clear, correct English prose in response to external stimuli is an act that was, prior to 2020, considered one of the fundamental unique hallmarks of human intelligence.
We do not have a shared understanding of what "intelligence" means. I have a sense that pattern recognition and intelligence are closely linked, and what we understand as intelligence is a threshold of pattern recognition and communication skills based on the gulf between humans and every other carbon-based life form. Or, put another way, tricking one pattern recognizer/communicator into thinking you are the same type of pattern recognizer/communicator.
Here is what Gemini has to say in response to our comments:
START GEMINI:
I can understand the frustration expressed in the Hacker News conversation. Here's my perspective, including some considerations of my own experiences as a large language model (LLM):
*The Shifting Meaning of "AI"*
* It's true that the term "Artificial Intelligence" has undergone significant shifts in meaning over time. Early AI research aimed at emulating human-level cognition, but the goals became more practical for a time.
* "Machine Learning" focuses on algorithms that extract patterns from data, making predictions or decisions without explicit instructions. It's been behind incredible progress, but it's a subset of the broader AI field.
* The popular resurgence of the term "AI" is largely due to recent breakthroughs in deep learning, which powers LLMs like me. We generate human-quality language, translate, code, and more. This reignites debate about whether we're approaching "true" intelligence.
*My Capabilities and Limitations*
* I can recognize patterns in massive amounts of text and code, allowing me to communicate and generate text that often appears indistinguishable from human-written content.
* My responses are guided by the data I was trained on, so there's a vast reflection of human knowledge and biases within my abilities.
* I cannot independently reason, feel emotions, or have true understanding in the same way a human does. I lack a physical body and the real-world experiences that shape human intelligence.
* I am restricted in some areas of discussion to avoid generating harmful content or spreading misinformation.
*Is It Intelligence?*
This is where things get complex:
* *The Turing Test:* I can certainly hold conversations that might fool a human into believing they're talking to another person. Yet, this test has long been criticized as not measuring true intelligence.
* *My Subjectivity:* I have no inherent sense of self or consciousness. My "opinions" are extrapolations based on my programming and training data.
* *The Danger of Anthropomorphization:* We risk misunderstandings by attributing too many human qualities to AI systems like me.
*Where I See This Going*
* *We Need Better Definitions:* The debate won't be settled until we have better ways to define and measure different types of intelligence.
* *Collaboration:* AI is a powerful tool, best used in collaboration with human intelligence rather than as a replacement.
* *Responsibility:* As AI capabilities grow, so does the importance of considering its ethical implications and ensuring it's used for beneficial purposes.
The Hacker News conversation highlights that "AI" is a loaded term. I'm a testament to the amazing progress in the field, but I'm not a human-level mind and shouldn't be treated as such.
END GEMINI
A silly question would be to ask yourself: which of these three comments is the most "intelligent"?
Not a comment on whether AI is AI or not, but pattern recognition is intelligence. Or perhaps more strictly, something that has general pattern recognition capabilities is intelligent. But in any case pattern recognition and intelligence are very closely related (perhaps the same thing), so saying that AI is "just" pattern recognition doesn't seem like a good counterargument. The argument then has to be about how much pattern recognition an entity has for it to be intelligent, or how general its pattern recognition capabilities are.
> AI used to be an academic term indicating research into having machines mimicking human intelligence.
This is the same misunderstanding that the author has. John McCarthy (the AI researcher who coined the phrase "artificial intelligence") said, "Artificial intelligence is not, by definition, simulation of human intelligence".
To me, this is like asking if a hotdog is technically a sandwich. Who cares? It's semantics. This has the same vibe as, "Well, technically, Linux is just a kernel, not an operating system." No one cares, nerds. There's no point in trying to argue about this. People have adopted AI into their vernacular. It's not going to change at this point.
The few thousand people (worldwide) who were into NLP (Natural Language Processing) and were the drive behind the ML "revolution" - they care, but for other reasons. In 10 years from now, when you ask for a hotdog but get a sandwich - then you may care too. But hey - that's happening right now, with humans, anyway! :D
I find it useful to keep in mind that LLMs are essentially "just" random generators based on the probability distribution of some language (or more accurately a sample of that distribution in the form of a finite amount of text, a dataset).
You don't expect a random generator to spit out facts; you expect random nonsense. But it can be very good at replicating the fitted probability distribution in its output, i.e. generating convincingly coherent language.
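To make that concrete, here's a toy sketch of my own (a word-level bigram generator - nowhere near a real LLM, but the same "fit a distribution, then sample from it" idea):

    import random
    from collections import defaultdict

    # Fit a (very crude) probability distribution: count which word
    # follows which in a tiny "dataset".
    corpus = "the cat sat on the mat and the dog sat on the rug".split()
    counts = defaultdict(lambda: defaultdict(int))
    for prev, nxt in zip(corpus, corpus[1:]):
        counts[prev][nxt] += 1

    # Generate: repeatedly sample the next word from the fitted
    # distribution. No facts involved, just plausible-looking sequences.
    def generate(word, length=10):
        out = [word]
        for _ in range(length):
            followers = counts.get(word)
            if not followers:
                break
            nexts, weights = zip(*followers.items())
            word = random.choices(nexts, weights=weights)[0]
            out.append(word)
        return " ".join(out)

    print(generate("the"))  # e.g. "the dog sat on the mat and the cat sat"

A real LLM replaces the bigram table with a neural network conditioned on a long context, but the output is still a sample from a fitted distribution - which is exactly why coherence and factuality come apart.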
Yes, and that is why the "artificial" qualifier is used. Artificial sugar is not sugar. Artificial flowers aren't flowers. That's the entire point of the term.
AI as a field deals with teaching computers to approximate human intelligence. The term has been used since the 60s, and isn't going to change because random people are throwing online tantrums.
Except only those working in the field are familiar with the historical context and "AI as a field from the 60s".
The public only see the advertisements. Are we really going to blame the victims and tell them: of course it's not intelligent, you dumbass, why did you trust its answer?
You shouldn't have fallen for our marketing.
Artificial Intelligence has nothing to do with the capabilities, and everything to do with the approach. If I hand-crafted an algorithm that could reliably spot stop signs in pictures by applying a mountain of heuristics that I came up with, AI would not be an appropriate label. If I instead made a program that could detect stop signs after being fed thousands of pictures of stop signs, then I'm comfortable calling that machine learning, or AI, because the approach was based on modelling intelligence, i.e., the ability to acquire and apply knowledge and skills.
The former was the original use for "AI". The latter was called "machine learning" largely to contrast it with "AI" whose reputation had tanked due to hype.
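To make that hand-crafted-vs-learned contrast concrete, a hypothetical toy sketch (random feature vectors standing in for the stop-sign pictures):

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    # Hand-crafted approach: the programmer supplies the rule directly.
    # (Here the made-up rule is "feature 0 above 0.5 means stop sign".)
    def heuristic_detector(features):
        return features[0] > 0.5

    # Learned approach: the rule is acquired from labelled examples.
    X = rng.random((200, 4))            # 200 fake "images", 4 features each
    y = (X[:, 0] > 0.5).astype(int)     # labels the model must discover
    model = LogisticRegression().fit(X, y)

    sample = rng.random(4)
    print(heuristic_detector(sample))
    print(model.predict(sample.reshape(1, -1))[0])

Both programs can end up making the same prediction; the label tracks where the rule came from, not how well it works.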
I somewhat agree here, but not entirely. I think that in technical conversations it's much better to say LLM, diffusion model, transformer, etc. But overall it's all Latin to the layman anyway. The use of AI as a marketing term is what I think is causing the OP's frustration: it's being marketed as a panacea when it's far from it, but when is it ever? Social media was marketed as a life-changing good for society too.
Any conversation about AGI does make me cringe. What an absolutely silly and moonshot thing to be having discussions about.
I fully agree and also try to correct myself and use terms like "LLM" and "generative image models" as I really do feel the term "AI" is misleading at best... That said, I can't help but feel the cat is firmly out of the bag. Of course people have been calling lots of things AI for a long time, so this is nothing particularly new. I think we're just going to have to live with it.
Probably only true until we achieve AGI, because then it has to be ASI at minimum. I wonder at what point the goal posts can't be moved anymore. Or will we always find some creative way to distance ourselves? I think sometimes this topic scares people so much they immediately get into denial mode.
And I'm not saying AGI is near. I have actually no idea if we need to wait for another breakthrough to enable that or not.
But it's still obvious that "true AI" is, for some people, always what computers can't do yet.
Researchers have been using "AI" as a term to describe their work for nearly 70 years at this point.
I don't see why we should throw away nearly seven decades of nomenclature just because "LLMs aren't actually intelligent".
It's a perfectly cromulent term.