[flagged] AI will be dead in five years (erikgahner.dk)
41 points by erikgahner 5 months ago | 38 comments



>I doubt we will talk a lot about AI in five years because AI will be an integral part of how people search for and interact with information. Or, if we talk about AI in five years, it will most likely refer to different technologies and solutions than what we denote as AI in 2024.

It's the latter. In 5 years, what we consider to be AI today will just be standard product features and the AI being discussed will be some new set of capabilities that are just being developed, and the same will happen 5 years after that.


AI as it's called today is an improvement over existing things - image generators, conversational interfaces, customer support automation, that kind of thing - and that aspect of it won't go away, especially not if "good enough" becomes/stays affordable.

But whatever we have in five years will probably take over the "AI" moniker, like how ML had it ~10 years ago.


I share your perspective.

During the last AI winter, we relied on things like Bayesian filtering to categorize spam without a lot of ado, and it was a product of the previous AI surge.
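For anyone who never saw one up close, here is a minimal sketch of that kind of Bayesian filter; the per-word probabilities below are invented for illustration, not taken from any real corpus:

    from math import prod

    # Hypothetical P(spam | word) estimates learned from labeled mail.
    word_spam_prob = {"viagra": 0.97, "free": 0.80, "meeting": 0.05}

    def spam_probability(message, prior=0.5):
        # Combine per-word evidence under a naive independence assumption.
        words = [w for w in message.lower().split() if w in word_spam_prob]
        if not words:
            return prior
        p_spam = prod(word_spam_prob[w] for w in words) * prior
        p_ham = prod(1 - word_spam_prob[w] for w in words) * (1 - prior)
        return p_spam / (p_spam + p_ham)

    print(spam_probability("free viagra"))   # ~0.99, spam
    print(spam_probability("team meeting"))  # ~0.05, ham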

Transformer-based inference and generation are already useful, and there's no going back from that, even if the architecture never gets us to artificial superintelligence.


Ah, the classic: make a bold prediction to grab headlines, because it doesn't matter if you're right or wrong; memories are short.


Read the article.

It's talking about how what we call "AI" today will be treated as bog standard - the same way it happened with recommender systems 3-4 years ago, neural networks 4-7 years ago, statistical learning (e.g. SVMs) 7-10 years ago, etc.
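Case in point: training one of those once-cutting-edge models is now a couple of stock library calls. A sketch using scikit-learn's standard API on its bundled iris data:

    # An SVM classifier, once research-grade "AI", now a commodity call.
    from sklearn.datasets import load_iris
    from sklearn.svm import SVC

    X, y = load_iris(return_X_y=True)
    clf = SVC(kernel="rbf").fit(X, y)   # fit on the classic iris dataset
    print(clf.predict(X[:3]))           # [0 0 0], i.e. setosa samples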

The title is a reference to a fairly prominent article about "Big Data" that was based on the same premise.


I read the article, but I still think the criticism of the title is valid. The claim is that the way we talk about AI will be different in 5 years, not that AI will be dead. Likewise, recommender systems, neural networks, and statistical learning are certainly not dead. It's an abuse of the term to grab clicks.


I strongly disagree - especially because the title is a reference to "Big Data Will Be Dead In 5 Years", which talked about the same phenomenon, albeit for data engineering.

Titles are not arguments. Some people may want them to be, but they are not.

Engaging with a title just distracts a discussion from the core thesis of a post.


Have these AI True Believers heard the phrase "any sufficiently advanced technology is indistinguishable from magic"? What’s magical about a recommendation system? Or statistical learning?

Well maybe they are? But they are all very specialized tools. And it’s not difficult to understand how they conceptually work. I guess...

Meanwhile a conversational partner in a box that can answer any of my questions? (If they eventually live up to the promise) Umm, how the hell would a normal person not call that magical and indeed “AI”?

I’m sorry, but the True Believers never made anything before that was remotely close to AI. Not until contemporary LLMs. That’s why people didn’t call those earlier things AI.


Point taken! I should have considered a subtitle for the post to make it clear what my main point is. That being said, I tried to nuance the post and not make any bold predictions. I am happy to revisit the argument in five years and see where we are.


Are you reviewing the article or AI?


Yep. We rarely talk about “the internet” because everything is the internet. We even more rarely talk about transistors, outside of tech enthusiast circles. And electrons get even less attention.

AI is an ingredient that will power products and experiences, and we will talk about those.


We will still talk about AI in five years because AI is still massively behind what the general public thinks of as AI.

Think Skynet or HAL: all-powerful, all-knowing systems that run every aspect of our lives.

Even our concept of VR is so far behind what people have expected from sci-fi movies for decades. Until we can jack in and get full sensory feedback, VR will continue to get talked about as more than fancy 3D glasses.

In the 80s we all wanted the hoverboards from Back to the Future. We never got them, but we keep calling things with wheels hoverboards, because these ideas are too good to die and the general public still wants them.


I doubt it. A recommender system is not "obviously" AI - it could easily be just static lists, or based on viewing counts (wouldn't be surprised given how bad Netflix's algorithm is).
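To make the point concrete, a "recommender" driven by nothing but viewing counts fits in a few lines; the watch-log shape and titles below are invented for illustration:

    from collections import Counter

    # Hypothetical watch log: one entry per view of a title.
    views = ["Stranger Things", "The Crown", "Stranger Things",
             "Dark", "Stranger Things", "The Crown"]

    def recommend(seen, k=2):
        # Rank titles by raw popularity; skip what the user has seen.
        ranked = [t for t, _ in Counter(views).most_common() if t not in seen]
        return ranked[:k]

    print(recommend(seen={"Stranger Things"}))  # ['The Crown', 'Dark']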

LLMs and gen AI are totally different. They are very clearly AI. You're literally having a conversation with a computer! I don't think people will stop calling it AI.


30 years ago, graph traversal algorithms using weights and minimax were treated as AI.
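For readers who weren't around then, a toy version of that era's "AI" - minimax over a small game tree, with made-up leaf scores:

    # Minimax over a hand-built game tree: inner nodes are lists of
    # children, leaves are heuristic scores for the maximizing player.
    def minimax(node, maximizing=True):
        if isinstance(node, int):   # leaf: static evaluation
            return node
        scores = [minimax(child, not maximizing) for child in node]
        return max(scores) if maximizing else min(scores)

    tree = [[3, 5], [2, 9], [0, 7]]  # depth-2 tree with six leaves
    print(minimax(tree))             # 3: the best guaranteed outcome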

And recommender systems were/are absolutely a ML/AI subfield.

> I don't think people will stop calling it AI

That already happened in the above two cases, as well as in plenty of other cases across the entire ML/AI world.


> 30 years ago, graph traversal algorithms using weights and min-max were treated as AI.

Yes, by AI practitioners. The difference is that the general public has always had an idea of what "AI" means, and LLMs are the first thing that actually matches that.


Site is dead for me; here is a link:

https://archive.ph/FxbQH


I don't mind the classification that "machine learning equals AI plus time". To me, AI is any surprisingly effective use of machine learning. As time goes by we expect more, but you'd have to be the most cynical curmudgeon in the world not to look at the last 15-20 years and be amazed at what we've achieved. And it doesn't follow that anything short of AGI will fail to justify the hype. I'm not going to begrudge the hype if self-driving cars roll out to more countries, or if I get a robot doing household chores in 10 years' time.


AI right now reminds me exactly of the blockchain/crypto hype: people trying to apply blockchain to everything, calling anything a "blockchain-based" solution. However, blockchain-based solutions never panned out, because they were an attempt to apply a technology to use cases rather than to solve actual problems. Now when you say "blockchain-based X", people will scoff at you. AI will follow the same path, I believe, as LLMs are less applicable to meaningful problems than expected.


With Bitcoin at 100k+ USD and LLMs and generative AI being used daily by tons of people including myself, your comment makes 0 sense to me.

The only reason people will stop talking about them is because they've become completely normal in their daily lives IMO.


They're referring to a trend from around a decade ago where people started trying to add "blockchain" (not bitcoin) to projects that really didn't need anything of the sort because it sounded neat and they wanted a marketing boost.


ChatGPT and coding helpers are the only useful applications, just like Bitcoin and Ethereum are the only useful cryptocurrencies. Hence the limited applications for either technology.


Blockchain and crypto need a solid technical understanding just to interface with them. ChatGPT and AI are accessible to anyone with basic literacy, and with voice, even that's optional. I do see how these emerging technologies are similar, but they also have key differences.


Only at a surface level; to do anything meaningful, you likely need to understand deeper technical details like RAG, context windows, tokenization, etc.
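For example, even checking whether a prompt fits a context window means thinking in tokens, not characters. A rough sketch using the open-source tiktoken tokenizer; the encoding name and the 8192-token limit are assumptions, not tied to any particular product:

    import tiktoken

    enc = tiktoken.get_encoding("cl100k_base")  # assumed encoding
    CONTEXT_WINDOW = 8192                       # assumed model limit

    def fits_context(prompt, reserved_for_reply=1024):
        # Count prompt tokens and leave headroom for the reply.
        n_tokens = len(enc.encode(prompt))
        return n_tokens + reserved_for_reply <= CONTEXT_WINDOW

    print(fits_context("Summarize this document ..."))  # True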


Good point. In 5 years we will have a different name for a new hype related to AI :-)


- AAI - Advanced AI

- AI 2.0

- AI 4.0 (Industrial revolution, again, so let's use higher numbers)

- WebAI 5.0 (skipping enough numbers to now be sure it will be _novel_)

Any more ideas? No wrong answers, in five years no one will look at this list.


The Machine

https://en.wikipedia.org/wiki/The_Machine_Stops

(May be further out than five years)


There will always be a desire to inflate some damned bubble or other.


Yes. Most SaaS tools are just CRUD operations on databases. But no one talks about databases as a magical tool anymore - even though that's what they were in the 1990s, with Oracle vs. Sybase vs. Informix vs. Ingres.


But is this still true considering LLMs? Is it possible that LLMs could save AI?

https://news.ycombinator.com/item?id=42431690


We have had two years of "breakthrough" LLMs, and every company now shipping them makes mediocre products. The article is exactly about "LLMs won't be AI in 5 years".

Also, we have been using if/else and math as AI systems for decades. Wikipedia lists a lot of ideas under AI. Every calculator was AI when it was invented, since it took a job (calculating) from a human. Says the wiki, not (only) my opinion.


CEOs have historically blamed AI, and despite the massive layoffs, it seems like you're also attributing the reduction in quality to AI.


Click the link.


The next leap in ML, RL, and AI could come from large-scale, quantum-computing-accelerated training of models. It might take twenty years to get there, however.


I really do think this time is different; we are on the precipice of something huge.




TLDR: In five years, AI will fade as a buzzword because its success will make it an invisible, standard part of technology.


TLDR version:

This is not to say that we will not talk about AI at all in five years, in the same way that we still talk about big data today, only that we will not talk about AI in five years because of its success.


After Skynet kills all humans and AI at the same time?



