[flagged] AI Bubble 2027 (wheresyoured.at)
73 points by speckx 22 days ago | 65 comments


Really hard to believe articles like this, and even harder to believe this is the hive mind of Hacker News today.

Work for a major research lab. So much headroom, so much left on the table with every project, so many obvious directions to go to tackle major problems. These last 3 years have been chaotic sprints. Transfusion, better compressed latent representations, better curation signals, better synthetic data, more flywheel data, insane progress in these last 3 years that somehow just gets continually denigrated by this community.

There is hype and bullshit and stupid money and annoying influencers and hyperbolic executives, but “it’s a bubble” is absurd to me.

It would be colossally stupid for these companies not to pour the money they are pouring into infrastructure buildouts and R&D. They know a ton of it is going to be waste; nobody writing these articles is surprising anyone. These articles are just not very insightful. The only silver lining to reading the comments and these articles is the hope that all of you are investing optimally for your beliefs.


I agree completely.

I work as an ML researcher at a small startup, researching, developing, and training large models on a daily basis. I see the improvements made in my field every day in academia and in industry, and newer models come out constantly that continue to improve the product's performance. It feels as if people who talk about AI being a bubble are not familiar with the AI that is not LLMs, and the amazing advances it has already made in drug discovery, ASR, media generation, etc.

If foundation model development stopped right now and ChatGPT never got any better, there would be at least five if not ten years of new technological development just building off the models we have trained so far.


Yes, HN discussions of LLMs are quite tiresome. I make indie apps, and the job had been getting harder and harder over the years as the API surfaces and UI variety of iOS and Android have grown.

Claude Code and ChatGPT brought me back to the early-2010s golden age when indies could be a one-man army. Not only for code, but also for localizations and marketing. I'm even finally building some infrastructure for QA automation! And tests, lots of tests. Unimaginable for me before, because I never had the bandwidth.

Not to mention that they unblock me and have basically fixed a large part of my ADHD issues because I can easily kickstart whatever task or delegate the most numbing routine work to an agent.

Just released a huge update of my language-learning app that I would never have dreamed of without LLM assistance (lots of meticulous grammar-related work over many months) and have been getting a stream of great reviews. And all of that for only $100+20 a month; I was paying almost twice as much for a Unity3D subscription a decade ago.


All that is fine. The bubble only happens if, in your ecstasy, you come to think too much of your indie apps, in which case Wall Street has no qualms about taking any rando AI app public. When this is done at scale, you create the toxic asset that 401(k)s pile into.

In short, you and others like you will enjoy your time, but will care very little about the systemic risk you are introducing.

But hey, whatever, gotta nut, right?

—-

I don’t mean you specifically. Companies like Windsurf, Cursor, and many others are all currently building the package for Wall Street with literally no care that it will pull in retail investment en masse. This is going to be a fucked-up rug pull for regular investors in a few years.

We’ve been in a much wilder financial environment since 2008. It’s very normal for crypto to be seen as a viable investment. AI is going to appear even more viable. Things are primed.


Upvoted for a different perspective.

The thing to remember about the HN crowd is it can be a bit cynical. At the same time, realize that everyone's judging AI progress not on headroom and synthetic data usage, but on how well it feels like it's doing, external benchmarks, hallucinations, and how much value it's really delivering. The concern is that for all the enthusiasm, generative AI's hard problems still seem unsolved, the output quality is seeing diminishing returns, and actually applying it outside language settings has been challenging.


Yea a lot of this I understand and appreciate!

- offline and even online benchmarks are terrible unless they're actually a standard product experiment (A/B test, etc.). Evaluation science is extremely flawed.

- skepticism is healthy!

- measure on delivered value vs promised value!

- there are hard problems! Possibly ones that require paradigm shifts that need time to develop!

But

- delivered value and developments alone are extraordinary. Problems originally thought unsolvable are now completely tractable or solved even if you rightfully don’t trust eval numbers like LLMArena, market copy, and offline evals.

- output quality is seeing diminishing returns? I cannot understand this argument at all. We have scaled the first good idea with great success. People really believe this is the end of the line? We’re out of great ideas? We’ve just scratched the surface.

- even with a “feels” approach, people are unimpressed?? It’s subjective, you are welcome to be unimpressed. But I just cannot understand or fathom how


The way I've been thinking about this is that there is The Tech and The Business. The Tech is amazing and improving all the time at the core, then there are the apps being built to take advantage of the Tech, a lot of which are also amazing.

But The Business is the bubble part. Like all the companies during the first internet boom/bubble who did stuff like lay tons of fiber and raise tons of money for rickety business plans. Those companies went out of business but the fiber was still there and still useful. So I think you're right in that the Tech part is being shafted a little in the conversation because the Business part is so bubbly.


The community is divided about this. There's no one hivemind.

There's a general negativity bias on the internet (and probably in humans at large) which skews the discourse on this topic as any other - but there are plenty of active, creative LLM enthusiasts here.


I agree — probably my own selective memory and straw-manning. It just feels in my mind like the “vibe” on HN (in terms of articles that reach the front page and top rated comments) is very anti-AI. But of course even if true it is a biased picture of HN readers.

Would be interesting to see some analysis of HN data to understand just how accurate my perception is; of course that doesn’t clear up the bias issue.


I'll take a shot at a rationale for this perspective, which is similar to a peer comment:

The tech is undoubtedly impressive, and I'm sure it has a ton of headroom to grow (although I have no direct knowledge of this, I'd take you at your word, because I'm sure it's true).

But at least my perception of the idea that this is presently a "bubble" is rooted in the businesses being created with the technology. Tons of money is spent to power AI agents to conduct tasks that would be 99% less expensive via a simple API call, or aimed at problems where the actual unstructured work is 2 or 3 levels higher in the value chain, and, given enough time, there will be new vertically integrated companies that use AI to solve the problem at the root and eliminate the need for entire categories of companies at the level below.

In other words: the root of the bubble (to me) is not that the value will never be realized, but that many (if not most) of this crop of companies, given the time it takes for new workflows and technology to take hold in organizations, will almost certainly not survive long enough to be the ones to realize it.

This also seems to be why folks draw comparisons to the dot-com bubble, because it was quite similar. The tech was undoubtedly world-changing. But the world needed time to adapt, and most of those companies no longer exist, even though many of the same problems were solved a decade later by new startups that achieved incredible scale.


I don’t think people know what the definition of this bubble is yet. I can provide one:

- AI-first app companies that actually go public on the stock exchange

- Massive influx of investment from retail as the basket of “AI” is just too much to pass up

- This basket is no longer a collection of top-tier hardware and software titans, but one led by resellers and wrappers: Palantir, Cursor, Windsurf, finally rounded out with CRUD apps turned publicly traded companies. Figma going public is a very bad indicator of what’s to come. Perplexity going public would be one of my biggest red-flag moments.

- The basket I’m describing is the package that includes all these “toxic” assets.

- Some really dumb big players will lose here too, because they will acquire some of these resellers and wrappers at prices they’ll never recoup (News Corp buying MySpace).

- And finally, those who know, know, and they will bail first unscathed. Say it ain’t so, the story of our lives.

That will be the vehicle retail piles into. We’re a little ways away from that, as companies are still building out their AI offerings. We’ll need a flurry of companies like that to go public soon after OpenAI does, sparking the beginning of one of the worst bubbles ever. You won’t be able to make sense of it, because the bull market will make it impossible not to FOMO in.

That’s the systemic risk to this entire industry, and the broader economy, in a few years.

Remember, humans can’t have nice things. If the secondary companies didn’t rush to the stock market as their prime imperative, we wouldn’t have to worry, because all sensible investment would be in the large caps. The pursuit of gaudy returns will fail humans again, as always.

Stay safe and right-sized, all. The actual tech is not over-hyped.


So, one difference between this and the dot-com bubble is that it is much, much harder to go public now, and much, much easier to raise funds as a private company. This has led to loss-making private companies with valuations that would not have been remotely plausible a couple of decades ago. Arguably a more likely end to all of this is that the VCs turn off the tap, which would kill most companies in the space within a year or so, with fairly limited contagion to the broader markets; public companies that have gone heavily into it may be badly burned, but that would be about it.

Retail may never really get to participate at all, beyond trading Nvidia and similar.


Lol, why would VCs want retail to participate until the very end? This is the very nature of the VC game. Come on, wakey wakey!!!


Data point of two, but this podcast also recently floated 2027 as the crunch point: https://youtu.be/vp1-3Ypmr1Y?si=p4GlyPwZRWOkxFtt

In my uninformed opinion, though, companies who spent excessively on bad AI initiatives will begin to introspect as the fiscal year comes to an end. By summer 2026 I think a lot of execs will be getting antsy if they can't defend their investments


I'm a little skeptical of a full on 2008-style 'burst'. I imagine it'll be closer to a slow deflation as these companies need to turn a profit.

Fundamentally, serving a model via API is profitable (re: Dario, OpenAI), and inference costs come down drastically over time.

The main expense is twofold:

1. Training a new model is extremely expensive: GPUs, YOLO runs, data.

2. Newer models tend to churn through more tokens and are more expensive to serve in the beginning, before optimizations are made.

(not including payroll)

OpenAI and Anthropic can become money printers once they downgrade the free tiers, add ads or other attention-monetizing methods, and rely on a usage model, as people and businesses become more and more integrated with LLMs, which are undoubtedly useful.



Not really sure how this article refutes what I said?

He defines it as "everything that happens from when you put a prompt in to generate an output" -> but he seems to conflate inference with a query. Feeding input through the model to generate the next single token is inference. A query or response just means the LLM repeats this until the stop token is emitted. (Happy to be corrected here)

The cost of inference per token is going down - the cost per query goes up because models consume more tokens, which was my point.
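
To make the distinction concrete, here's a minimal sketch of autoregressive decoding, assuming a Hugging Face-style model and tokenizer (the API names are my assumption, not from the article). Each loop iteration is one inference step that emits one token; the query is the whole loop, so per-query cost scales with tokens consumed even as per-token cost falls:

    import torch

    def answer_query(model, tokenizer, prompt, max_new_tokens=256):
        # One forward pass = one inference step = one new token.
        ids = tokenizer(prompt, return_tensors="pt").input_ids
        for _ in range(max_new_tokens):
            logits = model(ids).logits        # inference: next-token distribution
            next_id = logits[0, -1].argmax()  # greedy pick (sampling is also common)
            ids = torch.cat([ids, next_id.view(1, 1)], dim=1)
            if next_id.item() == tokenizer.eos_token_id:
                break                         # stop token ends the query
        return tokenizer.decode(ids[0])       # cost grew with every emitted token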

Either way, charging consumers per token pretty much guarantees that serving models is profitable (each of Anthropic's prior models turns a profit). The consumer-friendly flat $20 subscription is not sustainable in the long run.

https://epoch.ai/data-insights/llm-inference-price-trends

https://www.snellman.net/blog/archive/2025-06-02-llms-are-ch...

https://x.com/eladgil/status/1827521805755806107


There is no question LLMs are truly useful in some areas, and the LLM bubble will inevitably burst. Both can be simultaneously true; we're just running up the first big slope of the hype curve [0].

As we learn more about the capabilities and limits of LLMs, I see no serious argument that scaling up LLMs with increasingly massive data centers and training runs will actually reach anything like a breakthrough to AGI, or even anything beyond the magnitude of usefulness already available. Quite the opposite: most experts argue that fundamental breakthroughs in different areas will be needed to yield orders-of-magnitude greater utility, never mind AGI (not that more refinement won't yield useful results, only that it won't break out).

So one question is timing: when will the crash come?

The next is: how can we collect, in an open and preferably independent/distributed/locally usable way, the best available models, to retain access to the tech when the VC-funded data centers shut down?

[0] https://en.wikipedia.org/wiki/Gartner_hype_cycle


We even have prior art. Web 1.0 and e-Commerce were truly useful and the bubble also burst.

Thinking further, railroads and radio are also good examples!


Yes, well, bubbles are a core part of the innovation process (new tech being useful doesn't imply a lack of bubbles); see e.g. "Technological Revolutions and Financial Capital" by Carlota Perez: https://en.wikipedia.org/wiki/Technological_Revolutions_and_...


Unlike that time, some money is actually being made. I heard some figures thrown around yesterday: total combined investments of over $500 billion, and revenues of about $30 billion, $10 billion of which was payments to cloud providers, so really $20 billion in revenues. That's not nothing.


Plenty of e-commerce places had revenue, they just didn't have profit and they usually spent on crazy stuff, like Super Bowl ads.


It might not be a paradox: Bubbles are most likely to occur when something is plausibly valuable.

If GenAI really was just a "glorified autocorrect", a "stochastic parrot", etc, it would be much easier to deflate AI Booster claims and contextualise what it is and isn't good at.

Instead, LLMs exist in a blurry space where they are sometimes genuinely decent, occasionally completely broken, and often subtly wrong in ways not obvious to their users. That uncertainty is what breeds FOMO and hype in the investor class.


I use LLMs all the time and do ML and stuff. But at the same time, they are, to a first approximation, averaging the internet. I think the terms "glorified autocomplete" and "stochastic parrot" describe how they work under the hood really well.


A top expert in US Trust & Estate Tax law whom I know well tells me that although their firm is pushing use of LLMs, and they are useful for some things, there are serious limitations.

In the world of T&E law, there are a lot of mediocre (to be kind) attorneys who claim expertise but are very bad at it (causing a lot of work for the more serious firms, and a lot of costs and losses for the intended heirs). They often write papers to market themselves as experts, so the internet is flooded with papers giving advice that is exactly wrong, and much more that is wrong in subtler ways that will blow up decades later.

If an LLM could reason, it would be able to sort the wrong nonsense from the real expertise by applying reason (e.g., comparing the advice to the actual legal code and precedent-setting rulings, and to actual outcomes), identify the real experts, and generate output based only on the writings of those experts.

However, LLMs show zero sign of any similar reasoning. They simply output something resembling the average of all the dreck of the mediocre-minus attorneys posting blogs.

I'm not saying this could not be fixed by Altman et al. applying a large amount of compute to exactly the loops I described above (check legal advice against the actual code and judges' rulings, check against actual results, select only the credible sources, and retrain), but it is obviously nowhere near that yet.
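
A purely hypothetical sketch of that curate-and-retrain loop, in code; every function here is invented for illustration, and a real system would need retrieval against statutes and case law plus expert verification:

    def consistent_with(advice, primary_sources):
        # Placeholder for a real check against statutes / rulings
        # (e.g., retrieval plus a verifier); here a trivial containment test.
        return any(advice in text for text in primary_sources)

    def curate(blog_posts, statutes, rulings):
        # Keep only advice that survives verification against primary
        # sources, then retrain on the survivors instead of the raw
        # average of the web.
        return [p for p in blog_posts
                if consistent_with(p, statutes) and consistent_with(p, rulings)]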

The big problem is that this is only obvious to a top expert in the field, who deeply knows from training and experience the difference between the top experts and the dreck.

To the rest of us who actually need the advice, the LLMs sound great.

Very smart parrot, but still dumbly averaging and stochastic.


Yup, I find LLMs are fantastic for surfacing all kinds of "middle of the road" information that is common and well-documented. So, for getting up to speed or extracting particular answers about a field of knowledge with which I'm unfamiliar, LLMs are wonderfully helpful. Even using later ChatGPT versions for tech support on software often works very well.

And the conversational style makes it all look like good reasoning.

But as soon as the conversation wanders off the highways into little-used areas of knowledge (such as wiring for a CNC machine controller board, instead of a software package with millions of users' forum posts), even pre-stuffing the context with heaps of specifically relevant documents rapidly reveals there is zero reasoning happening.

Similarly, the occasional excursions into completely the wrong field even with a detailed prompt show that the LLM really does not have a clue what it is 'reasoning' about. Even with thinking, multiple steps, etc., the 'stochastic parrot' moniker remains applicable — a very damn smart parrot, but still.


When the bubble bursts, what kind of effects are we going to see? What are your thoughts on this?


Massive layoffs from BigTech and lots of startups going under.


When AI is on the rise, layoffs are "because AI", and then when the AI bubble pops the layoffs are also conveniently "because AI".


Pre-ChatGPT:

• The largest publicly traded company in the world was ~$2T (Saudi Aramco, not even top ten anymore).

• Nvidia (currently the largest, at ~$4.3T) was "only" ~$0.6T.

• The top 7 public tech companies are where the predominant gains have accrued and held.

• On March 16, 2020, all publicly traded companies were worth ~$78T; at present, ~$129T.

• Gold has doubled over the same period.

>what kind of effects are we going to see

• Starvation and theft like you've probably barely witnessed in your 1st- or 3rd-world lifetime. Not from former stockholders, but from former underling employees, out of simple desperation. Everywhere, indiscriminately, from the majority.

• UBI & conscription, if only to lessen the previous bullet point.

¢¢, hoping I'm wrong. But if I'm not, maybe we can focus on domestic matters instead of endless struggles abroad (reimplement the Civilian Conservation Corps?).


I think you are a bit pessimistic on the economics. AI should increase overall output and prosperity, and there are a bunch of ways for politicians to redistribute things if people vote for it.


There is no scenario where these AI companies collapse and the working class comes out better off. We saw it with the 2008 crisis, we saw it with the dotcom bubble, and we’ll see it with this: The people responsible will ride off into the sunset on a golden horse whilst everyone else is left picking up the pieces.


>The people responsible will ride off into the sunset on a golden horse whilst everyone else is left picking up the pieces.

I'm going to quote my favorite client's eighty-eight-year-old wife, a miserly multi-millionaire:

>"Nobody wants to be the last one at the party, because then you have to help clean up all the mess!"

She is a die-hard Reaganomicist, unable to comprehend why none of her grandchildren (and only one of her daughters) is reproducing. My response to her husband, my friend, was that not every fish needs to see the shark for them all to respond appropriately.

Only one of my own brothers has a child, only one. My wealthiest brothers (and the friend above) are just now beginning to realize that something is massively wrong with how we're allowing society to continue operating. It's heartbreaking to witness their awakenings, years behind my own apathetic view(s).

>"It's incredible that I have all this inside of me — and to you it's just words..." —DFWallace (Pale King)

¢¢


> OpenAI began this hype cycle [...] and its death (or, as mentioned, some other kind of collapse, such as acquisition) is the sign that we’re done here, in the same way that FTX signaled the end of the cryptocurrency boom.

The collapse of FTX sent bitcoin from ~$20k to ~$17k. It's now $110k. I imagine the AI boom will 'collapse' in the same sort of way.

A lot of the economics depends on whether you think human-level intelligence is coming or not. Zitron kind of assumes not, in which case his economic doomerism makes sense. But if it does come, you could effectively double GDP, which is a lot of financial upside.


Having been through at least two AI hype cycles professionally, this is just another one.

Each cycle filters out the people who are not actually interested in AI; they are grifters and shysters trying to make money.

I have a private list of these starting from 2006 to today.

LLMs =/= AI, and if you don’t know this then you should be worried: you are going to get left behind because you don’t actually understand the world of AI.

Those of us that are “forever AI” people are the cockroaches of the tech world and eventually we’ll be all that is left.

Every former “expert systems scientist”, “Bayesian probability engineer”, “computer vision expert”, “big data analyst”, and “LSTM guru” is having no trouble implementing LLMs.

We’ll be fine


>this is just another [hype cycle]

As a casual observer for decades, I think this one is different in that we are at approximate hardware equivalence with the human brain, and still advancing, which will have interesting economic implications.


Yeah but it will deflate slightly as hype transitions to RL (finally!) and LLMs just become boring regular “tech.”


I'm no expert - could you explain this in layman's terms? I'd love to be able to make proper sense of stuff outside of LLMs.


Too long and complex for my morning, but ultimately we’re at a point of hardware/computing commodification and ubiquitous data where online RL can start to provide meaningful efficiencies on almost any real task.

Here’s two good papers to start with:

https://arxiv.org/abs/2410.14606

https://storage.googleapis.com/deepmind-media/Era-of-Experie...


Thanks, will read them.


It's a race to see which runs out of steam first: AI investment or Ed Zitron's schtick.


I'm more than a little worried about his mental state, since so much of his identity seems tied up in AI collapsing and in people rejecting AI. Both of those things seem unlikely.


Lol, basically. I do enjoy his consistent doomerism though, even if it's just for a laugh.


So many hot takes for the AI bubble bursting ANY DAY NOW, yet we keep chugging on.


They said there are 6 more quarters of funding left, so it should be busted by early-to-mid 2027.


I bet not. Let's come back in a couple years and see.


Lots of AI apps are creating a lot of value, that somehow gets overlooked in these convos


Can you provide some names of AI apps whose revenue > cost?


I mean, ChatGPT could easily be profitable today if they wanted to, but they're prioritizing growth


[citation needed]


Please stop the BS and take a basic corporate finance class.

FCFF = EBIT(1-t) - Reinvestment.

If OAI stops the Reinvestment, they lose to competition. Got it? Simple.
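
A back-of-the-envelope illustration of that identity, with invented numbers (nothing here reflects OpenAI's actual financials):

    ebit = 10.0          # operating profit, $B (invented)
    tax_rate = 0.21      # illustrative corporate rate
    reinvestment = 15.0  # training runs, data centers, capex, $B (invented)

    fcff = ebit * (1 - tax_rate) - reinvestment
    print(round(fcff, 1))  # -7.1: positive operating profit, negative free cash flow

Set reinvestment to zero and FCFF flips positive, but then, per the point above, you lose to the competition.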


A lot of value is being created with some of these AI apps, but are the people funding the development of these apps seeing a return on investment? (Honest question, I don't really know.)

The article mentions

> This is a bubble driven by vibes not returns ...

I think this indicates some investors are seeing a return. I know AI is expensive to train and somewhat expensive to run, though, so I am not really sure what the reality is.


Meta already has a hiring freeze in AI.


What I think is: the team that pulled off such large LLMs is not stupid.


Being smart doesn’t necessarily make your tech better.


This is the best bubble post I’ve seen this week on HN: https://craigmccaskill.com/ai-bubble-history

(Although I think the utility of server farms will not be high after the bubble bursts: even if cheap they will quickly become outdated. In that respect things are different from railway tracks)


Discussed here: https://news.ycombinator.com/item?id=45008209 - Aug 2025 (122 comments)


Link's title: "The Bubble that Knows it's a Bubble"

That is... certainly something to think about (and clever).

cogito ergo sum™ / attention is all you need™


The Internet bubble left physical artifacts behind, like thousands of miles of unlit fiber. However, that pales in comparison to the value of virtual artifacts like Apache et al. Similarly, the AI bubble's artifacts will primarily be virtual.


The author labels LLMs as "empty hype".

LLMs are inappropriately hyped, and surrounded by shady practices used to make them a reality. I understand why so many people are anti-LLM.

But empty hype? I just can't disagree more.

They are generalized approximation functions that can approximate all manner of modalities, surprisingly quickly.

That's incredibly powerful.

They can be horribly abused, their failure modes are unintuitive, using them can open entirely new classes of security vulnerabilities, and we don't have proper observability tooling to deeply understand what's going on under the hood.

But empty hype?

Maybe we'll move away from them and adopt something closer to world models, or use RL / something more like Sutton's OaK architecture, or replace backprop with something like forward-forward, but it's hard to believe HAL-style AI is going anywhere.

They are just too useful.

We have a rough draft of AI we've only seen in sci-fi. Pandora's box is open and I don't see us closing it.


I would love to reach a point where competent language models become commodities that anyone can run on modest hardware. Having one at your disposal can open up some gorgeous applications and workflows from the community. As it stands at present, though, the moats are either insurmountable or very expensive.


Paywalled.


Not for me? Never heard of this site but had no issues.


The introduction to the article is not paywalled, but the actual 2027 AI story is.


Ah.



