Forget the talk about bubbles and corrections. Can someone explain to me the rationale of investing in a product, marketing it, seeing that it drives consumers away from your product and erodes trust, and then continuing to invest at an accelerating rate? Good business sense would have driven us very far away from this point years ago. This is very deep in "because we can" territory. It's not FOMO.
> Can someone explain to me the rationale of investing in a product, marketing it, seeing that it drives consumers away from your product and erodes trust, and then continuing to invest at an accelerating rate?
Sure!
Google began investing heavily in AI (LLMs, actually) to catch up to the other frontier labs, which had already produced a product that was going to eviscerate Google Search (and therefore, Google ad revenue). They recognized this, and set about becoming a leader in the emerging field.
Is it not better to be a leader in the nascent industry that is poised to kill your profitability?
This is the same approach that Google took with smartphones. They saw Apple as a threat not because they had a product that was directly competing, but because they recognized that allowing Apple to monopolize mobile computing would put them in a position to take Google’s ad revenue — or allow them to extract rent in the form of payments to ensure Apple didn’t direct their users to a competing service. Android was not initially intended to be a revenue source, at least not most importantly. It was intended to limit the problem that Apple represented. Later, once Google had a large part of the market, they found ways to monetize the platform via both their ad network and an app store.
AI is no different. If Google does nothing, they lose. If they catch up and take the lead, they limit the size of the future threat and if all goes well, will be able to monetize their newfound market share down the road - but monetization is a problem for future Google. Today’s Google’s problem is getting the market share.
> frontier labs, which had already produced a product that was going to eviscerate Google Search (and therefore, Google ad revenue)
> If Google does nothing, they lose.
Is any of that actually true, though? In retrospect, had Google done nothing, their search product would still work. Currently it's pretty profoundly broken, at least from a functional standpoint--no idea how that impacts revenue, if at all. To me it seems like Google in particular took the bait and went after a paper tiger, and in doing so damaged their product.
Even before the recent "AI improvements", Google search was broken, ad-invaded and whatnot for us tech nerds. But for the average Joe, up until recently it was still okay, because it served the purpose of whatever normal people use search for: finding rumors about their favorite celebs, finding car-parts information, or just "buy X".
The problem for Google is that, for a good chunk of normal, non-techy people, LLM chats look like talking to a genius superintelligence, and they haven't been burned by it yet. So they trust it.
And now a good chunk of non-tech people go and ask ChatGPT instead of using Google search. They do it simply because it's less enshittified than Google search.
I wonder: is Google's AI investment a rational reaction to real competition, or something else? My strong suspicion is that it's in fact delusional beliefs held by their management--something to do with "AGI"--that drive this activity, perhaps combined with the effects of information monoculture/social isolation/groupthink. It's a simpler explanation that a very small group of people are behaving insanely than that a very large number are.
I'm honestly clueless about the reasoning behind big-tech investment in AI. To me it all just looks like another seasonal fad, like the many we've had over the last two decades. Everyone invests in AI because of FOMO.
I know the tech itself is real and people do use it. And it will certainly change the world. Yet I doubt even a fraction of the money burnt on it will ever be recouped, because of the race to the bottom.
But yeah - I'm just a random tech guy who has not built a big successful company and honestly has very little clue how to make money this way.
> I'm just a random tech guy who has not built a big successful company and honestly has very little clue how to make money this way.
Hey, me too :)
I’ve been at this for a couple of decades, though, and from what I’ve seen the key to building a “successful” company is to ride the wave of popular interest to get funding, build an effective team, and then (and only then) try to find a way to make it profitable enough to exit.
I do think “AI” (really, LLMs, and GPTs in particular) are going to have a transformative impact on a scale and at a rate we’ve never seen before - I just have zero confidence that I can accurately predict what it’s going to look like when the dust settles.
Users still googled before. Now they just move to chatbots. Regular people don't really notice the search degradation as much, and enshittification has helped Google, as revenues kept going up. Chatbots are an existential threat, since they will add ads, and that's where Google's ad revenue dies.
Did any users actually move to chatbots? By which I don't mean the 0.001% of tech nerds who buy ChatGPT subscriptions, but in aggregate, did a meaningful number of Google searchers defect to ChatGPT or other LLM services? I really doubt that. Data would be interesting, but there's a credibility problem...
Yes. People do use them and they trust them, unfortunately.
Tech nerds know what ChatGPT is, they know LLM limits somewhat, and they know it hallucinates. Normal people do not - for them it's a magical all-knowing oracle.
> People do use them and they trust them, unfortunately.
Yep, and it’s hard to communicate that to them. It’s hard to accurately describe even to someone familiar with the context.
I don’t think “trust” is the right word. Sitting here on 19 Nov 2025, I do in fact trust LLMs to reason. I don’t trust LLMs to be truthful.
If I ask for a fact, I always consider what I’d lose if that fact were wrong.
If I ask for reasoning, I provide the facts that I believe are required to make the decision. I then double-check that reasoning by inverting the prompt and comparing the output in the other direction. For more critical decisions, I make sure I use different models, from different providers, with completely separate context. If I’ve done all that, I think I can honestly say that I trust it.
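To make that concrete, here's a minimal sketch of that cross-checking loop in Python. Everything in it is hypothetical: ask() is a stand-in for whatever provider SDK you actually call, and the model names and facts are placeholders, not real identifiers.

    # Minimal sketch of the cross-checking workflow described above.
    # ask() is a hypothetical stand-in for a real provider SDK call;
    # the model names below are placeholders, not real identifiers.

    def ask(model: str, prompt: str) -> str:
        raise NotImplementedError("wire this to your provider of choice")

    FACTS = "- budget: $50k\n- deadline: March\n- team: 3 engineers"

    def cross_check(question: str) -> dict:
        # Ask in both directions, so a sycophantic model that just
        # agrees with the framing gives itself away.
        forward = f"Given these facts:\n{FACTS}\nMake the case FOR: {question}"
        inverse = f"Given these facts:\n{FACTS}\nMake the case AGAINST: {question}"
        results = {}
        # Different models, different providers, separate context:
        # each ask() call here starts a fresh conversation.
        for model in ("provider-a/model-x", "provider-b/model-y"):
            results[model] = {
                "for": ask(model, forward),
                "against": ask(model, inverse),
            }
        # A human still compares the outputs; "trust" is earned only
        # when both directions and both providers point the same way.
        return results

The code isn't the point; the point is that prompt inversion and provider diversity are cheap checks to run before extending trust.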
These days, I would describe it as “I don’t trust AI to distinguish truth”
I don’t have data for it, and would love to dig it up at some point. My head is too deep in a problem at the moment to make space for it …but I did just add it to my task list via ChatGPT :)
Anecdotally, I believe they did.
My wife is decidedly not a tech nerd, but had her own ChatGPT subscription without my input before I switched us over to a business account.
My mother is 58, and a retired executive for a “traditional” Fortune 100 company. She’s competent with MS productivity tools and the software she used for work, but has little interest outside that. She also had her own ChatGPT subscription.
Both of them were using it for at least a large subset of what they’d previously used Google for.
Gemini, ChatGPT and probably all of the others have free tiers that can be used as an enhanced web search. And they're probably better in many regards, since they do the aggregation directly. Plus regular users don't really check sources, can't really verify the trustworthiness of a website, etc, so their results were always hit or miss.
As someone who deeply dislikes using chatbots for information, I'll admit there is a lot of stuff that is easily and reliably answered by GPT.
You must know the limitations of the medium, but "at what temperature and for how long should I bake my broccoli" is so fucking annoying to search on Google.
Google has the capital to spend, and this effort needn’t succeed to be worthwhile. My point is that the scope of the potential future risk more than justifies the expense.
> and in doing so damaged their product
Only in objective terms.
The overall size of the market Google is operating in hasn’t changed, and I’m not aware of anyone positioned to provide a better alternative. Even if we assume that Google Search has gotten worse as a result of this, their traditional competitors aren’t stealing marketshare. They’re all either worse than the current state of Search, are making the same bet, or both.
This is very revisionist. While they have been catching up quickly, there was no master 4D-chess strategy here. Google was incredibly late to this game - Sergey had to come back from retirement because most of the research team had a Sarah Connor complex and couldn’t ship. The saving grace is that AdWords picked up the tab again, and the founders shook the place up when it became clear the golden goose was being cooked.
> Sergey had to come back from retirement because most of the research team had a Sarah Connor complex and couldn’t ship.
What, you don't want to ship the Torment Nexus? You're fired! We must ship a Torment Nexus, we must maintain our market share even if it means our destruction!
This makes sense, but if the goal was to avoid business failure from disruption / losing customers, then why would the companies not be behaving in ways that maximize their customers? The intermediate value theorem applies here. There is no amount of net customer loss that can be spun in a good way, and no path to recouping those lost customers. These fundamentals must be visible to the decision makers.
This has been how I've framed a lot of the expenditure despite the lack of immediate substantial new revenues. Everyone, including Google, is driven to protect current revenues from prospective disruption. And the vulnerability AI created for Google is one that other companies find worth positioning themselves to exploit, should Google fall behind and lose chunks of market share.
Yeah - imagine the LLMs don't advance that much, the agentic stuff doesn't really take off, etc.
Even in this conservative case, ChatGPT could seriously erode Google Search revenues. That alone would be a massive disruption and Google wants to ensure they end up as the Google in that scenario and not the Lycos, AltaVista, AskJeeves etc. etc.
But what Google is doing is like what Firefox did when Chrome came out. Panicking.
Panicking, and therefore making horrible design and product choices.
Google has made their main search engine output utter and complete junk. It's just terrible. If they didn't have 'web' search, I'd never be able to use it.
In almost every search for the last month, normal search results in horrible matches. Switch to web? Bam! First result.
Not web? The same perfect result might be 3 or 4 pages deep. If that.
(I am comparing web results in both cases, and ignoring the also broken 80% of the pages of AI junk.)
In an attempt to compete, they're literally driving people to use ChatGPT for search in droves.
They could compete, and do so without this panicky disaster of a response.
I didn’t see the comment, but I assume they were accusing me of generating it.
For what it’s worth, I didn’t use AI at all for it. I just tend to sound like an LLM, likely because I have ADHD and grew up very rural (most of my vocabulary and grammar from reading).
I also tend to use a lot of parenthetical phrases, semi-colons, and lists. ¯\_(ツ)_/¯
> Can someone explain to me the rationale of investing in a product, marketing it, seeing that it drives consumers away from your product and erodes trust, and then continuing to invest at an accelerating rate?
I'll take a stab at this. It's not 100% clear to me which product you're referring to, so I'll try to answer as if the product is something that already has customers, and the maker of the product is shoving AI into it. The rationale is that the group you're trying to convince that you're doing a good job is your shareholders or investors, not your actual customers. You can justify some limited customer attrition by noting that your competitors are doing the same thing, and that maybe if you shove the _right_ AI features into the product, you'll win those customers back.
I'm not saying it's a _good_ rationale, but that seems to be what's at play in many cases.
This is what I feel is the case. That is leaving a lot of customers on the table. Valve / GabeN get it. In 2025 they make a marketing point about how their hardware runs Arch and you can put whatever software you want on it. Valve is positioned to eat Microsoft's and Google's lunch if they're not careful. While chasing the fear of losing customers, they are actively losing customers.
There are at least a few stories from the 90s where companies that readily could have invested in “getting online” instead decided that it would only harm their existing business. The hype at the time was extraordinary to be sure, but after the dust settled the internet did change the shape of the world.
Nobody can really know what things will look like in 10 years, but if you have the capital to deploy and any conviction at all that this might be a sea-change moment, it seems foolish to not pursue it.
You are quoting a rhetorical question from a person with essentially the same criticism as you, and responding as if they are making the claim... why?
No I am not, you took the opposite of the intended meaning.
I'm saying the risk aspect is similar, but I take issue with equating (product) investment and gambling, since one has a potential to create, and the other just shifts money around.
The golden goose is not you or I. It is our boss, who will buy this junk for us and expect us to integrate it into our workflows or be shown the door. It is the broccoli-headed kids who don't even have to crack open CliffsNotes to shirk their academic responsibilities anymore. It is universities that are trying to "keep up" by forcing an AI prompting class as a prerequisite for most majors. These groups represent a lot of people and a lot of money.
It doesn’t have to work. It just has to sell and become entrenched enough. And by all metrics that is what is happening. A self fulfilling prophecy, no different than your org buying redundant enterprise software from all the major vendors today.
anyway i totally agree with your reasoning. one might as well ask "why is MS Teams so bad? it's bloated, slow, buggy, nasty to use from a UX pov... yet it's everywhere"
this shitware -- ms teams, llm slopguns, whatever -- never had to work, they just have to sell.
Eastman Kodak tried your implied strategy of ignoring technological developments that undermine the core product. It didn't go so well. Naturally, technology companies have learned from this and other past mistakes.
Having the shiniest toys is useless if you don't play with them.
They had the tech but didn't see the danger in it. They believed that lower-quality digital cameras would fail, and since investing in them would mean exploring a new market, they chose the safe corporate move: business as usual.
Turns out that lower quality but way more practical, and cheaper in the long run, really sells.
“One of Jobs's business rules was to never be afraid of cannibalizing yourself. 'If you don't cannibalize yourself, someone else will,' he said. So even though an iPhone might cannibalize sales of an iPod, or an iPad might cannibalize sales of a laptop, that did not deter him.”
Kodak did sell digital cameras, but they were so intent on protecting their film business that I don't think they went all in on digital, and they let the other camera companies take over.
Kodak operated in a region that was a manufacturing and technology hub until the mid-1900s. The region started to decline significantly in the 1960s. By the 1990s it was basically a ruin compared to the 1950s.
So by the time Kodak made this strategic mistake, I imagine they already would have had a hard time recruiting talent into that obviously dying region for a decade or so, and many people who were there already were actively leaving the region by that time.
I suspect that in the counterfactual where the region stayed as it was in the 1950s in terms of economic prosperity, Kodak probably could have successfully played catch up once it was clear where the game was going.
So yes, they made a strategic mistake, but they did so while simultaneously bleeding out from "brain drain" due to other, unrelated factors.
If you're talking about all of AI with your statement, I think you may need to reconcile that opinion with the fact that ChatGPT alone has almost a billion weekly users. Clearly lots of people derive enormous value from AI.
If there's something more specific and different you were referring to I'd love to hear what it is.
I’m probably an outlier: I use ChatGPT/Gemini for specific purposes, but AI summaries on e.g. Google Search or YouTube give me negative value (I never read them and they take up space).
I can't say I find them 100% useless - though I'd rather they not pop up by default - and I understand why people like them so much. They let people type in a question and get a confident and definitive answer all in natural language, which is what it seems like the average person has tried to do all along with search engines. The issue is that they think whatever it spits out is 100% true and factual, which is scary.
> Clearly lots of people derive enormous value from AI.
I don’t really see this. Lots of people like freebies, but the value aspect is less clear. AFAIK, none of these chatbots are profitable. You would not see nearly as many users if they had to actually pay for the thing.
> "It doesn't matter whether you want to be a teacher [or] a doctor. All those professions will be around, but the people who will do well in each of those professions are people who learn how to use these tools."
Bullshit. Citation very much needed. It's a shame--a shameful stain on the profession--that journalists don't respond critically to such absurd nonsense and ask the obvious question: are you fucking lying? It is absolutely not true that AI tools make doctors more effective, or teachers, or programmers. It would be very convenient for people like Pichai and Scam Altman, but that don't make it so.
And AI skeptics are waiting to see the proof in the pudding. If we have a new tool that makes hundreds of thousands of devs vastly more productive, I expect to see the results of that in new, improved software. So far, I'm just seeing more churn and more bugs. It may well be the case that in a couple years we'll see the fruits of AI productivity gains, but talk is cheap.
The proof is in the feature velocity of devs/teams that use it, and in the layoffs due to efficiency gains.
I think it's very hard to convince AI skeptics, since for some reason they feel more threatened by it than the rest. It's counterproductive and will hinder them professionally, but then it's their choice.
Without rigorous, controlled study I'm not ready to accept claims of velocity, efficiency, etc. I'm a professional software engineer, I have tried various AI tools in the workplace both for code review and development. I found personally that they were more harmful than effective. But I don't think my personal experience is really important data here. Just like I don't think yours is. What matters is whether these tools actually do something or whether instead they just make some users feel something.
The studies I've seen--and there are very few--seem to indicate the effect is more placebo than pharmacological.
Regardless, breathless claims that I'm somehow damaging my career by wondering whether these tools actually work are going to do nothing to persuade me. I'm quite secure in my career prospects, thank you kindly.
I do admit I don't much like being labeled an "AI skeptic" either. I've been following developments in machine learning for like 2 decades and I'm familiar with results in the field going back to the 1950s. You have the opportunity here to convince me, I want to believe there is some merit to this latest AI summer. But I am not seeing the evidence for it.
You say you've used AI tools for code review and development, but do you ever just use ChatGPT as a faster version of Google for things like understanding a language you aren't familiar with, finding bugs in existing code, or generating boilerplate?
Really, I only use ChatGPT and sometimes Claude Code; I haven't used these special-cased AI tools.
> You have the opportunity here to convince me, I want to believe there is some merit to this latest AI summer. But I am not seeing the evidence for it.
As I said, the evidence is in companies not hiring anymore, since they don't need as many developers as before. If you want rigorous controlled studies, you'll get it in due time. In the meantime, maybe just look into the workflows of how people are using these tools.
Re: AI skeptics: I started pushing AI in our company early this year, and one of the first questions I got was "are we doing it to reduce costs?". I fully understood, and sympathize with the fact that many engineers feel threatened and feel they are being replaced. So I clarified it's just to increase our feature velocity, which was my honest intention, since of course I'm not a monster.
I then asked this engineer to develop a feature using Bolt, and he partially managed to do it, but in the worst way possible. His approach was to spend no time on planning/architecture and to just ask the AI to do it in a few lines. When hit with bugs, he would ask the AI "to fix the bug" without even describing the bug. His reasoning was that if he had to do this prep work anyway, then why would he use AI. Nonetheless, he burned through an entire month's worth of credits in a single day.
I can't find the proper words, but there's a certain amount of dishonesty in this attitude that really turns me off. Like TurboTax sabotaging tax reform so they can rent-seek.
> If you want rigorous controlled studies you'll get it in due time.
I hope so, because the alternative is grim. But to be quite honest, I don't expect it'll happen, based on what I've seen so far. Obviously your experience is different, and you probably don't agree--which is fine. That's the great thing about science. When done properly, it transcends personal experience, "common sense", faith, and other imprecise ways of thinking. It obviates the need to agree: you have a result, and if the methodology is sound then, in the famous words of Dr. Malcolm, "well, there it is." The reasons I think we won't get results showing AI tooling meaningfully impacts worker productivity are twofold:
(1) Early results indicate it doesn't. Experiences differ of course but overall it doesn't seem like the tools are measurably moving the needle one way or the other. That could change over time.
(2) It would be extremely favorable to the interests of companies selling AI dev tools to show clearly and inarguably that the things they're selling actually do something. Quantifying this value would help them set prices. They must be analyzing this problem, but they're not publishing or otherwise communicating their findings. Why? I can only conclude it's because the findings aren't favorable.
So given these two indications, at this point in time a placebo-like effect seems most likely. That would not inspire me to sign a purchase agreement. This makes me afraid for the economy.
It's not really about optimism or pessimism; it's effect vs. no effect. Self-reported anecdotes like yours abound, but as far as I'm aware the effect isn't real. That is, it's not in fact true that if a business buys AI tools for its developers, their output will increase in some way that impacts the business meaningfully. So while you may feel more productive using AI tooling, you probably aren't.
No. If you're trying to make a causal link between some layoffs and AI tooling you need to bring the receipts. Show that the layoffs were caused by AI tooling, don't just assume it. I don't think you can, or that anyone has.
I am very much not an AI skeptic. I use AI every day for work, and it's quite clear to me that most of the layoffs of the past few years are correcting for the absurd over-hiring of the Covid era. Every software company convinced themselves that they needed like 2-3x the workforce they actually did because "the world changed". Then it became clear that the world in fact did not fundamentally change in the ways they thought.
ChatGPT just happened to come out around the same time, so we get all this misattribution.
I don't know which product you're even talking about.
If you mean AI Overview, you really need to cite the source of this claim:
> seeing that it drives consumers away from your product
Because every single source I can find claims that Google search grew in 2024[0]. HN is not a good focus group for a product that targets billions of people.
> Can someone explain to me the rationale of investing in a product, marketing it, seeing that it drives consumers away from your product and erodes trust, and then continuing to invest at an accelerating rate?
Hey now, Google Plus was more than a decade ago. I didn't like it either, but maybe it's time to move on? I think they learned their lesson.
What evidence do you have that it's driving consumers away from the product? The people who bother to say anything on the internet are the extreme dedicated minority and are often not representative of a silent majority. Unless you have access to analytics, you can't make this inference.
That's only one article, and with a narrow focus: Americans only.
U.S. adults are generally pessimistic about AI’s effect on people’s ability to think creatively and form meaningful relationships: 53% say AI will worsen people’s ability to think creatively, compared with 16% who say it will improve this....
One of the most consistent themes in the research is fear of misinformation. In an era where AI-generated content can be nearly indistinguishable from authentic material, the potential for deception is enormous, and people know it. A full 76% of Americans say they are concerned about AI tools producing false or misleading information.
That doesn’t say anything about pushing people away from using products with AI though. People are enormously negative about the effects of social media, and yet social media use is incredibly pervasive and sticky.
Researchers have found that including the words “artificial intelligence” in product marketing is a major turn-off for consumers, suggesting a growing backlash and disillusionment with the tech — and that startups trying to cram “AI” into their product are actually making a grave error.
My point is that what people say and what people do are not the same thing. It may sound self-explanatory that if people don’t trust AI, they will avoid AI products, but I’m interested in data proving this. Self-reported attitudes regarding AI are not the same as customers actively avoiding products using AI.
I agree with your observation re: what people say/do. However, you know just as well as I do that there are never studies/data on people avoiding stuff. How would you even go about proving a negative? So let's turn this around: can you show me data that confirms people are enthusiastic to buy AI-enhanced things? Data that confirms people's widespread acceptance of, or even preference for, AI-enhanced commodities?
There is no need for us random civilians to know the truth of these matters. Employees inside the company can see analytics that show whether the features are working or not.
Eh... That's only worth anything if the new version of "you" is better than the original.
I'm certain Jobs thought there was no need to point out that detail, because nobody still living¹ would be dumb enough not to understand it implicitly. Too bad that we have evidence otherwise.
1 - In the worst case, selection bias should make it true.
I'm not really sure what you're saying. He was talking specifically about the iPhone vs the iPod, and anyway, people buy inferior products all the time in tech history so there's no guarantee a better product would succeed anyway.
It's at least something people prefer. There's no way he thought people would use that rationale to take products people like (or dislike less) off the market and forcibly cannibalize it with something people don't like (or dislike more).
Good thing no one is talking about that sort of thing then. In the context of Google, either they implement LLMs themselves or another company will come and do so in order to cannibalize Google.