Of all companies, Google has the largest moat:
1) Google's AI datacenters, built around TPUs, are much more efficient than anything Nvidia-based
2) Google has the crawl infrastructure and experience to continually get the freshest data
3) Google has lots of training data from users, both paid-for and volunteered
Most importantly, Google has the userbase to rule all userbases. I'd argue that for over 90% of people online, Google is their gateway to information, whether it's search, Chrome, or Android.
Not to mention the countless other popular apps Google owns. YouTube, anyone?
They're also the best-positioned company to profit from cheap AI; their ads network is a behemoth.
So yeah, add that up with the compute, the data, and the talent, and it’s pretty clear that Google is not a force to dismiss.
If anything I think DeepSeek is great news for Google.
I already have. Never thought I would, but Google search results are literally unusable for me.
Not to mention, LLMs are way better at synthesizing multiple sources into a coherent response. I end up asking an LLM first, then searching only as secondary research.
> Google search results are literally unusable for me
Totally in the same boat. Information is just much harder to find and the friction is higher. I'd rather deal with the occasional hallucination than with the utterly enshittified SERP experience.
That's only a problem if you presume that Google cannot figure out a way to monetize being the second brain you offload a lot of cognitive tasks to. Hey Google, I'm hungry. OK, how does pizza sound? Great, make it so. OK, sending an order to pizza-company-that-paid-Google.
They stand to tap into something far more powerful than advertising if they can position themselves as your agent.
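To make that concrete, here's a toy sketch of how "position themselves as your agent" turns into a new kind of ad auction: the agent ranks fulfilment partners, and sponsorship quietly becomes a ranking signal. All names, fields, and weights below are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class Vendor:
    name: str
    user_rating: float      # 0.0 - 5.0, what the user actually cares about
    sponsorship_bid: float  # what the vendor pays the platform per order

def pick_vendor(vendors: list[Vendor], sponsorship_weight: float = 0.5) -> Vendor:
    """Rank vendors by a blend of quality and what they pay the platform.

    With sponsorship_weight=0 this is a pure recommendation engine;
    as the weight grows, it quietly becomes an ad auction.
    """
    def score(v: Vendor) -> float:
        return (1 - sponsorship_weight) * v.user_rating + sponsorship_weight * v.sponsorship_bid
    return max(vendors, key=score)

if __name__ == "__main__":
    vendors = [
        Vendor("great-local-pizzeria", user_rating=4.8, sponsorship_bid=0.0),
        Vendor("pizza-company-that-paid", user_rating=3.9, sponsorship_bid=2.5),
    ]
    # "Hey Google, I'm hungry" -> the agent decides who gets the order.
    print(pick_vendor(vendors).name)  # -> pizza-company-that-paid
```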
Because they haven't started the enshittification of Gemini chat results yet, where you have to wade through paid ad spam, similar to listening to politicians talk.
That's the real advantage of the open DeepSeek weights. They cannot enshittify those; you can run them locally, even if only as an old snapshot.
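For what it's worth, "run it locally" really is a few lines these days. A minimal sketch using Hugging Face transformers and one of the smaller distilled R1 checkpoints; the model id is an assumption about what's practical on a single GPU, and the full 671B-parameter weights need far more hardware:

```python
# Assumes: pip install transformers accelerate torch, plus enough
# VRAM/RAM for a ~7B-parameter model. The model id is one of the open
# DeepSeek-R1 distillations available at the time of writing.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "deepseek-ai/DeepSeek-R1-Distill-Qwen-7B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Why are open weights hard to enshittify?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```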
It ironically seems like a very similar market to internet search. There was no moat there either, other than the capital needed to bankroll a better search engine.
A lot of these AI companies will eventually fail (not because their models will be significantly worse but because of failure to commercialize), the market will consolidate with only a couple of players (maybe two in the US, one in China and maybe one in Russia). And once that happens the idea of raising enough capital and building a competitive AI company will seem impossible. Exactly like what transpired with internet search after Google won most of the market.
Oof, no -- it's quite the opposite, and it may well lead to Google's collapse in the future.
Holding exabytes of data, processed on commodity hardware to enable internet-wide search, all of it man-in-the-middle monetised by an ad business, created a tremendous moat. Entering that market is limited to tech multinationals, and even they would have to deliver a much superior experience to overcome it. To perform a Google search, you need Google-sized data centres.
Here we have exactly the opposite dynamics: high-quality search results (/prompt answers) are, as of now, incredibly commoditized, and accessible at inference time to any person who has $25k. That's going to be <= $10k soon.
And innovation in the space has also gone from needing >$1B to <=$50M.
A higher quality search experience is available now at absolutely trivial prices.
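For a rough sense of what that $25k buys, here's a back-of-envelope sketch. It counts weight memory only, ignoring KV cache and bandwidth, and assumes a DeepSeek-V3/R1-scale model of ~671B parameters:

```python
# Memory needed just to hold the weights at various quantisation levels.
PARAMS = 671e9  # approximate parameter count of DeepSeek-V3/R1

for bits in (16, 8, 4):
    gib = PARAMS * bits / 8 / 2**30
    print(f"{bits}-bit weights: ~{gib:,.0f} GiB")

# 16-bit: ~1,250 GiB -> multi-node territory
#  8-bit:   ~625 GiB -> a beefy multi-GPU or big-RAM server
#  4-bit:   ~313 GiB -> reachable with a high-memory workstation/server
```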
That's only because LLMs haven't been a target until now. Search worked great back before everything became algorithmically optimised to high hell. Over time, the quality of information degrades because as metric manipulation becomes more effective, every quality signal becomes weaker.
Right now, automated knowledge gathering absolutely wipes the floor with automated bias. Cloudflare has an AI blocker, but it still can't stop residential proxies with suitably configured crawlers. The technology for LLM crawling/training is still mostly opaque, even to engineers, so no SEO wranglers have managed to game the training-data filters successfully. All LLMs have access to the same dataset: the internet.
Once you:
1. Publicly reveal how training data is pre-processed
2. Roll out a reputation score that makes it hard for bots to operate
3. Begin training on non-public data, such as synthetic datasets
4. Give manipulated data a few more years to accumulate and find its way into training data

...then the same arms race that degraded search results will play out against LLM training data too. (A toy sketch of the kind of gameable filter involved follows below.)
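The heuristics in this toy illustration are invented, but they're in the spirit of published web-crawl cleaning rules (C4/Gopher-style). The point: the moment thresholds like these are public, content farms can tune spam to pass them.

```python
import re

def passes_quality_filter(text: str) -> bool:
    """Toy pre-processing filter; every threshold here is invented."""
    words = text.split()
    if len(words) < 50:  # too short to carry real information
        return False
    if len(set(words)) / len(words) < 0.3:  # repetitive / keyword-stuffed
        return False
    sentences = [s for s in re.split(r"[.!?]", text) if s.strip()]
    mean_len = len(words) / max(1, len(sentences))
    if not 5 <= mean_len <= 60:  # unnatural average sentence length
        return False
    return True

# A spammer who knows these rules can generate junk that passes every
# check: pad to 50+ words, vary the vocabulary, keep sentences mid-length.
```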
In fairness, it's at least a $5.6M moat at the moment, which is not exactly "no moat," but it is demonstrably not an insurmountable one, and it might become shallower yet with time.