Hacker News
Why Google failed to make GPT-3 (latent.space)
41 points by bookofjoe on March 26, 2024 | hide | past | favorite | 40 comments


So far, Google's missed opportunity to create GPT-3 doesn't feel like much of a setback. They're at least neck-and-neck with GPT-4, and in some areas (e.g. the million-token context window) clearly ahead.


The problem is that Google has been positioning itself as the leader in AI for over a decade, and now that powerful AI is starting to become a reality, Google seems completely blindsided. They have no idea how to capitalize on this development.

Google's problem isn't lack of engineering talent, it's lack of leadership. If Google's board of directors gets their act together and finds new leadership, Google could yet harness what remains of their engineering talent and stay in the game, or even pull ahead.

A lot of the pieces are there for Google to succeed. They still have top researchers and the institutional skill to run AI at scale. What they lack is clarity of vision. Under the current management, Google is following the paths of Xerox and Kodak.


I think their problem is that it’s not clear how they avoid cannibalizing their core revenue model if this replaces some percentage of search (a large percentage, in my case).


It's surprising to me that Sundar hasn't been fired after missing so badly on AI (along with a host of other misses in his tenure). Maybe it's coming, but I really don't see what the Google board sees in him.


Sundar is to Google what Ballmer was to Microsoft.

If the board wants to shift focus from high confidence near/medium term revenue they likely need a new leader.


Since Sundar became CEO in August 2015, Google's market cap has gone from ~$350B to ~$2T. Why do you think he's an ineffective CEO?

That's ~6x returns!
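A quick back-of-the-envelope check on that "6x" claim, using only the approximate figures cited above (the exact market caps and dates are rounded, so treat the output as ballpark):

```python
# Rough check of the parent's numbers: ~$350B in Aug 2015 -> ~$2T in Mar 2024.
start_cap, end_cap = 350, 2000   # billions of USD, approximate
years = 8.6                      # Aug 2015 to Mar 2024

multiple = end_cap / start_cap
cagr = multiple ** (1 / years) - 1

print(f"{multiple:.1f}x")        # -> 5.7x, so "6x" is roughly right
print(f"{cagr:.1%} annualized")  # -> 22.5% per year
```

That ~22% annualized return is the number to compare against competitors or an index, which is what the replies below argue about.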


Check the competitors - why didn't Google grow as fast as Microsoft or Nvidia? So yeah, that's ineffective. Even you could probably have grown Google as much as Sundar did, because he was essentially sitting on a gold mine found by the Google founders.


So, he consistently delivered higher returns than the market, but didn't grow as fast as the 2 fastest companies?


Oh please. I don’t think I need to explain the flaw in this logic.


You took the time to type this out, so why don't you explain further?


[flagged]


Perhaps, but there are other indications that "deranged activists" are not in charge:

AI researcher Timnit Gebru resigns from Google https://news.ycombinator.com/item?id=25292386

Google to pause Gemini image generation of people after issues https://news.ycombinator.com/item?id=39465250

You and I may want a totally unfiltered AI tool, but I don't expect a company with billions of users of all ages and situations to release that tool without guardrails.


The woke nonsense isn't the source of the problem, it's a symptom of a deeper problem.

The Gemini image model itself is fine. Despite what some people want to believe, it wasn't trained to be anti-white. The problem was that some people in high-level management positions decided they should insert DEI stuff into the prompt without asking or telling the users, and they forced the engineers to implement this without thinking through the obvious consequences, or even performing basic testing.

A competently managed company could still make reasonable efforts to promote DEI (or not), but it would look nothing like the nonsensical images that Gemini was putting out on launch. Jamming DEI in sideways where it doesn't belong and then having it blow up in their face is a symptom of deeper problems at the top. The DEI is just exacerbating the lack of clarity, which is a much bigger problem.

Fixing Gemini image generation is easy: just undo the stupid prompt insertions. Fixing the chain of decision-making authority that led to those prompt insertions is difficult. They need to restructure management.


It sounds like we're saying the same thing?

To stop the prompts being messed with, you have to get rid of a deep-set belief system that's grown within Google, and within the culture as a whole at this point. If Google hired a person to start cutting the fat and getting rid of those people, there's gonna be a torrent of lawsuits claiming discriminatory termination, bad press framing Google as the next Twitter/X, etc.

It would be simple technologically, but it's a cultural issue. Which is hard to control when steering a ship as big as Google. For starters they have to get rid of Sundar and put in someone with an actual opinion on anything other than optimizing to grow the stock at any cost. Which I imagine shareholders aren't interested in!


AI is more than managing prompts and Google's arsenal of ML talent is sky high. I recommend decoupling AI's sociopolitical journey from the underlying technologies. You are going to miss the forest for the trees. I look towards market adoption as the critical milestones, but those figures are closely guarded today. For example, Microsoft baked Copilot into many of its products but which ones have high user engagement and long term adoption?


They offered Gemini Advanced for 20 bucks a month. But the performance there is maybe neck and neck with GPT-3.5.

They don't offer the long-context Gemini model to people paying money... They just expect you to shell out $240 a year for a GPT-3.5-class model.

It really feels like they're trying to sell to people who don't know how to evaluate LLMs, and they're not ready to offer anything that competes with GPT-4. At least not at the GPT-4 price point.

I feel that this indicates a leadership that either doesn't understand what it means to deliver value, or does not have a feasible opportunity to deliver value.

It feels like they're BSing the market without knowing what they're doing.


Is the "million token context window" available to anyone outside of a few enterprise customers?

Will it be affordable?

Gemini 1.5 still lags behind GPT-4 for most of my programming related questions. In fact, Claude-3 is a better competitor.

I'm skeptical of Google - all of their "breakthroughs" seem to be unreleased research. Meanwhile, the rest of the world passes them by.


i believe it is generally available in aistudio today, and jeff dean just rolled out limited api access yesterday. some folks in the latent space community already reported getting access so i think it's a pretty broad rollout.

i've not read any reports on pricing, which obviously would be the main constraint on using this thing


It seems reasonable that if Google had made GPT-3 back when they had their own hardware, the biggest datasets, the most money, the best people, and had invented a lot of the science, then everyone else might have decided not to compete, and Google wouldn't merely be neck-and-neck with a young startup.


they have definitely caught up and then some. still feel like there are interesting organizational lessons to be learned here, less about google specifically and more about how to take research bets at any company, especially a large company, especially a company accused of being too comfortable/egalitarian


To be clear, the best available version of Gemini has barely caught up with what OpenAI released over a year ago.

Google's a year behind. They haven't really caught up.


It is curious that Google was able to make moonshots in areas outside their direct expertise (balloons, glucose-reading contact lenses, etc.) but maybe not to the same degree in AI.


i mean, the order of magnitude of money going into those things was probably pocket change, whereas in AI there were real billions being horse-traded. not the same stakes.


Click bait title, but there is a lot of interesting stuff here.


as a content creator who mixes reach and professional standards i mostly just subscribe to the veritasium school of ethics here https://m.youtube.com/watch?v=S2xHZPH5Sng


what would your title be?


"David Luan on his struggles to champion LLMs at Google"

"We tried to interview David Luan about Adept but he ran out of Brain Credits"


I assume because it would directly compete with search, but at the same time kill the source of much of the knowledge it needs to train on (people stop making websites cause Google ain't sending them traffic).

So they never made a product.

Now they can say they didn't kill the web, they can point to someone else.


sure, at a high level, but the podcast contains actual practical reasons why even the people who wanted to do it inside google couldn't


> during my year where I led the Google LM effort and I was one of the brain leads, you know, it became really clear why. At the time, there was a thing called the Brain Credit Marketplace. Everyone's assigned a credit. So if you have a credit, you get to buy N chips according to supply and demand. So if you want to go do a giant job, you had to convince like 19 or 20 of your colleagues not to do work

wat

Can anyone confirm this? Surely you can just bill all the expenses to your project and get it signed off by someone instead of collecting credits from individuals, right?


There are only so many TPUs available and they're kept at 100% utilization.

Google has had complex "marketplaces" around company resources for years: https://www.youtube.com/watch?v=3t6L-FlfeaI

It's a cultural value, like playing board games.


Marketplaces for resources are great. I was questioning the use of per-person credits as opposed to per-project credits. The former seems ridiculous, especially if everyone gets the same amount.
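For illustration only (the real Brain Credit Marketplace internals aren't public, and all numbers here are hypothetical): a toy sketch of why per-person credits turn a large training job into an N-way coordination problem, while a per-project budget makes it a single approval.

```python
# Toy model of per-person credits vs. a per-project budget.
# All quantities are made up; chosen so the numbers match the
# "convince 19 or 20 colleagues" anecdote quoted above.

def colleagues_needed(job_chips, credits_per_person, chip_price):
    """With per-person credits, a big job needs pooled credits from
    this many people in total (including the person running the job)."""
    chips_per_person = credits_per_person // chip_price
    # Ceiling division: every extra chip needs another colleague's allocation.
    return -(-job_chips // chips_per_person)

def project_can_run(job_chips, project_budget, chip_price):
    """With a per-project budget, the same job is one yes/no decision."""
    return job_chips * chip_price <= project_budget

# Hypothetical giant job: 1000 chips, 50 credits per person, 1 credit/chip.
print(colleagues_needed(1000, 50, 1))   # -> 20 people must pool credits
print(project_can_run(1000, 1000, 1))   # -> True, one sign-off suffices
```

Same total resource pool in both cases; the difference is purely how the allocation authority is partitioned, which is the friction the quote describes.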


The evidence indicates that the TPU marketplace caused a ton of friction and hampered Google Brain’s ability to innovate. They got merged with Deepmind after all.

Google’s golden years / Bell Labs years came from plumbing ads monopoly proceeds into blue sky projects. Contrast with Bytedance / Tiktok, which ruthlessly out-competed Facebook etc in the marketplace for attention. Googlers don’t want to actually build a good product and compete for users, they just want what suffices to feed their personal nerdy interests.


I think this googler-made meme answers your question: https://m.youtube.com/watch?v=3t6L-FlfeaI


Is it maybe because it was impossible to buy as many AI chips as you wanted?


i mean, he was there, and seems to be pretty confident any google brainer knows it


one word.

antitrust.

Google already made transformers, other AI research, and other products but famously never commercialised them, and they are already seen in a bad light.

A competitor must emerge (e.g. OpenAI) so that Google can compete and argue they aren't a monopoly; otherwise they would be the first target to get broken up.

I believe Google was more than capable of making a GPT-3; most forget Google already had DeepMind, but Google AI, Google Brain, and DeepMind were separate (until they merged last year, unsurprisingly).

So they were most likely both complacent and, most of all, didn't want to spark an investigation into a breakup of Google / Alphabet's businesses.


The title wasn't an open question directed at you, the link answers this question with references and quotes from insiders.


Through only one person?

Nope, what's said in there isn't enough; it only shows, at an implementation level, why they failed to do this.

We already know Google had multiple research labs and the technical papers and had no direction to pull a GPT-3 off (hence why they merged last year as I said).

So it's a moot point about insiders, the tech, etc, it's really more than that.


The days of megacorporations getting broken up seem to have been over for quite some time. Governments are afraid to kill golden geese, markets react quickly, and nobody wants to be seen as the guy who tanked literally everyone's pensions.

Or maybe enough bribes at right places, who knows.


CEOs who are product managers, were McKinsey consultants, and did an "MBA" are the worst people to take over an iconic tech startup.



