
That's in contrast to what OpenAI's David Luan said in "Why Google couldn't make GPT-3" (https://www.latent.space/p/adept):

  And it turned out the whole time that they just couldn't get critical mass.
  So during my year where I led the Google LM effort and I was one of the
  brain leads, you know, it became really clear why. At the time, there was a
  thing called the Brain Credit Marketplace. Everyone's assigned a credit. So
  if you have a credit, you get to buy end chips according to supply and
  demand. So if you want to go do a giant job, you had to convince like 19 or
  20 of your colleagues not to do work. And if that's how it works, it's
  really hard to get that bottom up critical mass to go scale these things.
  And the team at Google were fighting valiantly, but we were able to beat
  them simply because we took big swings and we focused.
The whole episode is very interesting.



Yes. Hence the past tense. There are more reasons why they fell so badly behind, from bureaucracy, to eye-wateringly decrepit, overengineered, legacy-ridden training code, to slow and hard-to-debug AI infra, to the deliberate forking and siloing of critical projects, etc. Let's just say Google is now very far from where it was a decade ago. I'm mildly surprised they released anything competitive at all. I'm not surprised that they failed to beat OpenAI (and therefore, ironically, Microsoft - how do you like them turntables?).


Thanks for sharing it!



