
> but this is a disadvantage in a competitive landscape

Or it's a unique advantage, because this stuff doesn't happen without good researchers, who may want:

1) Their name on scientific papers

2) AI to actually be open (some of them genuinely care about openness)




So far it seems to be a disadvantage as DeepMind has fallen behind OpenAI, despite their size, and to some extent even behind Anthropic.


They fell behind because they didn't have the smart guy with a new idea a few years back, and HE decided to work at a place which started as open.

Playing catch up and trying to attract talent from the hot-new-thing OpenAI requires incentives beyond lots of money. I contend actually being open helps.

I'm sure that's one reason Facebook has an open source model, scientists can care about ethics and could be attracted to openness.


> They fell behind because they didn't have the smart guy with a new idea a few years back, and HE decided to work at a place which started as open.

The "Attention Is All You Need" guys all worked at Google. Google is where they are despite having the smart guys with a new idea a few years back.

Of course, IMHO it wouldn't have helped Google if they'd kept the transformer architecture secret. They'd have fumbled it because they didn't realise what they had.


Didn't Google have the LaMDA model pretty early, which was even described as "sentient" at some point? That doesn't look "fumbled" to me.


What Google did was sit on their ass, not deigning to release anything. In the meantime, OpenAI became a $150 billion company. And Anthropic came out with Claude, and Facebook with Llama, and Mistral with their models.

Only then did Google realise there might be something to this LLM stuff - so they responded with Bard, a product so poorly received they later had to completely rebrand it. Looks like they didn't have a "sentient" model up their sleeve after all. Then the updated, rebranded model had a bunch of image generation embarrassments of its own.

Admittedly, they have recovered somewhat since then; they're second on some performance leaderboards, which is respectable.

But there was a real tortoise-and-hare situation where they thought they were so far ahead they had time for a nap, until they got overtaken. Any lead they had from inventing transformers and being the only people with TPUs has been squandered.


I have the impression they regarded generative AI as too dangerous. They never considered making PaLM, LaMDA, Chinchilla, or Imagen publicly available until the success of ChatGPT put them at a competitive disadvantage.



