
I work for Google Brain. I remember meeting Brian at a conference and I have nothing but good things to say about him. That said, I think Brian is underestimating the extent to which the Brain/DeepMind merger is happening because it's what researchers want. Many of us have a strong sense that the future of ML involves models built by large teams in industry environments. My impression is that the goal of the merger is to create a better, more coordinated environment for that kind of research.


> Many of us have a strong sense that the future of ML involves models built by large teams in industry environments

The gradient of the current moment is that any approach optimized to use more data and more compute is much easier to invest in than one that can do more with less but carries a significant number of possible dead ends.

At some point this will hit diminishing returns, but until it does, this makes sense purely on a return-on-investment basis, for both research progress and business returns.


Would you say the majority (supermajority?) of Google Brain and DeepMind employees are happy about the merger?


I'm having trouble keeping Brain/Brian straight.


The first is an object that extends into a natural number of spatial dimensions. It arises in string theory.

The second is our Lord and Savior.


Follow the gourd!


Monty Python's Life of Brain


The goal of the merger is for execs to look like they are doing something to drive progress. Actual progress comes from the researchers and developers.


Well, where exactly is this progress? Where is Google's answer to GPT-4? Why weren't the 'researchers and developers' making a GPT-4 equivalent?

Turns out you sometimes need a top-down, centralised vision to execute on projects. When the goal is undefined, you can let researchers run free and explore; now it's full-on wartime, with clear goals (make GPT-5, 6, 7...).


Google is fundamentally allergic to top-down management. Most Googlers will reject any attempt to tell them what to do as wrong, because lots of ICs voting with their feet are smarter than any (Google) exec at figuring out what to do.

The last time Google got spooked by a competitor was Facebook, and they built Google Plus in response. We all know that was an utter failure. Googlers could escape that one with their egos intact because winning in "social" is just some UX junk, not hard-core engineering like ML.

It's gonna be super hard for them to come to grips with the fact that they are way behind on something that they should be good at. Plan for lots of cognitive dissonance ahead.


I don't think they are behind just because they have released less stuff. They had LaMDA way before ChatGPT, and they had multimodal models (both by DeepMind and by Google Brain) well before OpenAI.


They are behind because what they have released sucks. Have you tried Bard? It's dumb. Like you're talking to some 20th-century gimmick dumb. GPT-4 is far from perfect, but when it makes mistakes and you point them out, it understands and tries to adapt. Bard just repeats itself, saying the same stupid things like a cassette-tape answering machine.

If you ask a Googler about this, they typically assume GPT is just as stupid as Bard. Or they say something like "so GPT is just trained on more data - we can do that." As if nothing's wrong.


Bard currently uses a smaller model, as was announced before release.

> We’re releasing it initially with our lightweight model version of LaMDA. This much smaller model requires significantly less computing power, enabling us to scale to more users, allowing for more feedback. [1]

> Bard is powered by a research large language model (LLM), specifically a lightweight and optimized version of LaMDA, and will be updated with newer, more capable models over time. [2]

[1] https://blog.google/technology/ai/bard-google-ai-search-upda...

[2] https://blog.google/technology/ai/try-bard/


“We’re releasing something useless and uninteresting, so that we can get more user feedback”

Am I the only one who sees a problem here?


"We are releasing something that's too underdeveloped, because Wall Street demands we release something. We're using a smaller training set because we have to rush to market with our smaller, stupider model ASAP."


I remember the CEO of Google saying, a few months (a month?) ago already, that more capable models would be released. Either they delayed, or the model is just as bad (maybe they released it after the announcement of coding in 20 languages?).


The first link is from the CEO; it's from February.


GPT-4 and ChatGPT were developed mostly from Google papers and Facebook code/frameworks.

OpenAI just focused on making it a great product.


A car is basically 4 wheels and a combustion engine. Henry Ford just focused on making it a great product.

(In case it's not clear, I think you might be underestimating the size of the subsequent contributions)



