
They are behind because what they have released sucks. Have you tried Bard? It's dumb. Like you're talking to some 20th-century gimmick dumb. GPT-4 is far from perfect, but when it makes mistakes and you point them out, it understands and tries to adapt. Bard just repeats itself, saying the same stupid things like a cassette-tape answering machine.

If you ask a Googler about this, they typically assume GPT is just as stupid as Bard, or say something like "so GPT is just trained on more data - we can do that." As if nothing's wrong.



Bard uses a smaller model currently, which was announced before release.

> We’re releasing it initially with our lightweight model version of LaMDA. This much smaller model requires significantly less computing power, enabling us to scale to more users, allowing for more feedback. [1]

> Bard is powered by a research large language model (LLM), specifically a lightweight and optimized version of LaMDA, and will be updated with newer, more capable models over time. [2]

[1] https://blog.google/technology/ai/bard-google-ai-search-upda...

[2] https://blog.google/technology/ai/try-bard/


"We're releasing something useless and uninteresting, so that we can get more user feedback."

Am I the only one who sees a problem here?


"We are releasing something that's too underdeveloped, because Wall Street demands we release something. We're using a smaller training set because we have to rush to market with our smaller, stupider model ASAP."


I remember the CEO of Google saying, a month or a few months ago, that more capable models would be released. Either they delayed, or the new model is just as bad (maybe they released it after announcing coding in 20 languages?).


The first link is from the CEO; it's from February.



