"The affected employees were data scientists, it said."
Yeah, seems likely. The methods they used for the voice assistant are probably no longer relevant, so the data science parts of the voice assistant probably get reduced.
However, this part is strange:
"The document also noted that some of the affected employees had been transferred from the Bard team to the Assistant team last Friday and received an invite to a meeting about their role just hours later."
Had they already been pushed off the Bard team? Or maybe that's why these roles were ended: to force them to go back to Bard?
Or: they wanted to lay off people from both teams, but laying people off from your fledgling product is seen as a bad move, so you transfer them to the dying product and lay them off from there, and no headline will read "Google lays off engineers working on Bard due to it being a colossal failure."
It's pretty stunning that no one has hooked their voice assistant up to an LLM yet. Alexa, Siri, Google, etc. are all trash that could easily be improved 1000x by leveraging existing LLM technologies.
I believe it's because running an LLM is at least an order of magnitude more expensive than running a basic voice-assistant ML model, especially when anyone who has purchased a Google Nest Mini can use it, for free, forever.
run "set timer" and "what's the weather" and "turn on the lights" through some regexes as a first pass and save the LLM for the complicated stuff. actually, with some caching they can probably cut costs pretty heavily there too.
I’m guessing they are already doing this. Adding an LLM means adding a very expensive thing to the long tail of queries, which likely cannot be cached.
And when you consider these voice assistants already feel at risk for further investment due to subpar results, it makes sense. Bit of a catch-22, though, as many of us have given up on expecting much out of our existing ones.
But a lot of the questions/responses could be trivially cached. No need to run an expensive LLM every time for the same basic "how are you today?" prompts; it only has to be computed once.
Caching static requests alone is hard enough. With all the ways you can phrase that question, welcome to the most complicated caching backend ever. Caching exact matches also wouldn't help much, for the same reason.
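To make the difficulty concrete: since exact string matches won't cover all the phrasings, a cache would have to key on meaning. A toy version (purely a sketch, assuming some embed() function that returns unit-length sentence vectors) would key the cache on embeddings and accept anything within a similarity threshold:

    import numpy as np

    # Toy semantic cache: store (embedding, response) pairs and reuse a
    # cached answer when a new query's embedding is close enough.
    # Vectors are assumed unit-length, so the dot product below is
    # cosine similarity.
    class SemanticCache:
        def __init__(self, threshold: float = 0.92):
            self.entries: list[tuple[np.ndarray, str]] = []
            self.threshold = threshold

        def get(self, query_vec: np.ndarray) -> str | None:
            for vec, response in self.entries:
                if float(vec @ query_vec) >= self.threshold:
                    return response
            return None

        def put(self, query_vec: np.ndarray, response: str) -> None:
            self.entries.append((query_vec, response))

A brute-force scan like this wouldn't scale (a real system would need an approximate-nearest-neighbor index), and the failure mode of "close" queries with different correct answers is exactly why this is harder than it sounds.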
Then you’re kind of defeating the purpose of an LLM.
Fixed responses for common queries are what we have now.
Not to mention that LLMs tend to be very wordy right now. I’d hate to wait 20 seconds to hear my phone say, “As a voice assistant I’m not aware of the exact menu of the Thai restaurant on 2nd, but I have opened a Google search for it and found the following results.”
That would be great. Everyone expects much more from "AI" (and I, at least, consider voice assistants an early AI prototype), and it would be a real improvement to have voice assistants understand what we ask better (like ChatGPT does, and perhaps Bard).
They're fully committed until they're not. That's what sucks about Google and that culture of always buying a fancy domain name and building a new business around it. They were just as ecstatic about their Siri alternative, and I wouldn't be surprised if they ditched Bard for whatever the next fancy new thing is.
It is okay for products to be lame and underwhelming, provided there are users and use cases for them.