I don't think that's true at all. Think of it like setting up conversation constraints to reduce the potential pitfalls for a model. You can vastly improve the capability of just about any LLM I've used by being clear about what you specifically want considered, and what you don't want considered when solving a problem.
It'll take you much farther: you can solve your problem incrementally in smaller steps, give the model the proper context for each step, and limit what it has to consider for each branch of the problem.
After my first day with Bard, I would have agreed with you. But since then, I've found that Bard simply has a lot of variance in answer quality. Sometimes it fails for surprisingly simple questions, or hallucinates to an even worse degree than ChatGPT, but other times it gives much better answers than ChatGPT.
On the first day, it felt like 80% of the responses were in the first (fail/hallucinate) category, but over time it feels more like a 50/50 split, which makes it worth running prompts through both ChatGPT and Bard and selecting the best one. I don't know if the change is because I learnt to prompt it better, or if they improved the models based on all the user chats from the public release - perhaps both.
This is just... false. Bard is not just a little worse than gpt-4 for coding, it's more like several orders of magnitude worse. I can't imagine how you are getting superior outputs from Bard.
I'd be surprised if he can. Both accounts asserting how useful Bard is (okdood64, pverghese) have comment histories frequently defending or advocating for Google:
The Bard model (Bison) is available without region lock as part of Google Cloud Platform. In addition to being able to call it via an API, they have a similar developer UI to the OpenAI playground to interactively experiment with it.
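For a concrete picture of what calling Bison through Google Cloud looks like, here is a minimal sketch of the REST request for the `text-bison@001` `:predict` endpoint on Vertex AI. The endpoint path and parameter names (`maxOutputTokens`, `temperature`) are written from memory and the project ID is a placeholder, so check the current Google Cloud documentation before relying on them; the actual call also requires an authenticated bearer token, which is omitted here.

```python
import json

def build_predict_request(project: str, prompt: str,
                          region: str = "us-central1",
                          model: str = "text-bison@001"):
    """Return the (url, JSON body) pair for a Vertex AI :predict call.

    Only builds the request; sending it requires an OAuth2 bearer
    token (e.g. from `gcloud auth print-access-token`).
    """
    url = (f"https://{region}-aiplatform.googleapis.com/v1/"
           f"projects/{project}/locations/{region}/"
           f"publishers/google/models/{model}:predict")
    body = {
        # One instance per prompt; the response carries the completions.
        "instances": [{"prompt": prompt}],
        # Sampling parameters, analogous to the OpenAI playground knobs.
        "parameters": {"temperature": 0.2, "maxOutputTokens": 256},
    }
    return url, body

url, body = build_predict_request("my-project", "Write a haiku about APIs.")
print(url)
print(json.dumps(body, indent=2))
```

The developer UI mentioned above (the Generative AI Studio playground) issues essentially this same request under the hood, so experimenting there first is an easy way to confirm the parameter names.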
They have 100,000 employees pretending to work on the past.
They have no leadership at the top. Nobody who can steer the ship to the next land (or even anybody who has a map). Who is actively working at Alphabet with the authority to kill Google search through self-cannibalization? Absolutely nobody. They're screwed accordingly. It takes an enormous level of authority (think: Steve Jobs) and leadership to even consider intentionally putting a $200 billion sales product at risk. The trick, of course, is that it's already at great risk.
They don't know what to do, so they're particularly reactive. It has been that way for a long time, though; it's just that Google search was never under serious threat before, so failure wasn't a terminal risk (e.g. their social network efforts, which were likewise reactive).
It's somewhat similar to watching Microsoft under Ballmer and how they lacked direction, didn't know what to do, and were too reactive. You can tell when a giant entity like Google is wandering aimlessly.
But it can't be used unless I enable billing, which I am not willing to do after reading all the horror stories about people getting billed thousands overnight. I'm not willing to take the risk that I forget some script and it keeps creating charges.
Use a CC or debit card that can limit charges. Privacy.com is a generic one; there are others. Also Capital One, Bank of America, Apple Card, and maybe some others have some semblance of control over temporary CCs.
Ideally you'd be able to set a cap on the amount that can be spent in a given period.
Thanks for this! I had a temporary Cap One card on my cloud accounts. I’m going to switch them to Privacy.com ones to limit amount if I can’t find another solution.
I was just on a cruise around the UK and I couldn't access Bard from the ship's wi-fi. That surprised me for some reason. Should've checked where it thought I was ...
Do you have a source on this? Given that the UK has retained the EU GDPR as law[1] - I don't really understand why they would make it available in the UK and not the EU, seeing as they would have to comply with the same law.
This is naïve though. Regulation — especially regulation like this — has to be enforced, and there is obviously room to over- or under-interpret the text of the law on a whim, or to vary the fines. OAI knows this, and looking at the EU lately, what they're doing is wise.
Which is interesting, because if they can't comply within the EU, then how do they comply outside of it? What I mean is: if they're concerned that private data of EU citizens is somewhere in there, then it's also in there for users outside the EU. In other words, they don't comply with GDPR anyway; if that weren't the case, they could also enable it for users within the EU.
If Google gobbles up data about EU citizens then they fall under GDPR.
It doesn't matter that they don't allow EU citizens to use the result.
If our personal data is in there and they don't protect it properly, they are violating EU law. And protecting it properly means protecting it from everyone, not just EU citizens.
GDPR is likely not enforceable if you have no presence in the EU whatsoever - no assets in the EU and no money coming in from the EU.
Anything Google does with data of EU residents is subject to GDPR, even if that particular service is not offered within the EU, and it is definitely enforceable because Google has a presence in the EU, which can be (and has been) subjected to fines, seizures of assets, etc.
That’s a common belief, but it’s wrong. In principle an EU court could decide to apply the GDPR to conduct outside the EU; and in the right circumstances, a non-EU court might rule that the GDPR applies.
Choice of law is anything but simple. Think of geographic scoping of laws as a rough rule of thumb sovereign states use to avoid annoying each other, rather than as a law of nature.
Google (Deepmind) actually has the people and has developed the science to make the best AI products in the world, but unfortunately Bard seems to have been thrown together in an afternoon by an intern and then handed off to a horde of marketing people. It's not good right now.
Deepmind is one of the best scientifically, they just don't really make products. OpenAI is essentially the direct opposite of that.
Don't you worry, if there is any medium, place or mode of interaction people spend time on, advertising will eventually metastasize to it, and will keep growing until it completely devalues the activity and destroys most of the utility it provides.