"Traditional" Google searches can give you wildly inaccurate information too. It's up to the user to vet the sources and think critically to distinguish what's accurate or not. Bing's new chatbot is no different.
I hope this small but very vocal group of people does not compromise the progress of AI development. It feels much like the traditional media lobbyists when the Internet and world wide web were first taking off.
How exactly can I tell whether the chatbot is hallucinating without actually going into the search result sources (at which point the bot becomes less useful)? It hallucinates what the sources say, attributing claims to them that they don't make.
At least with humans I can gauge intent. They tell me how confident they are, and I can check how authoritative their sources are. Realistically, people tend to operate in good faith; experts don't usually intend to lie. The bot, by contrast, has no intent at all.
I think pretending that AI development can only occur in productionized environments is a bit naive. It's not as if LLM research isn't already happening. It's perfectly fine to keep a model in the lab if releasing it could have catastrophic consequences.
These models are very impressive, but the issue (imo) is that lay people without an ML background see how plausibly human the output is and infer that there must be some plausibly human intelligence behind it, with a plausibly human learning mechanism. If your new hire at work made the kinds of mistakes ChatGPT does, you'd expect them to be up to speed in a couple of weeks. But ChatGPT really isn't human-like: removing inaccurate output isn't just a matter of correcting it a few times. Its learning process is genuinely different, and it doesn't understand things the way we do.
Traditional Google searches are a take-it-or-leave-it situation. The result depends on your interpretation of the sources Google provides, so you already expect that any given source may be misleading or inaccurate.
On the other hand, I don't expect an inaccurate and misleading answer from something I was told to ask directly, and which provides no sources of its own.
Conflating the expectations we have of traditional search results with the output of a supposedly helpful chatbot is wildly inappropriate.