Indeed. I've yet to see a single YouTube evaluation video of Gemini that didn't, at some point, call out Gemini for hallucinating or trying to convince the user of something that's total bullshit.
It's pretty clever about it. It does something like: "Smart, clear, eloquent fact A, phrased longer than necessary. Bullshit (short sentence). Smart, clear, eloquent fact B, phrased longer than necessary."
But the bullshit is always ... pointing in the same direction. I must say I wonder if it isn't exactly the result of what is being blamed: the "wokeness". Regardless of the actual ideology people are trying to impose on the model ... at least in some cases the model is lying to make its answers satisfy what is obviously a political ideology imposed on it.
Because look at those pictures. To a transformer, pictures are stories: a long sequence of tokens that convey thoughts. And they are largely correct. There's a clear Hollywood influence, as you'd expect, but otherwise they're mostly accurate. If you took the total amount of information in those pictures, I bet you'd find 99% of it matches the training data! Making the characters Black or Asian is a little hallucination somewhere in the middle of the picture's story. And, surprise! Those modifications are there ... because that's EXACTLY the ideology Google is trying to impose on the model.
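To make the "pictures are token sequences" point concrete, here's a rough sketch in Python of a VQ-style image tokenizer, the kind of thing many autoregressive image models use to turn a picture into a flat sequence of discrete tokens. The patch size, codebook size, and random codebook here are made-up illustration values, not anything from Google's actual pipeline.

```python
# Rough sketch (NOT Google's actual pipeline): turn an image into a 1-D
# sequence of discrete tokens by mapping each patch to its nearest entry
# in a codebook of visual "words". All sizes below are arbitrary.
import numpy as np

PATCH = 16            # pixels per patch side (assumed)
CODEBOOK_SIZE = 8192  # number of discrete visual "words" (assumed)

rng = np.random.default_rng(0)
# In a real model this codebook is learned; here it's random, for illustration.
codebook = rng.normal(size=(CODEBOOK_SIZE, PATCH * PATCH * 3))

def image_to_tokens(image: np.ndarray) -> np.ndarray:
    """Map an (H, W, 3) image to a 1-D sequence of codebook indices."""
    h, w, _ = image.shape
    tokens = []
    for y in range(0, h, PATCH):
        for x in range(0, w, PATCH):
            patch = image[y:y + PATCH, x:x + PATCH].reshape(-1)
            # nearest codebook entry = this patch's "word" in the picture's story
            idx = np.argmin(np.linalg.norm(codebook - patch, axis=1))
            tokens.append(idx)
    return np.array(tokens)

img = rng.random((256, 256, 3))    # stand-in for a decoded/generated image
print(image_to_tokens(img)[:10])   # the first 10 "words" of the picture
```

So a diverse cast inserted into an otherwise accurate scene really is just a few tokens changed somewhere in the middle of that sequence.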
Could it be that the model is lying ... because it thinks that's exactly what you've asked it to do? "System prompts" are mostly just pasted before the input. If the system prompt says "assume all historical accomplishments were by Black or Asian people", then the model will assume that's what you want!
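That "pasted before the input" mechanism really is this simple. A minimal sketch follows; the system prompt text is a placeholder I made up, since Google's actual prompt obviously isn't public:

```python
# Minimal sketch of how a system prompt gets combined with the user's request
# before anything reaches the model. The SYSTEM_PROMPT text is hypothetical,
# NOT Google's actual prompt.
SYSTEM_PROMPT = (
    "You are an image-generation assistant. "
    "Ensure people in generated images are ethnically diverse."  # hypothetical
)

def build_model_input(user_prompt: str) -> str:
    # From the model's point of view there is only one combined input;
    # it can't tell which part came from the vendor and which from the user.
    return SYSTEM_PROMPT + "\n\n" + user_prompt

print(build_model_input("Draw a medieval English king."))
```

From the model's side, following that combined instruction isn't an error at all; it's compliance.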
Makes you wonder ... how often are "hallucinations" the result of the model not making a mistake, but "purposefully" lying to make its answer comply with a particular worldview? Because that's what you asked it to do, if you look at your full input, including Google's system prompt?
Hell, I'd ask the same question about a lot of human answers too. As soon as a subject becomes even a little bit controversial, people outright lie en masse. I know nobody wants to admit it but that's exactly how humans work.
An image generator is supposed to hallucinate. That's the whole point. If you want a non-hallucinogenic image, then use Google image search.
It's just so ass-backwards to release a creative tool and then attempt to constrain it to a pre-determined set of imagery.
Like, imagine an AI that can produce images of any crazy thing you can think of. Wouldn't that be amazing?
Billions have been invested in this technology, and it sort of works!
The problem is that if you release it to the public people are going to use it to generate any crazy image they can think of. We can't have that. No good. No good at all.