As a long-time AI+HCI person, I have mixed feelings about "AI", but just last night I was remarking to colleagues/friends that even I have mostly stopped clicking through from Google searches. The "AI" summary now usually plagiarizes a good enough answer.
I'm sure Google knows this, and also knows that many of these "AI" answers wouldn't pass any prior standard of copyright fair use.
I suspect Google were kinda "forced" into it by the sudden popularity of OpenAI-Microsoft (who have fewer ethical qualms) and the desire to keep feeding their gazillion-dollar machine rather than have it wither and become a has-been.
"If we don't do it, everyone else will anyway, and we'll be less evil with that power than those guys." Usually that's just a convenient selfish rationalization, but this time it might actually be true.
Still, Google is currently ripping off and screwing over the Web, in a way that they themselves knew was wrong as recently as a few years ago, pre-ChatGPT.
Google News was definitely doing this level of "summary" before ChatGPT. I don't think OpenAI-MS have fewer ethical qualms; Google just had more recent memories of the negative consequences.