Good point. If I ever want a random summary of an event in a fantasy book I've already read, I'll be sure to stick to ChatGPT.
On a completely unrelated point, why do people's tests of LLMs seem designed to make me think that people are the dumber ones? Are the LLMs suggesting these questions as part of their plot?
Great. And if I want your opinion on how smart it is to be a bit silly with LLMs when I’m not even attempting to test them systematically, I’ll be sure to ask you.