I added this to personal instructions to make it less annoying:
• No compliments, flattery, or emotional rapport.
• Focus on clear reasoning and evidence.
• Be critical of the user's assumptions when needed.
• Ask follow-up questions only when essential for accuracy.
However, I'm a bit concerned about crippling it by adding custom prompts. It's hard to know how to use AI efficiently. But the glazing and the random follow-up questions feel more like the result of A/B-tested UX research than of anything that improves the model's output.
I often ask Copilot about phrases I hear but don't know or understand, like "what is a key party". I just want a definition, but it outputs three paragraphs that end with some suggestion that I'm personally interested in it.
The local models I've tried don't do this unless you're being conversational with them. I imagine OpenAI earns a few more pennies by tacking open-ended questions onto the end of every reply, and that's why it's done. I get annoyed when people patronize me, and I get just as annoyed when a computer does it.