My wondering was never about being abusive, rather just having a dry tone and cutting the unnecessary parts, some sort of middle ground if you will. Prompting "yo chatgeepeetee whats good lemme get this feature real quick" doesn't make sense to me, mostly because it's anthropomorphizing it, and it's the same kind of unnecessary writing as "Good morning ChatGPT, would you please help me with ..."
I guess in part I was commenting not on what you said, but on seeing people be abusive when an LLM doesn't follow instructions or fails to fulfill some expectation. I think I had some pent-up feelings about that.
> having a dry tone and cutting the unnecessary parts
That's how I try to communicate in professional settings (AI included). Our approaches might not be that different.
> seeing people be abusive when an LLM doesn't follow instructions or fails to fulfill some expectation. I think I had some pent-up feelings about that.
Oh, me too, but because people are anthropomorphizing the LLM, not because they hurt it. Indirectly, though, I agree that this behaviour can easily affect the way a person speaks to other humans.
To be fair, I do anthropomorphize LLMs. But, I also anthropomorphize, say, a kitchen knife that I accidentally scrape on something (I think "sorry, knife"). I don't reflect on this much; it's just a pleasant way to relate to my environment. What feelings do you have about people anthropomorphizing LLMs?
Anthropomorphizing might not be the right term, because it's about assigning human attributes. When I talk to my dog, for example, I don't frame it as giving the dog human attributes. In a way, talking to something is part of how I engage my relationship-management circuitry. I don't only relate to humans; I relate to everything in one way or another, and kindness is a pretty nice starting point. As I said, I don't think about this much: I might come up with something more coherent if I did.