Hacker News

This basically matches my experience in trying to get it to do the right thing. BEING VERY EXPLICIT AND ANGRY WORKS TO REINFORCE A POINT. Specifically telling it not to do a thing it would otherwise do is often necessary.

The only part that surprises me is `Output should be in markdown format`. Usually being that vague results in weird variation in the output; I'd have expected the prompt to include a formatted example for GPT to copy.
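For what it's worth, the kind of in-prompt example I mean looks something like this (a minimal sketch; the task, instructions, and example format here are invented for illustration, not taken from the article):

```python
# Hypothetical sketch: pair an explicit "do NOT" instruction with a concrete
# formatted example for the model to copy, rather than just saying
# "output should be in markdown format". No API call is made; this only
# assembles the prompt string.

def build_prompt(article_text: str) -> str:
    """Assemble a summarization prompt with explicit constraints and a sample output."""
    example_output = (
        "## Summary\n"
        "- First key point\n"
        "- Second key point\n"
    )
    return (
        "Summarize the article below.\n"
        "Output must be Markdown matching the example format exactly.\n"
        "Do NOT add commentary, apologies, or any text outside the summary.\n\n"
        "Example output:\n"
        f"{example_output}\n"
        "Article:\n"
        f"{article_text}\n"
    )

prompt = build_prompt("Some article text goes here.")
print(prompt)
```

The formatted example tends to constrain the output shape more reliably than a vague format request on its own.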




It understands most things without being given examples, in my experience. Being explicit is helpful. Being angry is likely inconsequential, though I can't say for sure since I've never felt the need, TBH. What I can say is that the bot has become spookily like a person: "someone" I conceive of as helpful, courteous, and friendly, albeit sometimes wildly wrong despite the assertive tone of its responses.

I'll probably get used to it over time, as I get a deeper sense of how it works and how it differs from real people. ATM the distinction is blurred.




