
I tried this set of prompts to get ChatGPT to generate a diet plan with close-to-impossible constraints. Its behaviour is interesting: it generates a response that ignores some of the constraints. When corrected, it admits the mistake, but then makes it again when offering a correction.



Except in the cases where it has been specifically instructed to, ChatGPT does not seem to acknowledge that it can't do something. I was incredibly impressed by its ability to write a web scraping script for a specific website, but when it reached its limits it would just write what it imagined the correct solution to be.


Indeed. The impression it gives is that it's fudging things. I'm really interested in this because it will affect the credibility it earns and the confidence it inspires in end users.



