Others have replicated this behaviour. If you embed ChatGPT, people will find ways to make it say things you didn't intend it to say.
There's not really any doctoring going on, other than basic prompt injection. However, I can imagine someone accidentally tricking ChatGPT into claiming some ridiculously low priced offer without intentional prompt attacks. If you start bargaining with ChatGPT, it'll play along; it's just repeating the patterns in its training data.
There's not really any doctoring going on, other than basic prompt injection. However, I can imagine someone accidentally tricking ChatGPT into claiming some ridiculously low priced offer without intentional prompt attacks. If you start bargaining with ChatGPT, it'll play along; it's just repeating the patterns in its training data.