That's been my experience. At some point it can't "un-learn" its mistakes because the "wrong" bits stay in its context window on every subsequent turn.
I've had some success saying "no, undo that," waiting for it to return the corrected version, and only then continuing.
Oobabooga's UI is better at this, since you can remove erroneous outputs from the context and edit your previous input to steer it in the right direction.
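A minimal sketch of that context-editing workflow, assuming a simple chat loop where the full message history is resent each turn; `generate` here is a hypothetical stand-in for whatever backend produces the next reply:

    def generate(messages: list[dict]) -> str:
        """Hypothetical backend call; returns the model's next reply."""
        raise NotImplementedError

    messages = [
        {"role": "user", "content": "Refactor this function to use a generator."},
        {"role": "assistant", "content": "...a reply that went off the rails..."},
    ]

    # Instead of appending "no, undo that" (which keeps the bad output
    # in scope), drop the erroneous reply from the history entirely...
    messages.pop()

    # ...and edit the previous input to steer it in the right direction.
    messages[-1]["content"] = (
        "Refactor this function to use a generator. "
        "Keep the original signature; do not rename variables."
    )

    # The model never sees its own mistake, so it can't anchor on it.
    reply = generate(messages)
    messages.append({"role": "assistant", "content": reply})

The point is that the bad output is gone from the context, not merely contradicted.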
Given that OpenAI mines conversations for training data, it seems to align with their interests to make you give up and start a new prompt. More abandoned prompts = more training data.