This means different top-level models will get different results.
You can ask the model to tell you the prompt it used, and it will answer, but there's no way to be 100% sure it's telling you the truth!
My hunch is that it is telling the truth, though, because models are generally very good at repeating text from earlier in their context.