That's an interesting platform and interesting thread.
I don't believe you can reliably protect secrets that an LLM has access to, as this thread plainly shows. There are too many undetectable ways around any such guardrails.
But it can help with other, more common use cases where you want the agent to respond in certain ways or avoid certain kinds of output, purely for the sake of a good user experience.
Good luck!