I thought this was true, honestly, up until I read it just now. User data is explicitly one of the 3 training sources[^1]: forced opt-ins like "feedback" let them store & train on it for 10 years[^2][^3], and tripping the safety classifier lets them store & train on it for 7 years[^2][^3].
"Specifically, we train our models using data from three sources:...[3.] Data that our users or crowd workers provide"..."
[^2]
For all products, we retain inputs and outputs for up to 2 years and trust and safety classification scores for up to 7 years if you submit a prompt that is flagged by our trust and safety classifiers as violating our UP.
Where you have opted in or provided some affirmative consent (e.g., submitting feedback or bug reports), we retain data associated with that submission for 10 years.
[^3]
"We will not use your Inputs or Outputs to train our models, unless: (1) your conversations are flagged for Trust & Safety review (in which case we may use or analyze them to improve our ability to detect and enforce our Usage Policy, including training models for use by our Trust and Safety team, consistent with Anthropic’s safety mission), or (2) you’ve explicitly reported the materials to us (for example via our feedback mechanisms), or (3) by otherwise explicitly opting in to training."
[^1] https://www.anthropic.com/legal/privacy:
"Specifically, we train our models using data from three sources:...[3.] Data that our users or crowd workers provide"..."
[^2] "For all products, we retain inputs and outputs for up to 2 years and trust and safety classification scores for up to 7 years if you submit a prompt that is flagged by our trust and safety classifiers as violating our UP.
Where you have opted in or provided some affirmative consent (e.g., submitting feedback or bug reports), we retain data associated with that submission for 10 years."
[^3] "We will not use your Inputs or Outputs to train our models, unless: (1) your conversations are flagged for Trust & Safety review (in which case we may use or analyze them to improve our ability to detect and enforce our Usage Policy, including training models for use by our Trust and Safety team, consistent with Anthropic’s safety mission), or (2) you’ve explicitly reported the materials to us (for example via our feedback mechanisms), or (3) by otherwise explicitly opting in to training."