I think the question might come down to whether Grok is a "tool" like a paintbrush or Photoshop, or if Grok is some kind of agent of creation, like an intern. If I ask an art intern to make a picture of CSAM and he does it, who did wrong?
If Photoshop had a "Create CSAM" button and the user clicked it, who did wrong?
I think a court is going to step in and help answer these questions sooner rather than later.
Normalizing AI as human-equivalent would make the AI itself legally culpable for its own actions, rather than its creators or its users, and, like a person who learned by reading, not guilty of copyright infringement for having been trained on proprietary data without consent.
I happen to agree with you that the blame should be shared, but we have a lot of people in this thread saying "You can't blame X or Grok at all because it's a mere tool."