If I were the copyright holder of such a work, I would argue that the LLM was trained on text including my copyrighted work, and that if the system produced text a reasonable person who reads poetry would identify as that work, the burden logically shifts to the LLM's owner to prove the model didn't regurgitate a piece of text it previously ingested.
The issue isn't that a generator lets you evade copyright somehow; it doesn't. The output is not the issue. If I sit in paint and my assprint happens to perfectly duplicate a Picasso, that's unlikely to fly in court if I try to sell copies. Picasso painted it first.
The point at issue here is that some people argue the models themselves constitute a kind of giant collective copyright infringement, since in a vague sense they are simply the sum of the copyrighted works they were trained on. Those people would like to argue that distributing the models, or even using them, is mass copyright infringement. My thought experiment is a reductio ad absurdum of that reasoning.
I think a jury would side with my argument.