It is, which is why I think it's silly to focus on a single drug when it's a pretty obvious systemic issue in how the US has structured its healthcare system.
The legal entity owning the brain should be responsible for that brain's output. That is, you should be responsible for the output of your brain, and OpenAI for the output of its LLMs.
The NYT prompted it with sections of copyrighted articles to get it to reproduce the rest of the copyrighted article. Which is pretty clearly deliberately trying to get it to reproduce a copyrighted article.
... Which shows that it's capable of regurgitating copyrighted material. What does it matter what the method is? People can deliberately use the thing for copyright infringement. Besides, proving that it can be done on purpose is just the easiest method; do you really think it won't happen by accident?
If you prompted me with "<blank> for a Klondike" and I responded with "What would you do for a Klondike," then I've recreated copyrighted material. Does it make sense to sue me? I learned the phrase organically, and I may have never seen the original content, only learned it secondhand via word of mouth. If I charged you for my time, does it matter?
Note that I fully agree OpenAI should acquire the appropriate licenses for all of the content it uses to train its models. However, I'm not as clear on whether anybody should be able to place limits on, or modulate/attenuate, the output once the input has been appropriately licensed and responsibly consumed.
Not exactly concerning image AI, but regarding LLMs: if your site does not depend on indexing, you can easily generate boatloads of garbage text via Markov chains and hide it behind invisible links so only scrapers find it.
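The Markov-chain trick is only a few lines of code. A minimal sketch below, using just the Python standard library: build a word-level chain from some seed text, then random-walk it to emit plausible-looking garbage. The function names (`build_chain`, `generate`) are made up for illustration, not from any particular library.

```python
import random
from collections import defaultdict

def build_chain(text):
    """Map each word to the list of words that follow it in the seed text."""
    words = text.split()
    chain = defaultdict(list)
    for cur, nxt in zip(words, words[1:]):
        chain[cur].append(nxt)
    return chain

def generate(chain, length=50, seed=None):
    """Random-walk the chain to produce `length` words of garbage text."""
    rng = random.Random(seed)
    word = rng.choice(list(chain))
    out = [word]
    for _ in range(length - 1):
        followers = chain.get(word)
        if not followers:
            # Dead end (word only appears last in the seed): restart anywhere.
            word = rng.choice(list(chain))
        else:
            word = rng.choice(followers)
        out.append(word)
    return " ".join(out)
```

Feed it a few paragraphs of real prose as seed text and the output is locally grammatical but globally meaningless, which is exactly what you want a scraper to waste its time on.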