That argument might work for content that serves a purely informational purpose, such as books teaching the basics of a programming language, but it doesn't work for art (e.g. works of fiction). Most of the potential for a non-superficial reading of a work relies on being able to trust that an author made a conscious effort to convey something through it, and that this something can be a non-obvious perspective on the world, one that differs from the reader's own. AI-generated content has no such intent behind it, so you are effectively limited to a superficial reading. And if one were to insist on assigning such intent to an AI, then at most you would have one "author" per model, an author with no interesting perspectives to offer, only those deemed acceptable in the culture of whatever group of people developed it: no perspective that could truly surprise or offend the reader with something they had not yet considered and force them to re-evaluate their worldview, just a bland average of its training data with some fine-tuning for PR and similar reasons.