Devil's advocating: given they've trained it so well to generate images in spite of all expectations, is it really so hard to imagine they could also train it to understand which images not to generate? It already had to learn not to generate things that don't make sense to humans. How is this not just "moar training"? The hardest part is that the training data would need to be a gigantic store of objectionable (and illegal) content ... probably not something many groups are eager to build and host.