That's an interesting observation that human vision can pick out detail from a noisy signal. Non-generative CNNs will typically filter that noise away because of the loss functions used: a pixel-wise loss like MSE pushes the network toward an average of the plausible reconstructions, which smooths out fine detail. Generative models could in theory learn to recognize whatever signal in that noise humans are cueing on and guess at the missing content. I think there'd be resistance to generative models in photography pipelines, though, based on the "made-up pixels" argument.
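To make the averaging point concrete, here's a toy sketch (my own illustration, not from the original comment, and it assumes an MSE-style training objective): when two sharp reconstructions are equally plausible for the same noisy input, the MSE-optimal prediction is their blurry average rather than either sharp answer.

```python
# Toy illustration: with pixel-wise MSE, hedging toward the average of
# plausible targets scores better than committing to one sharp guess,
# which is why non-generative denoisers tend to smooth away fine detail.
import numpy as np

# Two equally plausible sharp 1-D "textures" that could underlie the same noisy patch.
target_a = np.array([0.0, 1.0, 0.0, 1.0, 0.0, 1.0])
target_b = np.array([1.0, 0.0, 1.0, 0.0, 1.0, 0.0])

# Candidate predictions: commit to one texture, or hedge with the average.
candidates = {
    "commit to A": target_a,
    "commit to B": target_b,
    "average (flat gray)": (target_a + target_b) / 2,
}

# Expected MSE over the two plausible targets, i.e. what training pressure sees.
for name, pred in candidates.items():
    mse = np.mean([(pred - t) ** 2 for t in (target_a, target_b)])
    print(f"{name:>20s}: expected MSE = {mse:.3f}")

# The flat 0.5 prediction wins (0.25 vs 0.5), so the loss rewards blurring
# out detail instead of guessing a sharp but possibly wrong answer.
```

A generative model, by contrast, can put probability mass on each sharp reconstruction and sample one of them, which is the "guess at the missing content" behavior described above.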