Denoising is one of those machine learning applications that just seems to work too well. I was thinking about using machine learning for denoising a few weeks ago, then stumbled on Nvidia's work in the area and was totally blown away. The strategy Nvidia is adopting for its GPUs, utilising both ray-tracing and deep-learning cores for real-time applications, is excellent.
I could be wrong, but I feel like deep learning denoising is mostly just blurring in the right places. Deep learning is good at segmenting the parts of the image that are supposed to be different colors, and denoising can then be done by blurring within the segmented regions. I'm very curious how a DNN denoiser compares to a DNN segmenter + Gaussian blur, or even a classical segmenter + blur.
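A minimal sketch of that baseline, assuming scikit-image's SLIC superpixels as the classical segmenter and SciPy for the blur (everything here is illustrative, not any shipping denoiser); the normalized masked blur keeps smoothing from bleeding across region boundaries:

    import numpy as np
    from scipy.ndimage import gaussian_filter
    from skimage.segmentation import slic

    def segment_and_blur(image, n_segments=200, sigma=2.0):
        # Toy "segment + blur" denoiser: image is a float RGB array in [0, 1].
        labels = slic(image, n_segments=n_segments)
        out = np.zeros_like(image, dtype=float)
        for lab in np.unique(labels):
            mask = (labels == lab).astype(float)
            # Normalized masked blur: blur(image * mask) / blur(mask) keeps
            # each region's average from bleeding across its boundary.
            den = np.maximum(gaussian_filter(mask, sigma), 1e-8)
            for c in range(image.shape[2]):
                num = gaussian_filter(image[..., c] * mask, sigma)
                out[..., c] = np.where(mask > 0, num / den, out[..., c])
        return out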
DLSS is for anti-aliasing and sharpening, not for denoising. And the technique is different from single-frame ray-tracing denoising, because DLSS makes use of multiple frames and motion vectors.
If a neural network can build an idea of what it's looking at over a few frames (or a single frame, e.g. YOLO) and then use that to improve the quality of an image, is that not denoising? It's definitely hallucinating extra pixels.
Denoising? Maybe. I'd say it fits the bill, but I'm not a domain expert.
With a static undersampled ray-tracing image, the only thing you can do is hallucinate extra pixels out of nothing but the surrounding pixels you already have in the same frame.
With DLSS, it's more a matter of creatively combining pixels that actually exist. When you're upscaling two consecutive 1080p images plus motion vectors into one 1440p image, the number of source pixels is higher than the final number of pixels.
That said, with some config-file hacking, people have been able to use DLSS to upscale from 540p to 1080p (with still surprisingly good results). In that case, the number of source pixels is lower than the number of destination pixels.
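To make the pixel counting concrete for both cases (plain arithmetic, resolutions as stated above):

    # Source vs. destination pixel counts for the two DLSS cases above.
    def pixels(w, h):
        return w * h

    # Two 1080p frames -> one 1440p frame: more source than destination.
    print(2 * pixels(1920, 1080))  # 4,147,200 source pixels
    print(pixels(2560, 1440))      # 3,686,400 destination pixels

    # Two 540p frames -> one 1080p frame: fewer source than destination.
    print(2 * pixels(960, 540))    # 1,036,800 source pixels
    print(pixels(1920, 1080))      # 2,073,600 destination pixels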
It is my understanding that for ray-tracing denoising, the denoiser also has access to the noise-free textures (in addition to the noisy ray-traced frame).
This gives it far more information than typical screenshots hint at.
I believe Blender also uses this denoiser. It is great! Especially for preview renders. The results are comparable to renders that take around three times longer.
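For what it's worth, switching Cycles to the OptiX denoiser can also be done from Blender's Python console; a minimal sketch, assuming a recent Blender build and an RTX-capable GPU (property names may differ between versions):

    import bpy

    scene = bpy.context.scene
    scene.render.engine = 'CYCLES'
    scene.cycles.use_denoising = True  # denoise the final render
    scene.cycles.denoiser = 'OPTIX'    # 'OPENIMAGEDENOISE' is the other option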
Surprisingly, [sincosf fusion] only happened with -Ofast and not with -O3.
As noted, -Ofast turns on -ffast-math which turns on -funsafe-math-optimizations which "enables optimizations that allow arbitrary reassociations and transformations with no accuracy guarantees."[0] In this case, sincosf by itself is probably more accurate.
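The lack of accuracy guarantees is easy to demonstrate: floating-point addition isn't associative, so a compiler that is free to reassociate can change results. A quick illustration:

    # Floating-point addition is not associative, which is why
    # "arbitrary reassociations" come with no accuracy guarantees.
    print((0.1 + 0.2) + 0.3)  # 0.6000000000000001
    print(0.1 + (0.2 + 0.3))  # 0.6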
Is OptiX significantly better than the traditional (presumably faster) approaches used by photo-editing software like Lightroom and RawTherapee? Denoising is, of course, an extremely well-studied image-processing problem with a highly developed state of the art. I haven't looked at comparisons recently, but my recollection is that the answer was "no" as of about a year ago.
The video here talks about the difference between the two approaches. There's a comparison at 2:50.
"I think the computers are winning, because in the upper right you can kind of see that back wall, there's a little strip of green. It's kind of blurred out a little bit too much with the human version, but the neural network has picked up on that and kept the vertical stripes there."
I haven't tried AI denoisers, but darktable's profiled denoiser works pretty well. The profiles are per-camera and per-ISO, so the filter knows what to look for. The results are much better than those of more advanced but profile-unaware denoisers.
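As I understand it, those profiles fit a noise model in which variance grows roughly linearly with signal (variance ≈ a·signal + b, with a and b measured per camera and per ISO), and a variance-stabilizing transform then makes one denoising strength valid across all tones. A toy sketch of that idea; the parameter names and the plain Gaussian smoothing step are illustrative, not darktable's actual pipeline:

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def profiled_denoise(img, a, b, sigma=1.5):
        # Toy profiled denoise: a, b come from a per-camera, per-ISO noise
        # profile (noise variance ~= a * img + b).
        # Generalized Anscombe transform: makes the noise level roughly
        # uniform across tones, so a single denoising strength fits.
        t = (2.0 / a) * np.sqrt(np.maximum(a * img + 0.375 * a * a + b, 0.0))
        t = gaussian_filter(t, sigma)  # stand-in for a real wavelet/NLM step
        # Algebraic inverse of the transform.
        out = ((a * t / 2.0) ** 2 - 0.375 * a * a - b) / a
        return np.clip(out, 0.0, None)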