When I was working on a ray tracer, I found that interpolating the color of in-progress pixels from neighboring traced points, instead of leaving them blank, was a huge improvement for quickly seeing what the scene was going to look like. In the video examples it doesn't seem like they're doing that. I'm interested to know the rationale.
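To make the scheme concrete, here's a rough sketch of what I mean (a toy TypeScript version; `traceRay` stands in for whatever the real tracer does per pixel):

    // Progressive preview: trace a checkerboard of pixels first, then fill
    // the untraced pixels by averaging their traced 4-neighbors instead of
    // leaving them blank. `traceRay` is a stand-in for the real tracer.
    type RGB = [number, number, number];

    function renderPreview(
      width: number,
      height: number,
      traceRay: (x: number, y: number) => RGB,
    ): RGB[] {
      const image: (RGB | null)[] = new Array(width * height).fill(null);

      // Pass 1: trace only pixels where (x + y) is even (a checkerboard).
      for (let y = 0; y < height; y++) {
        for (let x = 0; x < width; x++) {
          if ((x + y) % 2 === 0) image[y * width + x] = traceRay(x, y);
        }
      }

      // Pass 2: every untraced pixel has only traced 4-neighbors, so average
      // them; the preview has no blank pixels while tracing continues.
      return image.map((px, i) => {
        if (px !== null) return px;
        const x = i % width;
        const y = Math.floor(i / width);
        const sum: RGB = [0, 0, 0];
        let n = 0;
        for (const [dx, dy] of [[-1, 0], [1, 0], [0, -1], [0, 1]]) {
          const nx = x + dx;
          const ny = y + dy;
          if (nx < 0 || nx >= width || ny < 0 || ny >= height) continue;
          const nb = image[ny * width + nx];
          if (nb === null) continue;
          sum[0] += nb[0]; sum[1] += nb[1]; sum[2] += nb[2];
          n++;
        }
        return n > 0
          ? ([sum[0] / n, sum[1] / n, sum[2] / n] as RGB)
          : ([0, 0, 0] as RGB);
      });
    }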
What do you mean by blank pixels? Cycles is a Monte Carlo (MC) path tracer, which means it will always trace some paths that don't find a high contribution to the final image. When rendering progressively, the affected pixels stay darker than the rest for the first few iterations; that's just how the algorithm works. You could try to apply a denoiser, but then you're not spending that computational power on shooting more rays, and you're replacing noise with bias.
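To illustrate what "progressively" means here, a minimal sketch (`samplePath` is a placeholder, not Cycles code): the displayed image is the running mean of all samples so far, so pixels whose first few paths found no strong contribution stay dark until more samples arrive.

    // Progressive Monte Carlo accumulation: the displayed image is the
    // running mean of all samples so far. Pixels whose few early paths
    // missed any strong light look dark/noisy and brighten as samples
    // accumulate. `samplePath` is a placeholder for the real integrator.
    function progressiveRender(
      width: number,
      height: number,
      passes: number,
      samplePath: (x: number, y: number) => number, // scalar radiance for brevity
      display: (mean: Float64Array) => void,
    ): void {
      const accum = new Float64Array(width * height); // running sum of samples
      const mean = new Float64Array(width * height);
      for (let pass = 1; pass <= passes; pass++) {
        for (let i = 0; i < accum.length; i++) {
          accum[i] += samplePath(i % width, Math.floor(i / width));
          mean[i] = accum[i] / pass; // unbiased estimate after `pass` samples
        }
        display(mean); // show partial progress after every pass
      }
    }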
Cycles works a bit differently, but the same general idea is supposed to be implemented via "CPU viewport rendering with Open Image Denoiser", which denoises the render live in a more advanced way. Unfortunately, the video for that section doesn't seem to be set up correctly, so you can't actually see it in action.
There are a few reasons the speedup scheme you’re talking about doesn’t get used for GPU ray tracing. On a GPU, the time to a first image that fills every pixel is typically not a problem, and rendering every other pixel and then interpolating is more complicated, and the extra pass might take long enough that it doesn’t actually help.
Rendering every other pixel on a CPU in JavaScript is a huge advantage because ray tracing on that platform is incredibly slow compared to GPU ray tracing, and rendering happens sequentially, one pixel at a time. Rendering on a GPU is very different: it handles thousands of rays at a time in parallel, and the typical time to get the first complete image on screen is a fraction of a second. Today’s high-end GPUs can trace tens of billions of rays per second, so with that kind of budget it’s easy to get through all the pixels of a 1080p image with multiple rays per pixel. The images in your blog post could be ray traced on today’s high-end GPUs at 60 Hz with tens or even hundreds of samples per pixel for antialiasing.
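To put rough numbers on that budget (all values illustrative, picked from the range above):

    // Back-of-envelope ray budget; all numbers are illustrative.
    const raysPerSecond = 10e9;          // ~10 billion rays/sec
    const pixels = 1920 * 1080;          // 1080p
    const fps = 60;
    const samplesPerPixel = raysPerSecond / (pixels * fps);
    console.log(samplesPerPixel.toFixed(0)); // ~80 rays per pixel per frame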
Another reason a pixel-interpolation scheme isn’t used on GPUs is that you don’t have random access to neighboring pixels’ results during a single launch. What this means in practice is you’d have to do a ray tracing launch followed by an interpolation launch. The launches have some overhead, and the UX would be that you first see the checkerboard pattern all at once, and only later the interpolated image. You don’t get to see partial progress as you go, unless you break the image into tiled launches, which slows down rendering and adds more complication. (Many pro renderers on the market do have tiled rendering to give progressive feedback, BTW.)
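Roughly, the structure you’re forced into looks like this (the `Device`/`launch` API below is hypothetical, just to show the shape; it’s not CUDA, OptiX, Vulkan, or any real framework):

    // Two dependent GPU launches: nothing is displayable until a launch
    // finishes, so the user sees the full checkerboard at once, then the
    // filled-in image. `Device` and `launch` are hypothetical placeholders,
    // not any real GPU API.
    interface Device {
      launch(kernel: string, args: object): Promise<void>; // one GPU launch
    }

    async function renderFrame(device: Device, checkerboard: boolean) {
      if (checkerboard) {
        // Launch 1: trace every other pixel. Within this launch, threads
        // can't read each other's results, so interpolation can't happen here.
        await device.launch("traceCheckerboard", { parity: 0 });
        // Launch 2: a separate pass reads traced neighbors and fills the
        // holes. Each launch adds fixed overhead, and no partial image shows
        // between them (unless the frame is split into tiled launches).
        await device.launch("interpolateHoles", {});
      } else {
        // Single launch: trace every pixel; simpler, and on a GPU usually
        // fast enough that the extra pass isn't worth it.
        await device.launch("traceAll", {});
      }
    }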
On the GPU, maybe the closest thing to what you describe that is used in games today is DLSS: render a lower-resolution image, then upscale it to a higher resolution. Instead of interpolating neighbor pixels per se, it uses a neural network to improve the interpolation based on the image content: https://en.wikipedia.org/wiki/Deep_learning_super_sampling
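For contrast with the neural approach, plain upscaling really is just neighbor interpolation; a minimal bilinear sketch (grayscale for brevity):

    // Plain bilinear upscaling: the non-neural baseline DLSS improves on.
    // Grayscale, row-major `src` of size sw x sh, upscaled to dw x dh.
    function bilinearUpscale(
      src: Float32Array, sw: number, sh: number,
      dw: number, dh: number,
    ): Float32Array {
      const dst = new Float32Array(dw * dh);
      const clamp = (v: number, lo: number, hi: number) =>
        Math.min(hi, Math.max(lo, v));
      for (let y = 0; y < dh; y++) {
        for (let x = 0; x < dw; x++) {
          // Map the destination pixel center back into source coordinates.
          const fx = (x + 0.5) * (sw / dw) - 0.5;
          const fy = (y + 0.5) * (sh / dh) - 0.5;
          const x0 = clamp(Math.floor(fx), 0, sw - 1);
          const y0 = clamp(Math.floor(fy), 0, sh - 1);
          const x1 = Math.min(sw - 1, x0 + 1);
          const y1 = Math.min(sh - 1, y0 + 1);
          const tx = clamp(fx - x0, 0, 1);
          const ty = clamp(fy - y0, 0, 1);
          // Blend the four nearest source pixels.
          const top = src[y0 * sw + x0] * (1 - tx) + src[y0 * sw + x1] * tx;
          const bot = src[y1 * sw + x0] * (1 - tx) + src[y1 * sw + x1] * tx;
          dst[y * dw + x] = top * (1 - ty) + bot * ty;
        }
      }
      return dst;
    }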
There is also denoising, which has the same high-level goal as what you’re doing (fill in missing data to get a high-quality preview faster) but uses a much more sophisticated interpolation algorithm and, rather than skipping pixels, works on Monte Carlo images with low samples per pixel.
https://developer.nvidia.com/optix-denoiser
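A toy version of the idea, just to show "smarter interpolation" (real denoisers like the OptiX one use neural networks and auxiliary buffers such as albedo and normals; this is only an edge-aware weighted average):

    // Toy edge-aware "denoise": each pixel becomes a weighted average of its
    // neighborhood, with weights falling off as colors differ, so edges are
    // kept while noise is smoothed. Real denoisers are far more sophisticated.
    function toyDenoise(
      img: Float32Array, w: number, h: number,
      radius = 2, sigma = 0.2,
    ): Float32Array {
      const out = new Float32Array(w * h);
      for (let y = 0; y < h; y++) {
        for (let x = 0; x < w; x++) {
          const center = img[y * w + x];
          let sum = 0;
          let wsum = 0;
          for (let dy = -radius; dy <= radius; dy++) {
            for (let dx = -radius; dx <= radius; dx++) {
              const nx = x + dx;
              const ny = y + dy;
              if (nx < 0 || ny < 0 || nx >= w || ny >= h) continue;
              const v = img[ny * w + nx];
              const d = v - center;
              const weight = Math.exp(-(d * d) / (2 * sigma * sigma));
              sum += v * weight;
              wsum += weight;
            }
          }
          out[y * w + x] = sum / wsum; // wsum >= 1 (the center contributes 1)
        }
      }
      return out;
    }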
(See "Progressive Rendering" section for an example: https://blog.vjeux.com/2012/javascript/javascript-ray-tracer... )