I agree that it sounds a fair amount like ray-tracing.
More specifically, searching for a color on a per-pixel (point-on-screen) basis, as opposed to rasterizing a plethora of triangles overlaying each other, sounds like ray-tracing to me. Though, even as I write this, I'm second-guessing my position now that I've had more time to think about it.
Either way, what stops them from reusing the search algorithm they already have to evaluate lighting at the point they've determined will be rendered, as well? ... <_< oh wait, that ~is~ ray-tracing, isn't it!
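To make that thought concrete, here's a rough sketch with entirely made-up data (nothing to do with their actual engine, which is unpublished): let find_hit() stand in for "the search" that returns the nearest stored point along a ray, treating each point as a tiny sphere. Running that same search a second time, from the hit point toward the light, gives you shadows, which is exactly the classic ray-tracing move.

```python
import math

# Hypothetical scene: (position, color) point pairs. Illustrative only.
SCENE = [
    ((0.0, 0.0, 5.0), (200, 50, 50)),
    ((1.0, 0.5, 4.0), (50, 200, 50)),
]
LIGHT = (5.0, 5.0, 0.0)
RADIUS = 0.6  # treat each point as a small sphere for the hit test

def find_hit(origin, direction):
    """Nearest point along a unit-length ray, or None.
    Brute force here; a real engine would use some spatial index."""
    best, best_t = None, float("inf")
    for pos, color in SCENE:
        oc = [o - p for o, p in zip(origin, pos)]
        b = sum(d * o for d, o in zip(direction, oc))
        c = sum(x * x for x in oc) - RADIUS ** 2
        disc = b * b - c
        if disc < 0:
            continue
        t = -b - math.sqrt(disc)
        if 0.0 < t < best_t:
            best_t, best = t, (pos, color)
    return best

def shade(origin, direction):
    hit = find_hit(origin, direction)
    if hit is None:
        return (0, 0, 0)  # background
    pos, color = hit
    # The punchline: run the SAME search again, from the hit point
    # toward the light, to decide whether this point is shadowed.
    to_light = [l - p for l, p in zip(LIGHT, pos)]
    norm = math.sqrt(sum(x * x for x in to_light))
    to_light = [x / norm for x in to_light]
    # Start just outside the hit sphere to avoid self-intersection.
    start = [p + RADIUS * 1.01 * d for p, d in zip(pos, to_light)]
    shadowed = find_hit(start, to_light) is not None
    return tuple(c // 4 for c in color) if shadowed else color
```

Primary visibility and the shadow query are literally the same function called with different rays, which is the whole "oh wait" of it.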
I wonder how they properly anti-alias it (there would very likely be aliasing on those questionable, edge-of-object 'points'). Render at a larger resolution, then scale it down and sharpen, or something?
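For what it's worth, the "render larger, then scale down" guess is just supersampling. A minimal sketch of the idea (my own illustration, not anything from the video), where render(x, y) is whatever produces a color for a point on screen:

```python
def supersample(render, width, height, factor=4):
    """Anti-alias by sampling a factor x factor grid inside each pixel
    and box-filtering (averaging) the results down to one color."""
    image = []
    for y in range(height):
        row = []
        for x in range(width):
            acc = [0, 0, 0]
            for sy in range(factor):
                for sx in range(factor):
                    # Sub-sample at evenly spaced points within the pixel.
                    r, g, b = render(x + (sx + 0.5) / factor,
                                     y + (sy + 0.5) / factor)
                    acc[0] += r; acc[1] += g; acc[2] += b
            n = factor * factor
            row.append((acc[0] // n, acc[1] // n, acc[2] // n))
        image.append(row)
    return image
```

That smooths the jaggies at object edges at the cost of factor-squared more samples, which is presumably why one would wonder whether they can afford it.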
[edit: As a side note, why on earth would this affect the graphics card industry? The substantial math involved could likely be converted into matrix multiplications, and we already know graphics cards are stellar at that]
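To illustrate that side note: the heavy lifting in any renderer like this (camera and projection transforms over huge point sets) can be phrased as one big batched matrix multiply, which is exactly the operation GPUs are built around. A toy numpy version, with a stand-in identity matrix where a real view-projection matrix would go:

```python
import numpy as np

# A hypothetical cloud of a million points in homogeneous coordinates.
points = np.random.rand(1_000_000, 4).astype(np.float32)
points[:, 3] = 1.0

# One 4x4 camera/projection matrix applied to every point at once:
# a single (N,4) x (4,4) matrix multiplication.
view_proj = np.eye(4, dtype=np.float32)  # stand-in for a real matrix
projected = points @ view_proj.T

# Perspective divide to get 2D screen-space coordinates.
screen = projected[:, :2] / projected[:, 3:4]
```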
The Unlimited Detail engine works out which direction the camera is facing and then searches the data to find only the points it needs to put on the screen. It doesn't touch any unneeded points; all it wants is 1024x768 (if that is our resolution) points, one for each pixel of the screen.
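That description never says how the search works, but a common guess is a hierarchical structure like a sparse octree, descended front-to-back once per pixel so the first leaf reached is the frontmost point. A minimal sketch of that idea (pure speculation on the data structure, not their actual algorithm):

```python
class Node:
    """An axis-aligned cube: either a leaf with a color, or up to
    eight child cubes. A guess at the kind of index such a search needs."""
    def __init__(self, center, half, color=None, children=None):
        self.center, self.half = center, half
        self.color = color
        self.children = children or []

def ray_hits_cube(origin, direction, center, half):
    """Standard slab test: entry distance of the ray into the cube,
    or None on a miss."""
    tmin, tmax = 0.0, float("inf")
    for o, d, c in zip(origin, direction, center):
        lo, hi = c - half, c + half
        if abs(d) < 1e-12:
            if not (lo <= o <= hi):
                return None
            continue
        t1, t2 = (lo - o) / d, (hi - o) / d
        tmin, tmax = max(tmin, min(t1, t2)), min(tmax, max(t1, t2))
        if tmin > tmax:
            return None
    return tmin

def search(node, origin, direction):
    """Descend front-to-back; the first leaf reached is the frontmost
    point, so each pixel costs one descent rather than a scene sweep."""
    if ray_hits_cube(origin, direction, node.center, node.half) is None:
        return None
    if node.color is not None:  # leaf: the one point this pixel needs
        return node.color
    def enter_t(child):
        t = ray_hits_cube(origin, direction, child.center, child.half)
        return float("inf") if t is None else t
    # Visit children nearest-first so the first hit wins.
    for child in sorted(node.children, key=enter_t):
        found = search(child, origin, direction)
        if found is not None:
            return found
    return None
```

Calling search() once per pixel matches the "one point per pixel, touch nothing unneeded" claim: untouched subtrees are simply never visited.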
[edit: although the comment says the video claims it's not... watching the video now. Hmmm. Looks like they're exploiting fractals in some way? They seem to have some scale-free way of encoding the data that they use to generate the image?]
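If I had to guess what "scale-free" could mean here (and this is pure speculation to illustrate the fractal idea, nothing confirmed about their engine): let a tree node's children point back at an already-existing subtree, so a finite structure describes geometry at every zoom level, fractal-style. A tiny sketch:

```python
def make_fractal_node():
    """A self-referential octree node: four of the eight octants alias
    the node itself, giving a Sierpinski-like solid with detail at
    every scale from O(1) stored data."""
    node = {"color": (128, 128, 128), "children": None}
    node["children"] = [node if i in (0, 3, 5, 6) else None
                        for i in range(8)]
    return node

def sample(node, x, y, z, depth):
    """Descend `depth` levels into the unit cube toward (x, y, z);
    returns a color, or None for empty space. Because children alias
    their parent, any depth works without storing more data."""
    for _ in range(depth):
        ix, iy, iz = int(x >= 0.5), int(y >= 0.5), int(z >= 0.5)
        node = node["children"][ix + 2 * iy + 4 * iz]
        if node is None:
            return None
        # Rescale the query point into the chosen octant.
        x, y, z = x * 2 % 1.0, y * 2 % 1.0, z * 2 % 1.0
    return node["color"]
```

Real scanned scenes obviously aren't perfect fractals, but instancing repeated subtrees like this would explain how they show absurd point counts without absurd memory.]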