This reminds me of Inigo Quilez's work on rendering soft shadows by using distance fields.
The main idea is that you represent a 3D object using not triangles, but a function f(x,y,z) stored on the GPU. The function should return the approximate distance from point (x,y,z) to the object in question, so it's zero on the object's surface and negative on the inside.
It's easy to construct such functions for spheres, boxes, etc. [1] It's also easy to define combinators for moving, adding and subtracting different objects, adding rounded corners, etc.
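For example, a sphere's distance function is just distance-to-center minus radius, and combining objects is mostly min/max over their distances. A minimal C sketch of the idea (the helper names are mine, not from [1]):

```c
#include <math.h>

/* Signed distance to a sphere of radius r centered at the origin. */
float sd_sphere(float x, float y, float z, float r) {
    return sqrtf(x * x + y * y + z * z) - r;
}

/* Signed distance to an axis-aligned box with half-extents (bx, by, bz). */
float sd_box(float x, float y, float z, float bx, float by, float bz) {
    float qx = fabsf(x) - bx, qy = fabsf(y) - by, qz = fabsf(z) - bz;
    float ox = fmaxf(qx, 0.0f), oy = fmaxf(qy, 0.0f), oz = fmaxf(qz, 0.0f);
    return sqrtf(ox * ox + oy * oy + oz * oz)
         + fminf(fmaxf(qx, fmaxf(qy, qz)), 0.0f);
}

/* Combinators: union is a min, subtraction is a max against a negation,
   and translation just offsets the query point before evaluating. */
float op_union(float d1, float d2)    { return fminf(d1, d2); }
float op_subtract(float d1, float d2) { return fmaxf(d1, -d2); }
```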
That representation is well suited for raymarching [2], because the value of f(x,y,z) at your current position along the ray can be safely used as the length of the next raymarching step, so you automatically get larger steps when you're far from the object.
It's also well suited for rendering shadows with penumbras, because the minimum value of f(x,y,z) as you march along the ray gives you the distance between the ray and the object [3]. A slight modification of that trick can give you ambient occlusion as well.
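Roughly like this, a toy self-contained C sketch of both tricks from the two paragraphs above, using a placeholder scene (k controls how hard the penumbra is):

```c
#include <math.h>

/* Placeholder scene: a unit sphere at the origin over a ground plane at y = -1. */
float scene_sdf(float x, float y, float z) {
    float sphere = sqrtf(x * x + y * y + z * z) - 1.0f;
    float plane  = y + 1.0f;
    return fminf(sphere, plane);
}

/* Sphere tracing: step by the distance value itself, so steps grow when the
   ray is far from everything. Returns the hit distance, or -1 on a miss. */
float raymarch(float ox, float oy, float oz, float dx, float dy, float dz) {
    float t = 0.0f;
    for (int i = 0; i < 128; i++) {
        float d = scene_sdf(ox + dx * t, oy + dy * t, oz + dz * t);
        if (d < 0.001f) return t;   /* close enough: count it as a hit   */
        t += d;                     /* safe step: nothing is closer than d */
        if (t > 100.0f) break;
    }
    return -1.0f;
}

/* Soft shadow: march from a surface point toward the light and track the
   smallest k*d/t seen; small values mean the ray grazed an occluder. */
float soft_shadow(float ox, float oy, float oz,
                  float dx, float dy, float dz, float k) {
    float shade = 1.0f;
    float t = 0.02f;                /* start slightly off the surface */
    for (int i = 0; i < 128 && t < 100.0f; i++) {
        float d = scene_sdf(ox + dx * t, oy + dy * t, oz + dz * t);
        if (d < 0.001f) return 0.0f;        /* fully occluded */
        shade = fminf(shade, k * d / t);    /* penumbra estimate */
        t += d;
    }
    return shade;                   /* 1 = fully lit, 0 = fully shadowed */
}
```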
I like the article, but it seems to be missing a pretty major component: how do I cast the ray? It seems like the hardware won't have the geometry available, so I can't do it in hardware (what if this is the first pixel rendered? And even if it does store up all the geometry, OpenGL shaders don't have a ray-casting call). And if I do it in software, I can't use a vertex shader (unless I duplicate what it's doing, which kind of defeats the purpose).
This is a marketing piece for their new GPU, which has ray casting built into the silicon. It's not clear how to access this new hardware (some OpenGL extension?), but they do mention that their chip gives shaders the ability to cast rays.
Any chance Imagination is considering a Workstation-level product? I understand that mobile is your money maker, but if the hardware is as fast as it sounds, there are some roles that might be really nice to be able to offload onto this sort of hardware.
And how well does the hardware currently handle highly divergent rays, such as those used in GI or reflections?
Looks like there are some references to the "Caustic Series2" boards, but those look discontinued. It's hard to invest in the technology if it's unclear whether there will be continued hardware support. I mean, after all, it doesn't take too many generations of hardware for raw Nvidia Titan power to make up for the difference.
The PowerVR Wizard architecture scales from mobile to desktop/console. We will be bringing out dev kits in the near future that will deliver very (very) high performance.
The article mentions a "distance to occluder buffer". I suspect this is like a shadow buffer, which normally is obtained by rendering the scene using the point light as the eye point, and saving the resulting depth buffer. Perhaps they reverse the depth test, to store, at each pixel in the shadow buffer, the distance to the object furthest from the light source. This would then be distance to the occluder closest to the ground plane.
But I don't see evidence of blockiness that such a pixelized distance map would produce. It's hard to tell what they're doing. One thing I'm sure they're not doing is real ray casting from each hit point to the light source. I don't see how they could test each such ray against the whole scene, and do that using the GPU.
While I'm on the topic, the soft shadows they compute are not exact, since they assume the light source is a circle, and their analytic formula for the penumbra angle is an approximation. Shadows would look different for linear lights like fluorescent tubes.
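For a disc light of diameter D, the usual similar-triangles estimate (my notation, not necessarily the formula they use) for the penumbra width at the receiver is

    w ≈ D * (d_r - d_o) / d_o

where d_o is the light-to-occluder distance and d_r is the light-to-receiver distance. A fluorescent tube has no single such diameter, so its penumbra width depends on how the shadow edge is oriented relative to the tube.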
But hey, if you're in the middle of a game, dodging enemies, you won't stop to criticize the slightly incorrect shadows. In an architectural rendering, the difference might be noticeable.
The distance to occluder buffer is a screen space "inverse shadow map", if you will. For each screen space pixel it stores the distance from the surface under that pixel to the closest occluder, should one exist.
We are in fact casting a real ray from each non-trivially shadowed pixel towards the light source, using the PowerVR Wizard GPU's hardware ray tracing unit to accelerate the process. Pixels whose surface is back-facing with respect to the light (i.e. N dot L < 0) are known to be shadowed, so there's no need to cast any rays for them.
You can also run the code I've given above on a very fast CPU; however, the PowerVR Wizard GPU has built-in structures designed to accelerate ray tracing in addition to the traditional rasterized pipeline, giving you real-time ray tracing + rasterized graphics + compute in a single GPU.
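A minimal C sketch of the per-pixel decision described above (this is my own paraphrase, not the actual PowerVR API; trace_shadow_ray is a hypothetical stand-in for the hardware ray query):

```c
#include <math.h>

typedef struct { float x, y, z; } vec3;

float dot3(vec3 a, vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

/* Hypothetical stand-in for the hardware/driver ray query:
   returns nonzero if anything lies between origin and origin + dir * max_t. */
int trace_shadow_ray(vec3 origin, vec3 dir, float max_t);

/* Returns 1.0 if the pixel is lit, 0.0 if it is shadowed. */
float shadow_term(vec3 surface_pos, vec3 normal, vec3 light_pos) {
    vec3 to_light = { light_pos.x - surface_pos.x,
                      light_pos.y - surface_pos.y,
                      light_pos.z - surface_pos.z };

    /* Back-facing with respect to the light: trivially shadowed, cast no ray. */
    if (dot3(normal, to_light) < 0.0f)
        return 0.0f;

    /* Otherwise cast one ray toward the light; any hit means shadow. */
    float dist = sqrtf(dot3(to_light, to_light));
    vec3 dir = { to_light.x / dist, to_light.y / dist, to_light.z / dist };
    return trace_shadow_ray(surface_pos, dir, dist) ? 0.0f : 1.0f;
}
```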
The author chose an outdoor scene with blue sky as a demonstration. This is exactly where, in reality, one does NOT have soft shadows, due to the small apparent size of the sun. An indoor scene with an extended light source would have been a better choice.
I disagree - and it would perhaps be helpful to actually go outside and watch physical reality a little bit for tons of counter-examples. Yes, the gradient at the edge of shadows is washed out by the "flood light" of the sky, but it's still visible. This is very obvious with shadows from tall objects such as buildings or towers - just go to the edge of the shadow and observe.
Thank you for the feedback. There will be an article tomorrow analyzing more screenshots and presenting performance data too (memory traffic and rendering speed).
This technique is limited to "unrealistic" point lights (except for stars). To improve shadow quality it is necessary to work with light fields. This can be done either with voxel fields (I think Nvidia tried something like this) or with depth fields. I use the latter, with the advantage that it can easily be combined with other techniques and can be updated in real time. For almost-static environments like the one in the example, I would advise using precomputed emission maps. There is also something wrong with the shadow map (right image); I suspect a blur step is missing.
The original article got me all hot and flustered and this one didn't fail to impress either. I just wish we had widespread access to this kind of pipeline on the desktop (alongside conventional shader units).
Edit for those that are out of the loop: shadows are notoriously finicky. There are a bunch of approaches to them that, while a heroic effort, suck. They all compromise on different things and yet fail to be really good at what they are supposed to be perfect at. You can spend a week tweaking constants only to get something passable for your engine. They are the bane of an engine dev's life.
This approach is so clean and, in theory, comes very close to a one-size-fits-all-golden-hammer. Not perfect, but worlds apart from what games are doing today. Catch is: this specific clean implementation of raytraced shadows can only be done with PowerVR (AFAIK).
Can't you draw the shadow as a blended polygon where you Gouraud-shade the penumbra area from fully opaque to fully transparent (so you have to draw triangles in that area)? That would produce soft shadows (due to the shading) and wouldn't take many calculations: for each vertex in the model casting the shadow, you project its shadow point onto the surfaces the shadow is cast on, from a light source which isn't a point anymore (so you get the penumbra, like in the image in the article).
Not sure if this will work, I thought about this years ago when I was still in the demoscene but never had the time to implement it.
With shadow buffers this will not work, since you are not in fact rendering a shadow mask, but instead a depth buffer with distance-to-occluder, which cannot be blended as it'd be entirely incorrect.
Even if you could, the penumbra size is a function of not only the light size but also the distance from the shadowed surface to the occluding edge, which is fundamentally something you cannot know during shadow occluder rendering time.
Neat stuff. It's worth noting that shadow rays don't entirely avoid precision issues either. You still need to handle self-intersections. One option is to add a bias epsilon (which introduces other artifacts, depending on the scale); another is to reject local self-shadowing (which means you can't get finely shadowed surface cracks); another is to give artists explicit control over which sets of objects shadow which others (which adds artist time). Offline rendered movies use all of these.
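The bias-epsilon option usually amounts to nudging the shadow-ray origin along the surface normal, something like this toy C sketch (the scale factor is my own placeholder, not a recommended value):

```c
typedef struct { float x, y, z; } vec3;

/* Offset the shadow-ray origin along the surface normal so the ray does not
   immediately re-hit the surface it started from ("shadow acne"). The right
   epsilon depends on scene scale and numeric precision, which is exactly the
   artifact trade-off mentioned above. */
vec3 biased_shadow_origin(vec3 hit_pos, vec3 normal, float scene_scale) {
    float eps = 1e-4f * scene_scale;   /* tuning constant, scene-dependent */
    vec3 o = { hit_pos.x + normal.x * eps,
               hit_pos.y + normal.y * eps,
               hit_pos.z + normal.z * eps };
    return o;
}
```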
Generally speaking, the biases needed for ray tracing are much smaller than the biases needed for shadow mapping. Ray tracing tests directly against the geometry, so it only needs a bias proportional to the numerical accuracy used, whereas shadow mapping requires a bias proportional to the texel resolution of the map.
While these are all potential solutions, and the same ones apply to shadow buffers, with ray tracing you additionally get to play tricks such as skipping intersection with the casting triangle, which solve the problem much more completely, if at a performance cost.
Are you kidding me, haha. The shadows on the left are noticeably sharper, and the shadows on the right are all non-physically blurred with a Gaussian kernel or something.
How does one actually access this special ray-tracing hardware? Some sort of OpenGL extension? I looked around, but couldn't find any answers. None of their press seems targeted at developers.
I don't think there is any 'special ray-tracing hardware'; they are simply leveraging the existing GPU with some algorithms to get faster, better-looking shadows compared to the usual algos. Someone correct me if I'm wrong.
[1] http://iquilezles.org/www/articles/distfunctions/distfunctio...
[2] http://iquilezles.org/www/material/nvscene2008/rwwtt.pdf
[3] http://iquilezles.org/www/articles/rmshadows/rmshadows.htm