Disney's Practical Guide to Path Tracing [video] (youtube.com)
133 points by adamnemecek on Aug 1, 2015 | 38 comments



The video was made for this article:

http://www.disneyanimation.com/technology/innovations/hyperi...

If you want to learn more about rendering then here is some more info:

Stanford CS348b course notes:

http://candela.stanford.edu/cs348b/doku.php

Cornell CS6630 course notes:

http://www.cs.cornell.edu/Courses/cs6630/2012sp/schedule.stm

Eric Veach's PhD thesis:

http://graphics.stanford.edu/papers/veach_thesis/

Physically Based Rendering: From Theory to Implementation by Matt Pharr, Greg Humphreys and Wenzel Jakob (book and open source implementation of a state of the art renderer):

http://pbrt.org/


+1 on PBRT - great textbook, and a full program to boot.

Though it doesn't use the batching method outlined in the video - it would be interesting to see PBRT modified to do so and compare the resulting rendering efficiency.

https://github.com/mmp/pbrt-v3/


While batching (and reordering) can benefit rendering overall, it can also carry significant overhead (depending on how thoroughly you do it). Disney are primarily doing it because they use Ptex for texturing, instead of the more conventional and more widely used UDIM texture atlasing method for assigning textures to meshes.

Ptex really chokes on non-coherent texture accesses in terms of thread scalability (partly due to the anisotropic filtering method it uses), so Disney have gone to great lengths to get around this issue. Doing such accurate texture filtering (the same goes for standard UV EWA filtering) is technically better, but it is expensive, especially in a path-tracing context, where the whole point is to amortise the cost of shading over all the rays by making each intersection / shading calculation as cheap as possible.

From what I hear there are big downsides to this current implementation: the streaming of batches is pretty much off-line, so time till "first pixel" is significant, and thus you don't get interactive rendering functionality.


Awesome information, thank you very much!


The batching that Disney is doing is a very recent direction and not something that is common - PBRT serves as a straightforward reference implementation.

You are right though that it would be fascinating to see the speed difference. My guess is that it would be substantial.

Even compiling PBRT with Intel C++ instead of Microsoft's C++ compiler gives a 30-50% speedup, likely due to reordering instructions for better memory coherency at a granular level (batching rays would give better memory coherency at a very broad level).


It would be interesting to see PBRT re-written with C++ 11/14 idioms, and look at how that affects the execution speed as well.


That's also true; PBRT uses a lot of virtual functions.


Wenzel Jakob's Mitsuba renderer is proper badass.


He is now a co-author on the 3rd edition. I'm expecting coverage of volumetric layered materials and Manifold Exploration. PBRT is only getting better.


The third edition will have an implementation of his paper on layered materials. There won't be anything on Manifold Exploration though, as pbrt has Kelemen-style MLT (the third edition extends it with Hachisuka's Multiplexed MLT) instead of Veach-style MLT.

They've added other cool new stuff too, such as volumetric path tracing, progressive photon mapping, hair (Bézier curves with a Kajiya-Kay BSDF) and a photon beam diffusion BSSRDF.


Fabulous.

This reminds me very much of one of my favorite Disney videos that I showed my daughter long ago. It's this clip of four very talented cel animators out practicing their art:

https://www.youtube.com/watch?v=9JK9uQNBDxQ

The whole thing is whimsical while also being very educational. Really glad that Disney is keeping this kind of stuff up.


I actually found it disappointing compared to some similar Disney material I've seen before. It is perhaps too simple.

Certain things are so grossly oversimplified, they are misleading.

For example, how does sorting rays following similar direction help? The history book analogy is appalling. It might give a kid the misguided impression that it is always better to sort items - e.g. that before summing a few numbers, maybe sorting them is a good idea, or that before map operations, maybe sorting them will ease the task for the computer, etc.


> how does sorting rays following similar direction help?

They're probably alluding to Disney's Hyperion renderer[1]. Sorting rays helps with cache coherency.

[1] https://disney-animation.s3.amazonaws.com/uploads/production...
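
To make "sorting rays" concrete, here is a minimal sketch (my own illustration, not Hyperion's actual code) that sorts a batch of rays by direction octant, so that rays headed the same way are traced together:

    #include <algorithm>
    #include <cstdint>
    #include <vector>

    struct Ray { float ox, oy, oz; float dx, dy, dz; };

    // Coarse sort key: the octant of the ray direction (the sign of each
    // component). Rays in the same octant tend to traverse similar parts
    // of the acceleration structure, which improves cache hit rates.
    static uint32_t directionOctant(const Ray& r) {
        return (r.dx < 0 ? 1u : 0u) | (r.dy < 0 ? 2u : 0u) | (r.dz < 0 ? 4u : 0u);
    }

    void sortBatchForCoherency(std::vector<Ray>& batch) {
        std::sort(batch.begin(), batch.end(),
                  [](const Ray& a, const Ray& b) {
                      return directionOctant(a) < directionOctant(b);
                  });
    }

Hyperion presumably uses much finer-grained keys (and streams whole batches out of core), but the principle is the same.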


Yeah, I was a bit disappointed that they started focusing on their own renderer instead of just explaining more of the basics of path tracing. Note that it isn't necessary at all to sort rays, although it can make things faster if done correctly. Also note that Disney isn't the first to consider sorting rays and tracing them at the same time; similar research was done in the 90s.


I thought the homework example was a good analogy for context-switching.

The big thing they were trying to get at is there are some clever tricks we can do to make these massive calculations more efficient.


For optimised ray tracing you don't beam the light from the camera, as that has the same chance of bouncing to the sun through indirect illumination as a ray from the sun has of bouncing to the camera.

What they are saying here is wrong, or rather extremely simplified for a younger audience.


No, that's wrong. You do start tracing paths at the camera. The lens is usually extremely small and the rays that do contribute are highly directional: the probability of hitting the lens is extremely low, and the probability of hitting the lens in a direction that actually contributes to the final image is even lower. On the other hand, there are usually many lights in a scene, with a total area much larger than the area of the lens, and most lights don't have a directional profile. So you're much more likely to hit a light than you are to hit the lens, hence we start at the camera and not at a light.

They do simplify things a bit though. Normally we don't trace one path at a time, but multiple of them. Each time we intersect an object we not only create another ray to continue the path, but we also sample a point on a light and connect the two points with a ray to finish the path. This process is called Next Event Estimation, and we can combine both 'accidental' paths and 'connected' paths using a technique called Multiple Importance Sampling (MIS).
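
To make that concrete, here is a minimal sketch of the next event estimation step. All the types and helpers (Scene, Hit, sampleLight, occluded, bsdfEval) are hypothetical stand-ins for illustration, not any particular renderer's API:

    #include <cmath>

    struct Vec3  { float x, y, z; };
    struct Color { float r, g, b; };

    static float absDot(Vec3 a, Vec3 b) {
        return std::fabs(a.x * b.x + a.y * b.y + a.z * b.z);
    }
    static Color scale(Color c, float s)    { return {c.r * s, c.g * s, c.b * s}; }
    static Color modulate(Color a, Color b) { return {a.r * b.r, a.g * b.g, a.b * b.b}; }

    struct LightSample { Vec3 point, wi; Color radiance; float pdf; };

    struct Scene {
        LightSample sampleLight(Vec3 from) const; // pick a point on a light
        bool occluded(Vec3 a, Vec3 b) const;      // shadow-ray visibility test
    };

    struct Hit {
        Vec3 point, normal;
        Color bsdfEval(Vec3 wo, Vec3 wi) const;   // evaluate the surface BSDF
    };

    // At each path vertex, besides sampling the BSDF to continue the path,
    // we also connect directly to a light. 'Accidental' light hits on the
    // continuation and this explicit connection are combined with MIS.
    Color estimateDirect(const Scene& scene, const Hit& hit, Vec3 wo) {
        LightSample ls = scene.sampleLight(hit.point);
        if (scene.occluded(hit.point, ls.point))
            return {0, 0, 0};
        float weight = absDot(hit.normal, ls.wi) / ls.pdf;
        return scale(modulate(hit.bsdfEval(wo, ls.wi), ls.radiance), weight);
    }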


One more thing that makes this possible: you actually have some room for 'choice' in which way the ray goes, so at the last step you can just choose that it goes towards the sun. What do I mean? Well, most objects are largely diffuse, meaning that light is reflected in a random direction. When you hit a diffuse object, you can choose to sample the light that 'randomly' bounces off in the direction of the sun, since light that bounces away from the sun doesn't contribute.

Lots of caveats here, of course. You do need to also sample light going in other directions, since the sun isn't the only light source (other objects reflect). You can only do this on diffuse surfaces, so you need to keep going until you hit a diffuse surface. Most surfaces are partly diffuse, partly specular, etc., so you'll actually want to sample both straight towards the light source and off at other angles.

But what does the simplest path tracer look like? You shoot rays from the camera. If a ray hits a diffuse surface, bounce a ray toward each light, adding that light's contribution if the ray isn't obstructed. If it hits a specular surface, bounce off based on the surface and ray orientations, and recurse when you hit another surface. You see how we cheat? If everything is diffuse, then we only ever make one bounce, straight from us to the sun. But that's a great first-order approximation, since sunlight is so much brighter than reflected light. The same approach works for more bounces; just end by trying to hit the sun.
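
As a sketch (again with made-up types and helpers, not any real renderer's API), that simple tracer might look roughly like this:

    #include <vector>

    struct Color { float r, g, b; };
    inline Color operator+(Color a, Color b) { return {a.r + b.r, a.g + b.g, a.b + b.b}; }
    const Color BLACK{0, 0, 0};
    const int   MAX_DEPTH = 8;

    struct Ray   { /* origin, direction */ };
    struct Light { /* position, intensity */ };

    struct Hit {
        bool  isDiffuse() const;          // diffuse or specular surface?
        Color shade(const Light&) const;  // BSDF * light * cosine term
        Ray   reflect(const Ray&) const;  // mirrored ray at the hit point
    };

    struct Scene {
        bool intersect(const Ray&, Hit*) const;
        const std::vector<Light>& lights() const;
        bool occluded(const Hit&, const Light&) const; // shadow-ray test
    };

    Color trace(const Scene& scene, const Ray& ray, int depth) {
        Hit hit;
        if (depth > MAX_DEPTH || !scene.intersect(ray, &hit))
            return BLACK;
        if (hit.isDiffuse()) {
            // Diffuse: one bounce straight to each unoccluded light.
            Color c = BLACK;
            for (const Light& l : scene.lights())
                if (!scene.occluded(hit, l))
                    c = c + hit.shade(l);
            return c;
        }
        // Specular: bounce off the surface and recurse.
        return trace(scene, hit.reflect(ray), depth + 1);
    }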


What you describe is next event estimation.


Thanks for giving the name; I wanted to describe why connecting two paths works. Other answers seemed to skirt around that without spelling it out.


I was wondering that. I don't have anything to do with 3D graphics, but that had occurred to me. What method do they use to ensure only the rays that have the camera and sun as end points are rendered?


That's very simple: if we don't end up hitting a light, the path won't carry any radiance, so it won't contribute to the final image. If we start at a light and don't end up hitting the camera, the path won't carry any importance, so it won't contribute to the final image either.

Keep in mind that there is a rigorous mathematical foundation to all this; we're not just tracing paths for the fun of it. Basically, what we want to solve is a path integral (an integral over all paths), and we do this using a technique called Monte Carlo integration (which basically means we use randomness). We first sample a path (using path tracing), then we calculate the contribution of that path (which is basically the amount of radiance it carries divided by the probability of sampling the path), and then we add that contribution to the right pixel.
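
In symbols (standard notation, as in Veach's thesis), the estimate for the value I_j of pixel j is:

    I_j \approx \frac{1}{N} \sum_{i=1}^{N} \frac{f_j(\bar{X}_i)}{p(\bar{X}_i)}

where the \bar{X}_i are the N randomly sampled paths, f_j is the contribution a path makes to pixel j (the measurement contribution function), and p is the probability density with which a path is sampled.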


Bidirectional path tracing. You send rays from the sun, rays from the camera, and try to connect them. The ones that connect are the ones that get computed for illumination.


Bidirectional path tracing is one way of sampling paths (actually it combines many techniques for sampling paths and weights them using something called Multiple Importance Sampling). It's not the only way of doing it. Disney most likely uses path tracing with next event estimation: they start a path as explained in the video and end it by sampling a point on a light, then connecting the path's last vertex to that point to form a full path.

This is one of the techniques used by bidirectional path tracing (BDPT); it uses n vertices on the camera path and 1 vertex on the light path, but BDPT also uses techniques with s vertices on the camera path and t vertices on the light path. This means there are multiple ways to sample the same path, so the techniques need to be weighted using Multiple Importance Sampling.
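
For reference, those MIS weights are usually computed with Veach's balance heuristic. A minimal sketch (the function name is my own):

    #include <vector>

    // Balance heuristic (Veach): the weight for a sample drawn with
    // technique s is its pdf divided by the sum of the pdfs with which
    // every technique could have produced that same path.
    float balanceHeuristic(int s, const std::vector<float>& pdfs) {
        float sum = 0.0f;
        for (float p : pdfs) sum += p;
        return sum > 0.0f ? pdfs[s] / sum : 0.0f;
    }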


How is that even possible? The beam incidence angle calculation (and multi-path dispersion) creates insane complexity that must be computed on the fly to even make sense.

For instance: a beam hits some material and needs to reflect or, worse, pass through via transparency. Another issue: if we are calculating on a per pixel basis, that means bundling multiple paths together to figure out what the weighted return will look like. How can this all be computed with any kind of efficiency without cheating?


Bidirectional path tracing doesn't do anything clever to "try to connect" paths from light sources and cameras. The approach is just

- Trace random paths from light sources until they terminate (usually decided with Russian roulette).

- Trace random paths from the camera (usually N per pixel, or you can use more paths in noisy areas) until they terminate.

- Try to connect each point in a camera path with each point in a light path, using a simple line test. If it succeeds, that color is added to the pixel from which the camera path originated.

At least that's my understanding; I've only implemented simpler algorithms and read a bit about bidirectional path tracing.
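
For what it's worth, the Russian roulette termination mentioned above is easy to sketch (simplified: real renderers track a colour throughput and usually derive the survival probability q from it):

    #include <random>

    // Russian roulette: kill the path with probability (1 - q); if it
    // survives, divide its throughput by q so the estimator stays unbiased.
    bool survivesRoulette(float q, float& throughput, std::mt19937& rng) {
        std::uniform_real_distribution<float> u(0.0f, 1.0f);
        if (u(rng) >= q)
            return false;      // terminate the path
        throughput /= q;       // compensate the surviving paths
        return true;
    }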

> If we are calculating on a per pixel basis, that means bundling multiple paths together to figure out what the weighted return will look like. How can this all be computed with any kind of efficiency without cheating?

Right, we still need to consider many paths per pixel to get a high quality image. But it converges faster than most other Monte Carlo techniques.


Shoot a ray from the camera to an object. Then 'connect' it to all light sources, weighting by the change in angle and the properties of the surface, and add it all up.

If the surface is refractive/reflective, recursively shoot one more ray in the appropriate direction, with correctly diminished intensity, and follow the same process.


It's funny that you would say that, since what you are describing is completely wrong. Different techniques would have different probabilities, but none would work the way you describe.


Source for this video with more information:

http://www.disneyanimation.com/technology/innovations/hyperi...


Great video. I think this should be shown at the beginning of every introductory computer graphics course.

For a nice overview of Disney's proprietary Hyperion renderer that implements this light bundling technique (and a whole lot more), see: http://www.fxguide.com/featured/disneys-new-production-rende...


I'm wondering what the target audience for that clip was.


Awesome. It would be great if Disney/Pixar would make more educational videos like this one.

Off-topic question: Why is this video unlisted?


Disney actually has hundreds of these videos – they are just all unlisted.

Take a look at this site to see their papers and videos: http://www.disneyanimation.com/technology/publications


I would bet that these are not flagged as media/marketing material and are thus not listed.

Kind of odd, but I could see an argument for avoiding confusion among their audience.


Good technical explanation, but why on earth not use a 3D rendition rather than this incredibly flat 2D one?


Probably because it was rendered in After Effects, which is a quick and easy way to make animations.


Disney's global illumination tech is spectacular. The quality level since Monsters University is just leaps and bounds better than everything before it. Well, technically their short Partysaurus Rex was its debut, but Monsters University was the first feature-length film.


Pixar and Disney are different studios...

The global illumination tech used on Monsters University was just standard path tracing with physically plausible shaders - the first time Pixar had used that instead of radiosity caching. Other studios had been using path tracing for years before; Pixar's lighters are very good, however, so the results are very good.



