Some context: During a panel on production renderers at SIGGRAPH [0] it was mentioned that most rendering research was not really usable for production rendering, as those algorithms are unable to handle the complexity of production scenes. Matt Pharr (one of the authors of pbrt [1]) asked the panelists to release production datasets for research purposes. Walt Disney Animation Studios is now following up on his request by releasing these two datasets. Yining Karl Li of Disney [2] and the readme for the Moana dataset [3] have some more info.
It's absolutely amazing that Disney is doing this, especially as the license is quite permissive and also allows the usage for non-research purposes.
As an amusing aside, Brent and Rasmus got us a scene from "Meet the Robinsons" back in late 2006 that led to our first real on-the-fly subdivision work. Working with Brent and Dylan at Disney ultimately led me to support correct subdivision in Arnold.
At Stanford, this fed into our micropolygon/subdivision work, though we couldn't use the Disney scenes since those were only for me, Dylan, and Utah. Luckily, Marcos at Solid Angle put me in touch with the folks from Zinkia Entertainment, whose scene was much more challenging than anything we would have created by hand. But we couldn't share that with anyone else either.
That this dataset is actually publicly available could easily push the subdivision/rendering community forward. Modern subdivision research folks (Loop, Niessner, etc.) have blazed past traditional evaluation with some amazing approximation work, including sharpness. Production renderers still don't use those techniques except for previewing, though, both because it's not that big of a win and because researchers haven't had access to scenes like this.
I'm super excited to see this scene (or portions thereof) replace Big Guy :).
Thanks for providing that context. It really sets the scene on why Disney decided to do this, and it's piqued my interest. I didn't realize how challenging this was.
Sure; like the song says, the village of Motunui is all you need.
But the "render just one frame" just means that package is a cut-down version of the "Animation" package which lets you render the entire scene while dispensing with the animation entirely.
I don't know about explicit out-of-core geometry for path tracing in RenderMan (completely different from the old REYES renderer, which is now completely gone), but view-dependent tessellation helps a lot with that (I suspect, but have not yet verified, that the island was tessellated uniformly when they exported it out of their pipeline).

Textures, which can easily be the biggest contributor to data usage, can be handled out of core using texture caches. These work well when two things are done right: tracing camera rays in blocks for coherent access to the textures, and proper use of mip mapping and ray differentials so that far-away surfaces safely access minified, prefiltered texture images. Somewhat counter-intuitively, this also improves overall image quality by bounding the frequency spectrum of the textures, requiring fewer samples to get an accurate representation.
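For the curious, the footprint-to-MIP mapping is a standard trick; here's a minimal sketch (illustrative names and signatures of my own, not RenderMan's actual code):

    #include <algorithm>
    #include <cmath>

    // Pick a prefiltered MIP level from the ray-differential footprint at
    // the hit point: (du/dx, dv/dx) and (du/dy, dv/dy) are the texture-space
    // derivatives per pixel step, scaled to texels by the texture size.
    float mipLevel(float dudx, float dvdx, float dudy, float dvdy,
                   int texWidth, int texHeight) {
        float fx = std::hypot(dudx * texWidth, dvdx * texHeight);
        float fy = std::hypot(dudy * texWidth, dvdy * texHeight);
        float footprint = std::max(fx, fy);  // texels covered per pixel step
        // One MIP level per doubling of the footprint, clamped at the base level.
        return std::max(0.0f, std::log2(footprint));
    }

Far-away surfaces get a large footprint, land on a small prefiltered level, and the cache only ever has to pull in a handful of texels per lookup.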
Per Christensen said in his EGSR keynote last week that typical renders at Pixar use between 35 and 120 GB of RAM and can sometimes load about a TB of data. I don't know how to break this down into acceleration structures, geometry, textures, etc.
RenderMan traces ray streams of a few thousand rays (at least not millions like Disney's Hyperion), right? Does that really help that much with reducing shading costs? It probably makes a large difference for shading the initial hit point, but nowadays it's not uncommon to trace paths of more than 10 bounces. It seems very unlikely to me that it gives a significant speed-up with that level of incoherence and the high complexity of production scenes.
According to Per, the primary rays are traced per block, giving coherent access to the high-resolution textures. Secondary rays, especially at higher orders of reflection, access higher mip map levels where it is safe, and as I understand it, these mostly fit into memory by virtue of being filtered down.
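The usual way to make that "safe" is to let the footprint grow fast after a diffuse bounce; a hedged sketch under my own assumptions (not Pixar's code) of tracking a ray footprint as an origin radius plus a spread angle:

    // Track a ray's footprint as an origin radius plus a spread angle.
    struct RayDiff {
        float originRadius;  // footprint radius at the ray origin
        float spreadAngle;   // footprint growth per unit distance
    };

    // After a diffuse bounce, the footprint at the hit carries over and the
    // spread becomes wide, pushing subsequent lookups to coarse MIP levels.
    RayDiff diffuseBounceDiff(const RayDiff &in, float hitDistance) {
        RayDiff out;
        out.originRadius = in.originRadius + in.spreadAngle * hitDistance;
        out.spreadAngle  = 0.3f;  // wide fixed cone for diffuse; an assumption
        return out;
    }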
However, I don't think they actually do 10 bounces of diffuse reflection just yet. That's just a guess, but I doubt it makes enough of a difference to bother with.
Blocks of primary rays tend to be about 16x16 by default, though this is tunable. MIP maps do greatly help with textures for incoherent rays. But we are also able to use the same trick for geometry: secondary rays may use coarser tessellations than primary rays. Per wrote about this some time ago, but it's something we still leverage. [0]
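In the same spirit as the MIP trick, a sketch of picking a dicing rate from the ray footprint (my own illustration of the idea, not what PRMan actually does internally):

    #include <algorithm>
    #include <cmath>

    // Choose a tessellation level for a cached subdivision patch from the
    // ray's world-space footprint: camera rays with tiny footprints get fine
    // dicing, while wide secondary rays reuse coarse meshes already cached.
    int tessLevel(float footprintWorld, float patchSizeWorld, int maxLevel) {
        float ratio = patchSizeWorld / std::max(footprintWorld, 1e-6f);
        // Halve the dicing rate each time the footprint doubles.
        int level = (int)std::floor(std::log2(std::max(ratio, 1.0f)));
        return std::min(level, maxLevel);
    }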
The number of bounces we do is fairly flexible. [1] We actually default to an upper limit of 10 total rays per path, with an initial limit of 1 diffuse bounce and 2 specular bounces. Depending on the geometry that's hit these initial limits may get extended until the total hits the upper limit. There are cases where the higher limits make a noticeable difference to the look.
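To make the limit scheme concrete, here's a hedged sketch of the bookkeeping as I understand it from the description above (illustrative only, not PRMan's implementation):

    // A path carries per-type bounce counts and limits; the initial limits
    // (1 diffuse, 2 specular) can be extended by what the path hits, but the
    // overall cap of 10 rays per path always wins.
    struct PathState {
        int totalRays = 0;
        int diffuse = 0, specular = 0;
        int diffuseLimit = 1, specularLimit = 2;  // initial per-type limits
    };

    constexpr int kMaxTotalRays = 10;

    // 'surfaceExtends' stands in for geometry that is allowed to raise its
    // per-type limit (an assumption about how the extension is triggered).
    bool allowBounce(PathState &s, bool isDiffuse, bool surfaceExtends) {
        if (s.totalRays >= kMaxTotalRays) return false;  // hard overall cap
        int &depth = isDiffuse ? s.diffuse : s.specular;
        int &limit = isDiffuse ? s.diffuseLimit : s.specularLimit;
        if (depth >= limit) {
            if (!surfaceExtends) return false;
            ++limit;  // extend; still bounded by the total-ray cap above
        }
        ++depth;
        ++s.totalRays;
        return true;
    }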
[0] https://dl.acm.org/citation.cfm?id=2927384
[1] http://pbrt.org/
[2] https://blog.yiningkarlli.com/2018/07/disney-animation-datas...
[3] https://s3-us-west-1.amazonaws.com/assets.disneyanimation.co...