
This is only partly true. Path tracing is an embarrassingly parallel problem, but only under the assumption that every worker can access the entire scene description.

When a light ray strikes the ceiling, it can bounce off towards a vase sitting on a diffuse table, which scatters the light in all directions. The calculation for that one ray therefore needs to know the shape and material (BRDF) of every object the ray interacts with.

Before a ray is sent from the camera into the scene, it is unknown which objects it will hit along the way, which, as you can imagine, makes this a difficult problem to optimize for. The usual solution is simply to distribute the entire scene to every machine.

On a single computer this is no problem: the entire scene is usually present in memory. Across multiple computers it is harder, since you end up distributing large amounts of data (scenes can be multiple gigabytes).
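To make the "every ray needs the whole scene" point concrete, here is a minimal sketch of a path tracer's bounce loop for a toy scene of diffuse spheres. The scene layout, the bounce limit, and the crude random scattering are all made up for illustration; the thing to notice is that every bounce intersects against every object, so there is no natural way to partition the scene across machines by ray.

```python
import random

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def sub(a, b):
    return tuple(x - y for x, y in zip(a, b))

def hit_sphere(center, radius, origin, direction):
    """Distance t along the ray to the sphere, or None on a miss.
    Assumes `direction` is normalized."""
    oc = sub(origin, center)
    b = 2.0 * dot(oc, direction)
    c = dot(oc, oc) - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0:
        return None
    t = (-b - disc ** 0.5) / 2.0
    return t if t > 1e-4 else None

def trace(scene, origin, direction, depth=0):
    """Scene is a list of (center, radius, albedo) spheres."""
    if depth >= 4:                       # bounce limit
        return 0.0
    # Every bounce must test EVERY object in the scene: before the
    # ray is traced, nothing tells us which object it will hit.
    closest = None
    for center, radius, albedo in scene:
        t = hit_sphere(center, radius, origin, direction)
        if t is not None and (closest is None or t < closest[0]):
            closest = (t, albedo)
    if closest is None:
        return 1.0                       # ray escaped: uniform "sky" light
    t, albedo = closest
    hit = tuple(o + t * d for o, d in zip(origin, direction))
    # Diffuse bounce: scatter in a random direction (placeholder
    # sampling, not a proper cosine-weighted hemisphere).
    d = [random.gauss(0, 1) for _ in range(3)]
    n = sum(x * x for x in d) ** 0.5
    d = tuple(x / n for x in d)
    return albedo * trace(scene, hit, d, depth + 1)
```

Each pixel/sample is independent (the embarrassingly parallel part), but the inner loop over `scene` is why each render node conventionally holds a full copy of the scene.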




It's really just a bandwidth issue, and VFX studios do this all the time with their render farms. Textures are the main problem: product/archviz scenes like the Ikea stuff are generally very clean and don't have THAT many textures, whereas in VFX everything's dirty and generally very detailed, so you're often pulling in >300 GB of textures for a medium-sized scene.

And at least in VFX everything's generally done lazily, so textures are only read as and when they're needed, if they're not cached already. There's a bit of overhead to doing this (locking if the cache is global, or duplicated memory if it's per-thread, which is faster since there's no locking), but it solves the problem very nicely. On top of that, the textures are mipmapped, so for things like diffuse rays you only need to pull in very low-res approximations of the image instead of point-sampling, say, 8K images, which helps a lot too.
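The lazy, per-thread, mipmapped caching described above might look roughly like the sketch below. All the names (`TextureCache`, `load_mip`), the fake 8K resolution, and the footprint-to-mip-level rule are invented for illustration; a production renderer would use something like OpenImageIO's texture system instead.

```python
import threading

def load_mip(path, level):
    """Stand-in for reading one mip level of a texture from disk.
    Level 0 is full resolution; each level halves it."""
    full_res = 8192                      # pretend every texture is 8K
    res = max(1, full_res >> level)
    return {"path": path, "level": level, "res": res}

class TextureCache:
    """Per-thread cache: no locking needed, at the cost of possibly
    duplicating entries across threads (the trade-off mentioned above)."""

    def __init__(self):
        self._local = threading.local()

    def _store(self):
        if not hasattr(self._local, "cache"):
            self._local.cache = {}
        return self._local.cache

    def lookup(self, path, ray_footprint):
        # A diffuse bounce ray has a wide footprint, so a coarse mip
        # is plenty; sharp camera rays need the full-res level.
        # (The threshold and levels here are arbitrary.)
        level = 0 if ray_footprint < 1e-3 else 6
        key = (path, level)
        cache = self._store()
        if key not in cache:             # lazy: load only on first use
            cache[key] = load_mip(path, level)
        return cache[key]
```

So a diffuse ray touching a wall pulls in a 128x128 approximation rather than the full 8K image, and only the levels actually sampled ever leave the disk, which is what keeps the >300 GB of textures from becoming 300 GB of I/O per frame.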



