Hacker News
Image Space Photon Mapping: The insanely great near future of computer graphics (williams.edu)
56 points by aresant on Feb 20, 2010 | 5 comments



This is definitely very cool. I was actually fortunate enough to attend a presentation that the first author gave just a couple of days ago. It was a good talk and the slides were substantially based on the ones from the project page, so I'd certainly recommend checking those out.

A couple of notes from the talk (paraphrased):

- Image-space photon mapping solves the same problem as traditional photon mapping; it does not introduce any extra simplifying assumptions. The performance improvements come primarily from (a) acceleration of the first bounce using rasterization, and (b) the improved radiance sampling technique yielding good results with 100x fewer photons than traditional methods.
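For context on point (b), the classic photon-map radiance estimate (Jensen's density estimator) gathers the k photons nearest a surface point and divides their total power by the area of the enclosing disc. This is a minimal brute-force sketch of that traditional estimator, not the paper's improved image-space variant; the function and parameter names are my own, and a real implementation would use a kd-tree instead of a full sort:

```python
import math

def radiance_estimate(x, photons, k=50):
    """Estimate reflected radiance at surface point x from a photon map.

    photons: list of (position, power) pairs, positions as 3-tuples.
    Gathers the k nearest photons and divides their summed power by
    the area of the disc that just encloses them (pi * r^2).
    """
    def dist2(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q))

    # Brute-force nearest-neighbor search; fine for a sketch only.
    nearest = sorted(photons, key=lambda ph: dist2(ph[0], x))[:k]
    r2 = dist2(nearest[-1][0], x)        # squared radius of bounding disc
    total_power = sum(power for _, power in nearest)
    return total_power / (math.pi * r2)  # flux per unit area
```

The paper's claim is essentially that a smarter version of this estimate stays accurate with ~100x fewer photons in the map.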

- The slides that I saw included some extra material, including a list of other common numerical problems that can be relatively easily formulated as special cases of the rendering equation. Since image-space photon mapping produces randomized solutions to the full rendering equation, there may be extensions from this work to other important problems. (He drew an analogy between convolution and rendering a scene while computing only the first bounce.)
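For readers who haven't seen it, the rendering equation referred to here is Kajiya's (1986) formulation: outgoing radiance at a point is emitted radiance plus incoming radiance integrated over the hemisphere, weighted by the BRDF and the cosine term:

```latex
L_o(x, \omega_o) = L_e(x, \omega_o)
  + \int_{\Omega} f_r(x, \omega_i, \omega_o)\, L_i(x, \omega_i)\,
    (\omega_i \cdot n)\, d\omega_i
```

A single-bounce renderer evaluates this integral once with direct lighting as L_i, which is why it resembles a convolution of the light against the BRDF.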

- Q: It's not clear to me exactly how to convert some of the other numerical problems you mentioned into scenes for rendering. Which of your contributions do you think might be most valuable to people working on these problems? A: The radiance estimate. I think the lesson to draw from that is, no matter what your problem is, it's a good idea to try using fewer samples and extracting more information from each one.

- Q: Can you handle volumetric effects or sub-surface scattering? A: No, we assume that all scattering occurs at an object surface.

- Q: Why are you attempting to achieve physical accuracy? Studies have shown that people are really bad at judging whether the scene they're looking at is physically accurate. A: You don't want your artists to have to understand the guts of your rendering techniques. When you start using hacks to capture only a few perceptually relevant effects, the result can be fragile, and as soon as the artists change anything the scene looks terrible. If your renderer is physically correct, though, everything just works.

EDIT: Made point (a) in the first note a little more precise.


> I was actually fortunate enough to attend a presentation that the first author gave just a couple of days ago.

Extremely off topic: I remember implementing an algorithm by the second author almost 10 years ago (before soul-crushing college and work), from when he was at the University of North Carolina at Chapel Hill. The algorithm was called "View-Dependent Simplification of Arbitrary Polygonal Environments" (from here http://luebke.us/#Papers). It was quite awesome – it allowed view-dependent simplification, even for indoor environments.


Neat! I can't say I'm familiar with the paper, but I'll check it out sometime.


Awesome. Don't miss the video link. This is very impressive stuff...


There are a few video links, but the main demonstration (with the explanatory voice-over) is this one: http://graphics.cs.williams.edu/papers/PhotonHPG09/ISPM-HPG0... (~5 min, 41MB)



