
I’m learning about rasterization-based computer graphics at the moment, and it somehow bothers me that all of it might be obsolete in a couple of years when we can do path tracing in real time. It’s such a fast-moving field.


I don't think raytracing will kill off rasterization for the crucial "first bounce" step anytime soon, at least not if we're talking about increasingly detailed, ever-content-richer realtime 60FPS game renderers. Usually, as soon as RT gets semi-realtime for simplistic scenes at a rather-too-low resolution (without AA), consumers move on to double the previous standard resolution, which quadruples the pixel workload (the move from FHD to retina/hiDPI, say), and we're back to square one. Especially if you find you suddenly need 4 split-screens, stereoscopic, at 4K, with full shading fidelity.

Rasterization is just too cool a hack to avoid tracing primary rays. In fact, it's not rasterization by itself per se, but rather all the hardware and software hacks that have evolved around it over two decades that make it so: various GPU- and CPU-based culling methods, early and hierarchical Z-buffering, and so on. Today's realtime rendering pipeline is an amazing combination of cool techniques that keeps getting better. Raytracing folks say "pff, with real raytracing we wouldn't need all those dirty hacks", but for one thing most of those hacks come with the GPU or are well-known and intuitive to implement, and for another, to get anywhere near realtime the raytracers need another set of even more numerous (and less comprehensible, less intuitive, less hardware-accelerated) hacks of their own.
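
To make the "cool hack" part concrete, here's a toy sketch of mine (plain Python, nothing from any actual engine): the primary-visibility problem a raytracer solves by shooting one ray per pixel is solved here by scanning each triangle and keeping the nearest depth per pixel, i.e. the plain Z-buffer.

    WIDTH, HEIGHT = 64, 64
    depth_buffer = [[float("inf")] * WIDTH for _ in range(HEIGHT)]
    color_buffer = [[(0, 0, 0)] * WIDTH for _ in range(HEIGHT)]

    def edge(ax, ay, bx, by, px, py):
        # Signed area term; its sign tells which side of edge (a, b) the point p is on.
        return (px - ax) * (by - ay) - (py - ay) * (bx - ax)

    def rasterize_triangle(v0, v1, v2, color):
        # Vertices are (x, y, z), already projected to screen space.
        (x0, y0, z0), (x1, y1, z1), (x2, y2, z2) = v0, v1, v2
        area = edge(x0, y0, x1, y1, x2, y2)
        if area == 0:
            return
        min_x = max(0, int(min(x0, x1, x2)))
        max_x = min(WIDTH - 1, int(max(x0, x1, x2)))
        min_y = max(0, int(min(y0, y1, y2)))
        max_y = min(HEIGHT - 1, int(max(y0, y1, y2)))
        for y in range(min_y, max_y + 1):
            for x in range(min_x, max_x + 1):
                # Barycentric coverage test via edge functions.
                w0 = edge(x1, y1, x2, y2, x, y) / area
                w1 = edge(x2, y2, x0, y0, x, y) / area
                w2 = edge(x0, y0, x1, y1, x, y) / area
                if w0 < 0 or w1 < 0 or w2 < 0:
                    continue
                z = w0 * z0 + w1 * z1 + w2 * z2   # interpolated depth
                if z < depth_buffer[y][x]:        # the Z-buffer "hack" in one line
                    depth_buffer[y][x] = z
                    color_buffer[y][x] = color

    # Two overlapping triangles; the nearer one wins per pixel, no rays involved.
    rasterize_triangle((5, 5, 0.3), (60, 10, 0.3), (20, 60, 0.3), (255, 0, 0))
    rasterize_triangle((10, 20, 0.6), (55, 25, 0.6), (30, 55, 0.6), (0, 255, 0))
    covered = sum(z != float("inf") for row in depth_buffer for z in row)
    print(covered, "pixels resolved without tracing a single ray")

Everything the hardware adds on top (early-Z, hierarchical Z, the culling passes) is essentially about running that inner loop, and the shading that follows it, as rarely as possible.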

All the shading stages are also designed around rasterization and have matured over the last decade. It's good fun to write a pixel-shader-based raytracer, but that alone doesn't make it the better fit for current/next-gen gamedev at all.

The oft-anticipated "hybrid" approach, however, is finally arriving. Once you have some "depth-aware screen space" (whether voxelized or N layers deep), you can shoot specialized and rather simple rays in a post-process to create diverse, outstanding effects, from much smoother water surfaces to, of course, "screen-space raytraced reflections" (SSRR).
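
For illustration, here's a toy sketch of that screen-space idea (my own made-up buffers and numbers, not any engine's actual shader): march a reflection ray through the depth buffer and treat the first sample that lands just behind the stored surface as the hit, then reuse the color already rendered there.

    def screen_space_reflect(depth_buffer, color_buffer, start, direction,
                             max_steps=64, thickness=0.05):
        # start/direction are in screen space: (x, y, depth) with x, y in pixels.
        # Returns the reflected color, or None if the ray leaves the screen or misses.
        height, width = len(depth_buffer), len(depth_buffer[0])
        x, y, z = start
        dx, dy, dz = direction
        for _ in range(max_steps):
            x, y, z = x + dx, y + dy, z + dz
            xi, yi = int(x), int(y)
            if not (0 <= xi < width and 0 <= yi < height):
                return None                     # ray left the screen: no data to reflect
            scene_z = depth_buffer[yi][xi]
            # Hit when the marched ray has gone just behind the surface in the buffer.
            if scene_z <= z <= scene_z + thickness:
                return color_buffer[yi][xi]
        return None

    # Toy 4x4 "screen": the top two rows are a wall nearer to the camera than the floor.
    depth = [[0.4] * 4, [0.4] * 4, [0.9] * 4, [0.9] * 4]
    color = [[(200, 50, 50)] * 4, [(200, 50, 50)] * 4,
             [(50, 50, 200)] * 4, [(50, 50, 200)] * 4]
    hit = screen_space_reflect(depth, color, start=(1.0, 3.0, 0.9),
                               direction=(0.0, -1.0, -0.25))
    print(hit)  # (200, 50, 50): the wall's color shows up in the floor's reflection

The obvious limitation is right there in the return None: anything not already on screen can't be reflected, which is exactly why this is a post-process complement to the rasterized frame rather than a replacement for real raytracing.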


When we are able to trace multiple rays with multiple bounces per pixel in real time, would the first bounce really carry so much weight, compared to the subsequent bounces, that it would justify a separate rendering technique?

Besides that, what would you say to ilaksh’s comment? https://news.ycombinator.com/item?id=8060197


He's referring to demos that have been around for years, and I've been following the space too. His argument doesn't really refute any of my points. Knowing how rasterizers work is never going to hurt you, quite the opposite: if you're serious about computer graphics, you'll really want to have internalized both raytracing basics and rasterization basics. If I didn't know how triangle meshes are fed into the GPU and processed all the way to the final fully lit and shaded output pixel for any one of my 94 favourite games, I'd feel pretty uneasy.

Rasterization will always be orders of magnitude faster by necessity: that means when raytracing can finally render ca. 2004 scene complexity (think GTA:SA; hint, it still can't, even at quarter resolution), rasterization can render ca. 2016 (or 2018, or whenever) scene complexity at full resolution. Guess what the folks at Rockstar, Ubisoft and Crytek want to do? Throw more detail, more content variation, more procedurally generated or procedurally perturbed geometry at the screen at 60+ FPS. Sure, Brigade can reach almost/barely 30 FPS at some low resolution in a small, limited, preprocessed scene, and then there's no room left for any physics, AI, animated crowds and so on. They're doing important work, and one day the big payoff will come, but for the next 10 years knowing how our rasterizers work will not be wasted at all.


After seeing the Brigade Engine and LuxRender path-tracing demos, I think you are 100% correct to be worried, and beyond that, you should avoid wasting your time as much as possible. Actually, when you say "learning", that leads me to believe you are enrolled in some course, under some curriculum, at some school.

I don't believe that any individual school, professor or course can really be up to date with the most leading-edge practices, or even theory, especially in high technology.

I think the only really interesting area in computer graphics right now is doing path tracing in hardware.

There is a massive culture that is still interested in other things, but I think they are behind the times.

Sooner or later nVidia or ATI/AMD will put native path-tracing capabilities into their graphics cards: not like OptiX, where you program the existing architecture for it, but circuits and an architecture truly optimized for path tracing a scene given a set of geometry, materials and lights.

At that point all of the rasterization and lighting calculation tricks will be obsolete.

This will probably be buried because there is an enormous amount of effort going into those old-fashioned approaches, but oh well, I have to say it.


Continue learning that stuff: the algorithms will be useful, and you may find yourself working on an embedded device without the CPU power to do path/ray tracing.




