The real-time ray tracing is a massive achievement. Tbf I never thought it would be possible in my lifetime (and it really isn’t, it’s a lot of impressive smoke and mirrors like reservoir sampling!) but this is really close to a path-traced look considering how few samples per pixel it really uses.
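For anyone curious what that “smoke and mirrors” refers to: the core trick is weighted reservoir sampling, which lets you pick one light candidate with probability proportional to its weight without ever keeping the whole candidate list around. Here’s a minimal, self-contained sketch of the idea; the light weights and toy PRNG are invented for illustration and aren’t this demo’s actual code.

```rust
// A minimal sketch of weighted reservoir sampling (the building block behind
// ReSTIR-style light sampling). All weights/values are made up for illustration.
struct Reservoir {
    sample: Option<usize>, // index of the currently chosen light candidate
    w_sum: f64,            // running sum of all candidate weights seen so far
}

impl Reservoir {
    fn new() -> Self {
        Reservoir { sample: None, w_sum: 0.0 }
    }

    /// Consider one candidate: keep it with probability weight / w_sum, so the
    /// survivor ends up chosen proportionally to its weight without ever
    /// storing the full candidate list.
    fn update(&mut self, candidate: usize, weight: f64, rng: &mut impl FnMut() -> f64) {
        self.w_sum += weight;
        if weight > 0.0 && rng() < weight / self.w_sum {
            self.sample = Some(candidate);
        }
    }
}

fn main() {
    // Hypothetical unshadowed-contribution weights for four lights at one pixel.
    let light_weights = [0.1, 2.5, 0.4, 1.0];

    // Tiny xorshift PRNG so the sketch needs no external crates.
    let mut state: u64 = 0x853c49e6748fea9b;
    let mut rng = move || {
        state ^= state << 13;
        state ^= state >> 7;
        state ^= state << 17;
        (state >> 11) as f64 / (1u64 << 53) as f64
    };

    let mut reservoir = Reservoir::new();
    for (i, &w) in light_weights.iter().enumerate() {
        reservoir.update(i, w, &mut rng);
    }
    println!("picked light {:?} out of total weight {}", reservoir.sample, reservoir.w_sum);
}
```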
All that aside, what really stands out with this demo is how nice the dev experience is. Lots of bits and pieces that didn’t exist 10 or 20 years ago. It’s git clone and cargo run. The threshold for tinkering with a repo like this compared to an old C++ equivalent is extremely low. I had to try making the day/night sky dynamic and it was literally a two minute job even though I’m a rust newbie.
No, I mean it in the sense of using ray tracing for the full global illumination of a full game scene. Obvs there are lots of titles, some a few years old, where RTX is used for specific effects like specular reflection or ambient occlusion but where the lighting is still traditional.
Ray marching an SDF-based scene is also pretty easy, but that's more like a world inside a shader than anything resembling a full game scene.
The astute among you will notice that the outputs look more or less indistinguishable from modern video games -- which is to say, they don't look real.
That's not to downplay the achievement. It's to say that I hope one of you takes up the task of making truly real, indistinguishable-from-TV videos generated in real time.
It may seem like an impossible dream. But so was realtime raytracing, at the start of my career. And there are a few promising signs that it's coming within reach. ML trained on video, applied to gamedev, seems like a matter of time. You could do it right now.
Since the talking point of "We don't want real-looking outputs!" always comes up, I suggest ignoring that detail. It's worth doing, even if nobody wanted it, just to show biology who's boss. And incidentally, you'll want to become familiar with how to run blinded experiments, since you'll realize that your own judgement isn't any good when deciding how "real" something looks. The sole test is whether human observers can distinguish your outputs from real outputs no better than random chance.
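In case it's useful, here's what "no better than random chance" cashes out to in practice, as a minimal sketch assuming a two-alternative setup (each blinded trial shows one real and one generated clip, and the observer picks which one is real); the trial counts are invented for illustration.

```rust
// A minimal sketch of the blinded test described above: count how often
// observers correctly pick the real clip out of a real/generated pair, and ask
// whether that count is explainable by pure guessing. The numbers are made up.

/// Probability of getting `correct` or more right out of `trials` if the
/// observer were flipping a coin (p = 0.5): an exact binomial upper tail.
fn binomial_upper_tail(trials: u64, correct: u64) -> f64 {
    let mut tail = 0.0;
    for k in correct..=trials {
        // C(trials, k), built multiplicatively to avoid huge factorials.
        let mut choose = 1.0_f64;
        for i in 0..k {
            choose *= (trials - i) as f64 / (i + 1) as f64;
        }
        tail += choose * 0.5_f64.powi(trials as i32);
    }
    tail
}

fn main() {
    // Say 100 blinded trials, and observers picked the real clip 62 times.
    let (trials, correct) = (100u64, 62u64);
    let p = binomial_upper_tail(trials, correct);
    println!("chance of >= {correct}/{trials} correct by guessing: {p:.4}");
    // A small value here means observers can still tell the difference;
    // "indistinguishable" means results that keep looking like coin flips.
}
```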
The "photorealism" of that ML demo was vastly overstated. A lot of post processing effects in games are there to (somewhat ironically) emulate defects of cameras: vignetting, chromatic aberration, bloom, flares etc.
I think this ML demo got there by effectively emulating a low-res, poorly white-balanced, low-framerate dash cam. I think those "defects" make the image read more like the output of a real (albeit shitty) camera system.
It would be more interesting to see what kind of output we could get at 4K 60fps and properly white balanced.
Not to diminish their work, but it just changes the look of the game too much, making it more dreary. I suspect this happened because the training set seems to be based on Northern European cities, whereas the game is set in LA. I wish there were a version trained on LA, or at least a city more similar in feel. It certainly would make for a more intuitive comparison.
Realism is only one of the benefits of global illumination; the other (perhaps more important) one is the massive increase in productivity when creating environments. Having dynamically lit environments in engines with pre-baked lighting requires carefully placing emitters, triggering them on geometry changes, and so on.
See, when people argue “I want dynamic and destructible environments”, the problem is to a large extent the lighting. If you can blow a hole in the roof but the sun doesn’t shine in, you can tell something is wrong. So you can’t make a hole in the roof.
With GI you can collapse part of the roof and the interior lighting “just works”.
I'm not sure people appreciate how many hours of "hacks" are going to disappear nearly overnight when real-time ray tracing becomes commonplace. Things as conceptually simple as "soft shadows" have multiple manifestations and implementations, and require dozens of hacks to pull off believably.
Slightly confused about what you mean here. As I understand it, ray tracing is just one of a few different global illumination approaches. In my graphics course many years ago, we implemented radiosity, photon mapping and path tracing, and they were better for some kinds of scenes than straightforward ray tracing. Some of these approaches support soft shadows very naturally, but ray tracing is not one of them: good soft shadows with ray tracing still involve some hacks.
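For what it's worth, the usual way around that is stochastic: instead of one shadow ray to a point light (which gives hard shadows), cast several shadow rays to random points on an area light and use the visible fraction. A minimal sketch, with a stand-in occlusion query and toy PRNG rather than any particular renderer's API:

```rust
// A minimal sketch of soft shadows via stochastic shadow rays to an area light.
// The occlusion query and PRNG are stand-ins, not a real renderer's API.
type Vec3 = [f64; 3];

/// Stand-in occlusion query; a real renderer would trace the segment
/// against its acceleration structure here.
fn occluded(_from: Vec3, _to: Vec3) -> bool {
    false
}

/// Fraction of a rectangular area light (corner + two edge vectors) visible
/// from `point`: 1.0 = fully lit, 0.0 = fully shadowed, in between = penumbra.
fn soft_shadow_factor(
    point: Vec3,
    corner: Vec3,
    edge_u: Vec3,
    edge_v: Vec3,
    samples: u32,
    rng: &mut impl FnMut() -> f64,
) -> f64 {
    let mut visible = 0u32;
    for _ in 0..samples {
        let (u, v) = (rng(), rng());
        // Jittered point on the light's surface.
        let target = [
            corner[0] + u * edge_u[0] + v * edge_v[0],
            corner[1] + u * edge_u[1] + v * edge_v[1],
            corner[2] + u * edge_u[2] + v * edge_v[2],
        ];
        if !occluded(point, target) {
            visible += 1;
        }
    }
    visible as f64 / samples as f64
}

fn main() {
    // Toy PRNG so the sketch runs without external crates.
    let mut state: u64 = 42;
    let mut rng = move || {
        state = state.wrapping_mul(6364136223846793005).wrapping_add(1);
        (state >> 11) as f64 / (1u64 << 53) as f64
    };
    let f = soft_shadow_factor([0.0; 3], [1.0, 2.0, 0.0], [0.5, 0.0, 0.0], [0.0, 0.0, 0.5], 64, &mut rng);
    println!("visible fraction of the light: {f}");
}
```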
Does this harm Nvidia? I don’t do graphics programming, but I would imagine they have a lot of tech to make shadows and many other hacks run fast. Does ray tracing simplify game rendering even though it would require more FLOPS?
> Does ray tracing simplify game rendering even though it would require more FLOPS?
In theory it will; the VFX/CG industry went through the rasterisation -> ray tracing change ~10 years ago, as it simplified things (at least in the pipeline, and in the renderers themselves to some degree). It was slower in terms of compute time than rasterisation, but made up for that by not needing things like irradiance/point-cloud caches, shadow map passes, etc. (the removal of which made total iteration time faster), and the fidelity it allowed was quite a bit better (global GI, accurate layered materials traced through the surface, volume SSS, etc.).
In terms of renderer complexity, while you do have to specialise how you use the rays (ray casting/tracing is really just a visibility query), you can use them for camera visibility, reflection, refraction, shadows and environment lighting quite easily and in a very similar manner (especially with path tracing). With rasterisation, you need rasterisation itself for the main tri/quad drawing, then (cascading) shadow maps for shadows, then cube mapping for environment lighting/reflection, and these are very different techniques/algorithms.
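To illustrate the contrast concretely, here's a minimal sketch of the "a ray is just a visibility query" idea, where camera visibility, shadow and reflection queries are all the same trace() call; every name, direction and value here is an invented stand-in, not any real renderer's API.

```rust
// A minimal sketch: one trace() primitive serves every effect, only the ray
// you build and what you do with the hit differ. All types are stand-ins.
struct Ray { origin: [f32; 3], dir: [f32; 3] }
struct Hit { t: f32 }

/// The single shared primitive: "what, if anything, does this ray hit first?"
/// A real renderer answers this with BVH traversal; here it is a stub.
fn trace(_ray: &Ray) -> Option<Hit> {
    None
}

fn shade(point: [f32; 3]) -> [f32; 3] {
    // Shadow query: another trace, toward the light (direction hard-coded here).
    let shadow_ray = Ray { origin: point, dir: [0.0, 1.0, 0.0] };
    let lit = trace(&shadow_ray).is_none();

    // Reflection query: another trace, along the mirrored direction (also hard-coded).
    let reflection_ray = Ray { origin: point, dir: [1.0, 0.0, 0.0] };
    let _reflection = trace(&reflection_ray);

    // Refraction, ambient occlusion and GI bounces are all the same call again.
    if lit { [1.0, 1.0, 1.0] } else { [0.1, 0.1, 0.1] }
}

fn main() {
    // Primary (camera) visibility is the very same query.
    let primary = Ray { origin: [0.0; 3], dir: [0.0, 0.0, -1.0] };
    let color = match trace(&primary) {
        Some(hit) => shade([0.0, 0.0, -hit.t]),
        None => [0.05, 0.05, 0.1], // environment / background lookup would go here
    };
    println!("pixel colour: {:?}", color);
}
```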
The aesthetic kinda reminds me of the Jet Car Stunts iOS app[1]; I wonder if that was an inspiration. JCS had pre-baked lighting textures, which meant it could play well on ancient iOS devices while still looking good, though of course not quite as good as real-time ray tracing.
There's a lack of shadows at the very base of the white columns, which looks very much like the occlusion rays (used when doing NEE to determine whether the surface point is occluded for the light being sampled) have too large a ray epsilon or offset applied...
i.e. the very base of the white columns is pure white, visually disconnected from the rest of the columns.
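For anyone who hasn't run into this, here's a minimal sketch of what that epsilon/offset does and why an overly large value eats contact shadows like these; the names and the numbers in the comments are illustrative, not the demo's actual code.

```rust
// A minimal sketch of the ray-offset issue: shadow (occlusion) rays are
// started a small epsilon away from the surface to avoid self-intersection,
// but if that epsilon is too large, occluders very close to the hit point
// (like a column meeting the floor) are stepped over and the contact shadow
// disappears. Everything here is an illustrative stand-in.
struct ShadowRay {
    origin: [f32; 3],
    dir: [f32; 3],
    t_min: f32, // where along the ray intersection testing starts
    t_max: f32, // distance to the light sample
}

fn make_shadow_ray(hit_point: [f32; 3], normal: [f32; 3], to_light: [f32; 3], light_dist: f32, epsilon: f32) -> ShadowRay {
    // Push the origin off the surface along the normal and clip the start of
    // the ray. With epsilon ~1e-4 (scene-scale dependent) contact shadows
    // survive; with something like 0.05 in a metres-scale scene, any occluder
    // within 5 cm of the hit point is ignored.
    let origin = [
        hit_point[0] + normal[0] * epsilon,
        hit_point[1] + normal[1] * epsilon,
        hit_point[2] + normal[2] * epsilon,
    ];
    ShadowRay { origin, dir: to_light, t_min: epsilon, t_max: light_dist - epsilon }
}

fn main() {
    let ray = make_shadow_ray([0.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.3, 0.9, 0.3], 10.0, 1e-4);
    println!("shadow ray {:?} -> {:?}, t in [{}, {}]", ray.origin, ray.dir, ray.t_min, ray.t_max);
}
```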
It's far from the first scene done in any ray tracer; rather, it was an early test scene for radiosity (diffuse-only global illumination, before path tracing was developed).
I'd guess the Utah teapot; it's been around since 1975. The Cornell box is from 1984, and test scenes for materials are in some ways a pretty recent thing, at least in how they're used now; to me the modern versions started with Maxwell Render.
Ray tracing is clever. Good for realism. But at the end of the day, these are games, not simulations. Periodically being blinded by reflective materials doesn't sound like my kinda fun.
Real-time ray traced reflections are an impressive trick, but real-time global illumination is where this tech shines IMO. This Digital Foundry video (https://youtu.be/NbpZCSf4_Yk) is a good overview of why this is such a huge leap forward for developers and players alike.
Metro: Exodus EE (the game discussed in the video) is the only game I've played that really shows what 'RTX' and similar GPU hardware is capable of, IMO. Even CP77, which was supposed to push modern systems to their limit, feels like its ray tracing and GI stuff is janky and tacked-on.
Damn, that really makes a massive, massive difference. Makes me realise that the reason I can’t see anything in dark scenes in video games is technical limitations, not artistic effect!
One of the things that hadn't occurred to me before watching this video is how much easier GI can make environment artists' jobs. Rather than "faking it" with a bunch of artificially placed light sources, they can just light the scene naturally and the effect should be as good as or better than the old techniques.
Ray tracing emulates photons, so it doesn't just produce reflections, it determines every visible pixel on the screen. Shadows, light, dark reflections, glow, transparency, glossiness, shininess... everything.