Cornell McRay T'Racing (github.com/h3r2tic)
171 points by tosh on Dec 23, 2021 | 43 comments



The real-time ray tracing is a massive achievement. Tbf I never thought it would be possible in my lifetime (and it really isn't: it's a lot of impressive smoke and mirrors like reservoir sampling!), but this gets really close to a path-traced look considering how few samples per pixel it actually uses.
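To give a flavour of the smoke and mirrors: reservoir sampling (the core of ReSTIR-style light sampling) boils down to streaming over many candidate lights and keeping exactly one, with probability proportional to its weight. A minimal sketch of just that idea -- not kajiya's actual code, and the weights and "random" numbers here are made up:

```rust
// Minimal sketch of weighted reservoir sampling: stream over many light
// candidates, keep exactly one, chosen with probability proportional to its
// weight. Not kajiya's actual ReSTIR code -- just the core idea.

struct Reservoir {
    chosen: Option<usize>, // index of the light we kept
    weight_sum: f32,       // running sum of all candidate weights
}

impl Reservoir {
    fn new() -> Self {
        Reservoir { chosen: None, weight_sum: 0.0 }
    }

    // `rand01` is any uniform random number in [0, 1).
    fn update(&mut self, light_index: usize, weight: f32, rand01: f32) {
        self.weight_sum += weight;
        // Replace the kept sample with probability weight / weight_sum.
        if self.weight_sum > 0.0 && rand01 < weight / self.weight_sum {
            self.chosen = Some(light_index);
        }
    }
}

fn main() {
    // Toy scene: per-light importance weights (e.g. unshadowed contribution).
    let light_weights = [0.1_f32, 2.5, 0.3, 1.1];

    // A tiny deterministic "random" sequence, just so the sketch runs.
    let fake_rands = [0.7_f32, 0.4, 0.9, 0.2];

    let mut reservoir = Reservoir::new();
    for (i, (&w, &r)) in light_weights.iter().zip(fake_rands.iter()).enumerate() {
        reservoir.update(i, w, r);
    }

    println!("kept light {:?} out of {}", reservoir.chosen, light_weights.len());
}
```

Per pixel you only ever pay for the one sample you kept, no matter how many candidates streamed through, which is part of how so few rays per pixel can still look path-traced.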

All that aside, what really stands out with this demo is how nice the dev experience is. Lots of bits and pieces that didn't exist 10 or 20 years ago. It's git clone and cargo run. The threshold for tinkering with a repo like this, compared to an old C++ equivalent, is extremely low. I had to try making the day/night sky dynamic, and it was literally a two-minute job even though I'm a Rust newbie.


If you qualify RTX as "realtime raytracing" then it has been possible for a long time now. Of course this uses way more smoke and mirrors :)

https://www.youtube.com/watch?v=-Qw5bavWxBs

https://web.archive.org/web/20110519050817/http://exceed.hu/...


No, I mean it in the sense of using ray tracing for the full global illumination of a full game scene. Obvs there are lots of titles, some a few years old, where RTX is used for specific effects like specular reflections or ambient occlusion, but where the lighting is otherwise still traditional.

Ray marching an SDF-based scene is also pretty easy, but that's more like a world inside a shader than anything resembling a full game scene.


Metro Exodus Enhanced Edition uses RT for basically all lighting, doesn't it? https://www.youtube.com/watch?v=NbpZCSf4_Yk

Same with Minecraft RTX https://www.minecraft.net/en-us/updates/ray-tracing

Though both still calculate the lighting over several frames, so if you want to be pedantic you could argue that it's not strictly real time.
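Roughly what "over several frames" means in practice is temporal accumulation: each frame's noisy lighting gets blended into a per-pixel history buffer. A toy sketch of just the blend (real engines also reproject the history to follow camera and object motion, which is omitted here):

```rust
// Minimal sketch of temporal accumulation: each frame's noisy lighting is
// blended into a per-pixel history buffer, so the image converges over
// several frames. Reprojection of the history to follow motion is omitted.

fn accumulate(history: &mut [f32], current_frame: &[f32], blend: f32) {
    for (h, &c) in history.iter_mut().zip(current_frame.iter()) {
        // Exponential moving average: keep (1 - blend) of the history,
        // take `blend` of the new, noisy sample.
        *h = *h * (1.0 - blend) + c * blend;
    }
}

fn main() {
    // One "pixel" whose true value is 1.0, fed with noisy samples.
    let mut history = vec![0.0_f32];
    let noisy_samples = [1.3_f32, 0.8, 1.1, 0.9, 1.0];

    for s in noisy_samples {
        accumulate(&mut history, &[s], 0.2);
        println!("accumulated: {:.3}", history[0]);
    }
}
```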


The astute among you will notice that the outputs look more or less indistinguishable from modern video games -- which is to say, they don't look real.

That's not to downplay the achievement. It's to say that I hope one of you takes up the task of making truly real, indistinguishable-from-TV videos generated in real time.

It may seem like an impossible dream. But so was realtime raytracing, at the start of my career. And there are a few promising signs that it's coming within reach. ML trained on video, applied to gamedev, seems like a matter of time. You could do it right now.

Since the talking point of "We don't want real-looking outputs!" always comes up, I suggest ignoring that detail. It's worth doing, even if nobody wanted it, just to show biology who's boss. And incidentally, you'll want to become familiar with how to run blinded experiments, since you'll realize that your own judgement isn't any good when deciding how "real" something looks. The sole test is whether human observers can distinguish your outputs from real outputs no better than random chance.
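For what it's worth, the usual shape of such a blinded test is a two-alternative forced choice: each trial shows a real clip and a rendered clip, the observer guesses which is real, and you check whether the hit rate is distinguishable from coin-flipping. A rough sketch of the scoring, with made-up numbers and no stats library:

```rust
// Rough sketch of scoring a blinded two-alternative forced-choice test:
// each trial shows a real clip and a rendered clip, the observer guesses
// which is real. If the number of correct guesses is consistent with
// coin-flipping (p = 0.5), observers can't tell the difference.

// Probability of getting `k` or more correct out of `n` under pure chance.
fn binomial_tail(n: u64, k: u64) -> f64 {
    let mut tail = 0.0;
    for i in k..=n {
        tail += binomial_coeff(n, i) * 0.5_f64.powi(n as i32);
    }
    tail
}

fn binomial_coeff(n: u64, k: u64) -> f64 {
    let mut result = 1.0;
    for i in 0..k {
        result *= (n - i) as f64 / (i + 1) as f64;
    }
    result
}

fn main() {
    let trials = 200;
    let correct = 112; // observers' correct guesses (made-up number)

    let p_value = binomial_tail(trials, correct);
    println!(
        "P(>= {} correct out of {} by chance) = {:.4}",
        correct, trials, p_value
    );
    // If this is large, the guesses are consistent with random chance.
}
```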


> ML trained on video, applied to gamedev, seems like a matter of time. You could do it right now.

Indeed!

Photorealistic GTA V using ML: https://www.theverge.com/platform/amp/2021/5/12/22432945/int...


The "photorealism" of that ML demo was vastly overstated. A lot of post processing effects in games are there to (somewhat ironically) emulate defects of cameras: vignetting, chromatic aberration, bloom, flares etc.

I think this ML demo got there by effectively emulating a low-res, poorly white-balanced, low-framerate dash cam. Those "defects" make people read the image as the output of a real (albeit shitty) camera system.

It would be more interesting to see what kind of output we could get at 4K, 60 fps, and properly white-balanced.
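Those camera-defect passes are mostly cheap per-pixel math, which is part of why they get used so freely. A toy sketch of just the vignette part, with made-up falloff constants:

```rust
// Toy sketch of one "camera defect" post effect: a vignette that darkens
// pixels toward the image corners. Constants are made up; real games tune
// these (and add chromatic aberration, bloom, grain, ...).

fn vignette(x: f32, y: f32, width: f32, height: f32) -> f32 {
    // Normalized distance from the image center: 0 at center, ~1 at corners.
    let dx = (x / width - 0.5) * 2.0;
    let dy = (y / height - 0.5) * 2.0;
    let dist = (dx * dx + dy * dy).sqrt() / std::f32::consts::SQRT_2;

    // Smooth falloff: full brightness in the middle, darker toward corners.
    (1.0 - dist.powf(2.5) * 0.6).clamp(0.0, 1.0)
}

fn main() {
    for &(x, y) in &[(960.0, 540.0), (100.0, 100.0), (0.0, 0.0)] {
        println!("brightness at ({x}, {y}): {:.2}", vignette(x, y, 1920.0, 1080.0));
    }
}
```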


Not to diminish their work, but it changes the look of the game too much, making it more dreary. I suspect this happened because the training set seems to be based on Northern European cities, whereas the game is set in LA. I wish there were a version trained on LA, or at least on a city more similar in feel. It certainly would make for a more intuitive comparison.


They basically made it look like GTA 4


Realism is only one of the benefits of global illumination; the other (perhaps more important) one is the massive increase in productivity you can get when creating environments. Having dynamically lit environments in engines with pre-baked lighting requires carefully placing emitters, triggering them on geometry changes, and so on.

See, when people argue “I want dynamic and destructible environments”, the problem is to a large extent the lighting. If you can blow a hole in the roof but the sun doesn’t shine in, you can tell something is wrong. So you can’t make a hole in the roof.

With GI you can collapse part of the roof and the interior lighting “just works”.


https://github.com/EmbarkStudios/kajiya

kajiya currently works on a limited range of operating systems and hardware.

Hardware:

Nvidia RTX series

Nvidia GTX 1060 and newer (slow: driver-emulated ray-tracing)

AMD Radeon RX 6000 series


What other hardware would be capable enough but is not supported?


A MacBook with an M1 Max, I guess.


How I wish it had native Vulkan support.


I'm not sure people appreciate how many hours of "hacks" are going to disappear nearly overnight when real-time ray tracing becomes commonplace. Things as conceptually simple as "soft shadows" have multiple manifestations and implementations, and require dozens of hacks to pull off believably.


Slightly confused about what you mean here. As I understand it, ray tracing is just one of a few different global illumination approaches. In my graphics course many years ago, we implemented radiosity, photon mapping, and path tracing, and they were better for some kinds of scenes than straightforward ray tracing. Some of these approaches support soft shadows very naturally, but ray tracing is not one of them -- good soft shadows with ray tracing still involve some hacks.
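For concreteness, the "natural" soft shadows usually meant here come from sampling an area light with several shadow rays and averaging the visibility; the hacks creep back in because few samples means a noisy penumbra that then needs denoising. A rough sketch, with a toy occluded() standing in for a real ray cast:

```rust
// Rough sketch of soft shadows via area-light sampling: shoot several shadow
// rays toward points spread across the light and average the visibility.
// `occluded` is a stand-in for a real ray cast against the scene.

fn occluded(light_point_x: f32) -> bool {
    // Toy "scene": a blocker covers the part of the light with x < 0 as seen
    // from our shading point. A real renderer would trace a ray here.
    light_point_x < 0.0
}

// Fraction of the area light visible from the shading point (0 = full
// shadow, 1 = fully lit), estimated with `samples` shadow rays.
fn soft_shadow(light_center_x: f32, light_radius: f32, samples: u32) -> f32 {
    let mut visible = 0;
    for i in 0..samples {
        // Stratified sample position across the light's width.
        let t = (i as f32 + 0.5) / samples as f32; // in (0, 1)
        let light_point_x = light_center_x + (t - 0.5) * 2.0 * light_radius;
        if !occluded(light_point_x) {
            visible += 1;
        }
    }
    visible as f32 / samples as f32
}

fn main() {
    // Light centered at x = 0.2 with radius 0.5: partially blocked,
    // so we expect a penumbra value somewhere between 0 and 1.
    println!("visibility: {:.2}", soft_shadow(0.2, 0.5, 16));
}
```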


Does this harm Nvidia? I don't do graphics programming, but I would imagine they have a lot of tech to make shadows and many other hacks run fast. Does ray tracing simplify game rendering even though it would require more FLOPS?


> Does ray tracing simplify game rendering even though it would require more FLOPS?

In theory it will. The VFX/CG industry went through the rasterisation -> raytracing change ~10 years ago, as it simplified things (at least in the pipeline, and in the renderers themselves to some degree). It was slower in terms of compute time than rasterisation, but made up for that by not needing things like irradiance/pointcloud caches, shadow map passes, etc. (the removal of which made total iteration time faster), and the fidelity it allowed was quite a bit better (full GI, accurate layered materials traced through the surface, volume SSS, etc.).

In terms of renderer complexity, while you have to specialise how you use the rays (ray casting/tracing is really just a visibility query), you can use them for camera visibility, reflection, refraction, shadows and environment lighting quite easily and in a very similar manner (especially with path tracing). With rasterisation, by contrast, you need rasterisation for the main tri/quad drawing, then (cascading) shadow maps for shadows, then cube mapping for environment lighting/reflection, and these are very different techniques/algorithms.
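A sketch of the "a ray is just a visibility query" point: the same trace() routine (here against a toy one-sphere scene) answers both "what does the camera see?" and "is this point in shadow?", whereas a rasteriser would need two completely different algorithms for those. This is illustrative only, not how any particular engine structures it:

```rust
// One trace() routine against a toy scene (a single sphere), reused both for
// the camera ray and for the shadow ray toward the light.

#[derive(Clone, Copy)]
struct Vec3 { x: f32, y: f32, z: f32 }

fn sub(a: Vec3, b: Vec3) -> Vec3 { Vec3 { x: a.x - b.x, y: a.y - b.y, z: a.z - b.z } }
fn dot(a: Vec3, b: Vec3) -> f32 { a.x * b.x + a.y * b.y + a.z * b.z }

// Distance to the closest hit of the (unit-direction) ray with the scene's
// one sphere, or None if the ray misses.
fn trace(origin: Vec3, dir: Vec3, sphere_center: Vec3, radius: f32) -> Option<f32> {
    let oc = sub(origin, sphere_center);
    let b = dot(oc, dir);
    let c = dot(oc, oc) - radius * radius;
    let disc = b * b - c;
    if disc < 0.0 {
        return None;
    }
    let t = -b - disc.sqrt();
    if t > 0.0 { Some(t) } else { None }
}

fn main() {
    let sphere = Vec3 { x: 0.0, y: 0.0, z: 5.0 };

    // Camera visibility: does the eye ray hit anything?
    let eye = Vec3 { x: 0.0, y: 0.0, z: 0.0 };
    let forward = Vec3 { x: 0.0, y: 0.0, z: 1.0 };
    println!("primary ray hit at t = {:?}", trace(eye, forward, sphere, 1.0));

    // Shadow visibility: same trace(), different origin/direction -- is the
    // path from a point on the floor toward the light blocked?
    let floor_point = Vec3 { x: 0.0, y: -2.0, z: 5.0 };
    let toward_light = Vec3 { x: 0.0, y: 1.0, z: 0.0 };
    let shadowed = trace(floor_point, toward_light, sphere, 1.0).is_some();
    println!("floor point is {}", if shadowed { "in shadow" } else { "lit" });
}
```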


I'm sure Nvidia is fine, as they are the market leader in gaming/workstation hardware-accelerated RT.


The aesthetic kinda reminds me of the Jet Car Stunts iOS app[1], I wonder if that was an inspiration. JCS had pre-baked lighting textures, which meant it could play well on ancient iOS devices while still looking good - but of course, not quite as good as real-time raytracing.

[1] https://apps.apple.com/us/app/jet-car-stunts/id337866370


JCS 1 and 2 were both phenomenal; JCS 1 especially was ahead of its time.

Interesting that you can still get both on the App Store.


I don't have an RTX series card, so... but I notice in the gif there's a light leak on that white column, on both sides of the base, around 80% of the way through the gif?

I thought this was ray tracing, but I take it it must still take shortcuts to do this in real time?

Is that a draw distance issue or some other approximation? It reminds me of shadow mapping going wrong.


Here is some higher quality material. It could just be that the spinning cubes are throwing some odd shadows. https://m.youtube.com/watch?v=2KoLm8ajfo8


That highlight in the gif is indeed just some spinning cubes.

Either way, there definitely are some leaks and other artifacts -- many corners need to be cut for this to run in real-time.


There's a lack of shadows at the very base of the white columns, which looks very much like the occlusion rays (used when doing NEE to determine whether the surface point is occluded or not for the light being sampled) have too large a ray epsilon or offset being applied...

i.e. the very base of the white columns is pure white, as a disconnect from the rest of the columns.
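In case it helps to visualise the failure mode: the shadow/occlusion ray is started a small epsilon off the surface to avoid re-hitting the surface it left from, and if that epsilon is larger than the gap to a real nearby occluder (like the crack where the column meets the floor), the occluder gets skipped and light leaks in. A schematic sketch with made-up distances:

```rust
// Schematic sketch of the ray-epsilon trade-off for NEE/shadow rays: we push
// the ray origin off the surface by `epsilon` to avoid re-hitting the surface
// we started on, but if epsilon is larger than the distance to a real nearby
// occluder, that occluder is skipped and light leaks through.

// Distance along the shadow ray to the nearest blocker (toy scene: a single
// blocker sits `blocker_distance` away from the true surface point).
fn nearest_hit(start_offset: f32, blocker_distance: f32) -> Option<f32> {
    if blocker_distance > start_offset {
        Some(blocker_distance - start_offset)
    } else {
        None // the blocker is behind our (offset) ray origin -> missed
    }
}

fn light_visible(epsilon: f32, blocker_distance: f32) -> bool {
    nearest_hit(epsilon, blocker_distance).is_none()
}

fn main() {
    // The crack where the column meets the floor: the occluder is only
    // 0.002 units away from the shaded point.
    let occluder_distance = 0.002;

    for &epsilon in &[0.0001_f32, 0.01] {
        println!(
            "epsilon = {:>6}: surface is {}",
            epsilon,
            if light_visible(epsilon, occluder_distance) {
                "lit (light leak!)"
            } else {
                "shadowed (correct)"
            }
        );
    }
}
```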


Anyone got a binary? I don't have a good track record with successful compiles.


No chequerboard pattern. Is this even really ray tracing?


This is incredible


I'm sincerely hoping that whoever wrote this is a hockey fan.

(for those who aren't, Connor McDavid is arguably the best hockey player alive today)


It's most probably rally driver Colin McRae[0], after whom a popular series of rally games is named[1].

[0] https://en.wikipedia.org/wiki/Colin_McRae

[1] https://en.wikipedia.org/wiki/Colin_McRae_Rally_and_Dirt

(edit: formatting)


Indeed, and the first part is the first scene done in any ray tracer:

https://en.m.wikipedia.org/wiki/Cornell_box


It's far from the first scene done in any ray tracer; rather, it's an early test scene for radiosity (diffuse-only global illumination, before path tracing was developed).


Oh, I meant “when anyone today writes a toy ray tracer, this is usually the first scene you test it with”.


Which one is the oldest: the Utah teapot, the bunny, the dragon, or the default-material cube?


I'd guess the Utah teapot; it's been around since 1975. The Cornell box is from 1984, and test scenes for materials are in some ways a pretty recent thing, at least as they are used now; to me the modern versions started with Maxwell Render.


I figured it's a play on https://en.wikipedia.org/wiki/Colin_McRae instead.


It is probably a play on Colin McRae. But McRae died under tragic circumstances, and to me it felt slightly odd to allude to his name in this manner.


He also had a brilliant career as a driver, and people remember him very fondly. It's about the way he lived, not the way he died.


Raytracing is clever. Good for realism. But at the end of the day, these are games, not simulations. Periodically being blinded by reflective materials doesn't sound like my kinda fun.


Real-time ray traced reflections are an impressive trick, but real-time global illumination is where this tech shines IMO. This Digital Foundry video (https://youtu.be/NbpZCSf4_Yk) is a good overview of why this is such a huge leap forward for developers and players alike.

Metro: Exodus EE (the game discussed in the video) is the only game I've played that really shows what 'RTX' and similar GPU hardware is capable of, IMO. Even CP77, which was supposed to push modern systems to their limit, feels like its ray tracing and GI stuff is janky and tacked-on.


Damn, that really makes a massive, massive difference. Makes me realise that the reason I can't see anything in dark scenes in video games is technical limitations, not artistic effect!


One of the things that hadn't occurred to me before watching this video is how much easier GI can make environment artists' jobs. Rather than "faking it" with a bunch of artificially placed light sources, they can just light the scene naturally and the effect should be as good as or better than the old techniques.


Raytracing emulates photons and thus doesn't just produce reflections, it produces every visible pixel on the screen: shadows, light, dark reflections, glow, transparency, glossiness, shininess... everything.



