
Could a similar scheme be used to drastically improve the visual quality of a video game? You would train the model on gameplay rendered at low and high quality (say, with and without ray tracing, and with low- and high-density meshes), and try to get it to convert a quick render into something photorealistic on the fly.
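
To make the idea concrete, here's a minimal sketch of the supervised setup (my own illustration, not from any actual engine): pairs of the same frame rendered cheap and expensive, and a small network trained to map one to the other. The network, names, and sizes are all placeholders; a real system would use something much bigger (a U-Net or diffusion model) and real aligned frame pairs.

    # Sketch only: supervised image-to-image training on paired frames.
    # "low"  = fast raster render, "high" = offline high-quality render
    # of the same frame. Random tensors stand in for a real dataset.
    import torch
    import torch.nn as nn

    class RenderEnhancer(nn.Module):
        """Tiny fully-convolutional net; placeholder for a real model."""
        def __init__(self, ch=32):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(3, ch, 3, padding=1), nn.ReLU(),
                nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(),
                nn.Conv2d(ch, 3, 3, padding=1),
            )

        def forward(self, x):
            # Predict a residual on top of the cheap render.
            return x + self.net(x)

    model = RenderEnhancer()
    opt = torch.optim.Adam(model.parameters(), lr=1e-4)

    for step in range(100):
        low = torch.rand(4, 3, 128, 128)   # placeholder: fast low-quality frames
        high = torch.rand(4, 3, 128, 128)  # placeholder: matching high-quality frames
        loss = nn.functional.l1_loss(model(low), high)
        opt.zero_grad()
        loss.backward()
        opt.step()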

When things like DALL-E first came out, I was expecting something like the above to make it into mainstream games within a few years. But either that was too optimistic, or I'm not up to speed on this sort of thing.



Isn't that what Nvidia's Ray Reconstruction and DLSS (frame generation and upscaling) are doing, more or less?


At a high level, I guess so. I don't know enough about Ray Reconstruction (though the results are impressive), but I was thinking of something more drastic than DLSS. Diffusion models on static images can turn a cartoon into a photorealistic image. Doing something similar for a game, where a low-quality render is turned into something that would otherwise take seconds to produce, seems qualitatively different from DLSS. In principle a model could fill in huge amounts of detail: increasing the number of particles in a particle-based effect, adding shading and lighting effects, and so on.
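
For the static-image version of this, the pass looks roughly like the sketch below, using Hugging Face diffusers' img2img pipeline. The model id, prompt, filenames, and strength are illustrative, and this takes seconds per frame offline, nowhere near game frame rates:

    # Sketch only: restyle a rough render with an img2img diffusion pass.
    # Model id and filenames are illustrative placeholders.
    import torch
    from diffusers import StableDiffusionImg2ImgPipeline
    from PIL import Image

    pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5",  # illustrative model choice
        torch_dtype=torch.float16,
    ).to("cuda")

    rough = Image.open("low_quality_render.png").convert("RGB")  # hypothetical input
    result = pipe(
        prompt="photorealistic render, detailed lighting and shadows",
        image=rough,
        strength=0.5,        # how far to move away from the input render
        guidance_scale=7.5,
    ).images[0]
    result.save("enhanced_render.png")

The gap to close for games is exactly that strength/latency trade-off: the model has to add detail in milliseconds while staying temporally consistent across frames, which is what makes it qualitatively harder than the offline case.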



