Voxel Displacement Renderer – Modernizing the Retro 3D Aesthetic (danielschroeder.me)
438 points by huhtenberg 10 months ago | 73 comments



That's a very appealing aesthetic (possibly because I'm old enough to remember downloading games over a 2400 baud modem!). Very nice work!

That video looked great, and (at least for me) felt very evocative. In the section where the roof was too low, I started feeling claustrophobic and cramped.

Those rock and sandy-floored caverns, and the cavern with boulders (which are gorgeous!) made me think I'd love playing a Myst or LucasArts-style adventure game using this as the renderer. Spelunking through caves, or archeological digs, etc.

Can't wait to see where you take this!


I agree. To me this is very evocative of the pixel-art/retro look, even more so than the low-poly Doom/Wolfenstein look.


It looks like what those older games felt like.


If the author realizes its potential, he could make the next Valheim-level hit.


No polygons in Doom/Wolf3D.

The example from Back To Saturn X2 can be played on a software-rendered engine with no concept of polygons at all.


Isn’t it referred to as 2.5D?


More like a bunch of cardboard figures extruded and raised up to look 3D. You couldn't stack floors in Doom, for instance.

2.5D would be the semi-top-down games, such as most beat-'em-ups that let you roam around instead of just going left/right like a typical platform or action game, or most SNES RPGs.


Imagine a 3D version of Noita with this aesthetic, and perhaps using smoothed particle hydrodynamics to make the falling sand engine scale to 3D.


3D voxel Noita is something I never knew I needed.

But the engine would need to be crazy optimised to handle decent sand/fluid voxels in 3D space. It would be a technical achievement in itself.


It’d be really hard to keep track of what is happening in first person, but so totally awesome xD

I don’t think it’d make for a very good game though.


Or 3D nethack


Warez on the BBSs. Go Eaglesoft!


Completely agree with the aesthetics!

My issue is that it looked so good, but the architecture felt wrong. In the part with the low ceiling, it felt like those blocks hanging from the roof were defying physics.



There's another approach: Deep Bump [1]. It addresses the same problem, but in a totally different way.

Deep Bump is a machine-learning tool which takes texture images and creates plausible normal maps from them. It's really good at stone and brick textures like the ones this voxel displacement renderer is using. It's OK at clothing textures - it seems to be able to recognize creases, pockets, and collars, and gives them normals that indicate depth. It's sort of OK on bark textures, and not very good on plants. This probably reflects the training set.

So if you're upgrading games of the Doom/Wolfenstein genre, there's a good open source tool available.

[1] https://github.com/HugoTini/DeepBump
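
To illustrate what the tool is inferring (this is not DeepBump's ML pipeline, just the classical finite-difference way to derive a normal map from a height map, with toy data):

    // Classical (non-ML) normal-from-height: the x/y gradients of the height
    // map become the tilt of the per-texel normal. Toy data, illustrative only.
    #include <algorithm>
    #include <cmath>
    #include <cstdio>
    #include <vector>

    struct Vec3 { float x, y, z; };

    std::vector<Vec3> normalsFromHeight(const std::vector<float>& h,
                                        int w, int rows, float strength) {
        std::vector<Vec3> n(w * rows);
        auto at = [&](int x, int y) {                 // clamp-to-edge sampling
            x = std::clamp(x, 0, w - 1);
            y = std::clamp(y, 0, rows - 1);
            return h[y * w + x];
        };
        for (int y = 0; y < rows; ++y)
            for (int x = 0; x < w; ++x) {
                float dx = (at(x + 1, y) - at(x - 1, y)) * strength;
                float dy = (at(x, y + 1) - at(x, y - 1)) * strength;
                float len = std::sqrt(dx * dx + dy * dy + 1.f);
                n[y * w + x] = { -dx / len, -dy / len, 1.f / len };
            }
        return n;
    }

    int main() {
        std::vector<float> height = { 0, 0, 0,
                                      0, 1, 0,
                                      0, 0, 0 };      // a single bump
        auto nm = normalsFromHeight(height, 3, 3, 1.f);
        std::printf("normal left of the bump: %.2f %.2f %.2f\n",
                    nm[3].x, nm[3].y, nm[3].z);
        return 0;
    }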


If I read that blog post correctly, it's a model to infer normal maps from textures, not a new way to render geometry.

The article in this thread is more about a small-voxel-based representation of displacement maps. A tool like Deep Bump could conceivably be used to aid in the creation of texture assets for the system discussed in this thread.


Yes, it's a model to infer normal maps from textures. Then you can use a modern PBR renderer on old content and have a better illusion of depth. It doesn't introduce the blockiness of voxels.


It seems the blockiness of voxels is the whole point. Applying normal maps to low-res textures doesn’t look good and completely changes the look. Generating high-res textures from the originals makes it even worse.

This voxel approach preserves the aesthetics of the old pixelated (now voxelated) graphics in a much more pleasing way.


As the article points out, though, normal maps don't "work" in all situations, since they don't actually change the geometry, just the lighting and the illusion of geometry, so at the edges of meshes displacement is still the better high-fidelity option.

DeepBump might be able to extract 1D (height only, not full 3D vector displacement) maps to use with traditional displacement though.


This is neat, but I'm wondering how well the author's approach will map to animated 3D models. I'm guessing, at best, it might look something like the "Voxel Doom" mod for Doom [1] [2].

[1] https://media.moddb.com/cache/images/mods/1/55/54112/thumb_6...

[2] https://media.moddb.com/cache/images/mods/1/55/54112/thumb_6...


The article has a footnote about Voxel Doom, but it's more about Voxel Doom's environment approach than the monsters:

> Now that I’ve laid out all this context, I want to give a shout out to the Voxel Doom mod for classic Doom. The mod’s author replaced the game’s monsters and other sprites with voxel meshes to give them more depth, some very impressive work. Then, in late 2022, he began experimenting with using parallax mapping to add voxel details to the level geometry. This part of the mod didn’t look as good, in my opinion — not because of the author’s artwork, but because of the fundamental limitations that come from using parallax mapping to render it. This mod wasn’t the inspiration for my project — I was already working on it — but seeing the positive response the mod received online was very encouraging as I continued my own efforts.

It does say, though, that the approach supports animated doors and such, so combined with mesh and texture flipbooks I think it could be used for original-Doom-looking monsters too. But areas of sharp curvature seem to have the most artifacts with shell mapping, and he mentions limitations to the meshing of levels, so maybe not.


This looks a lot like what Notch is working on (see his Twitter feed). Another kind of voxel rendering. This, however, is using C++/Vulkan and looks stunning!


I wonder how this approach compares to Unreal Engine 5's Nanite, or maybe Unreal Engine is actually doing something similar?

I remember one motivation for using voxels in older games (like Comanche [1]) is that you can get seemingly more complex terrain that, when modelled using triangle meshes, would have been more expensive on similar hardware. The author mentions 110 FPS on an RX 5700 XT; I am not sure how that compares to other approaches.

[1] https://en.wikipedia.org/wiki/Comanche_(video_game_series)


It's hard to say exactly because the OP doesn't go into the runtime mesh used, but my guess is it's quite different from Nanite.

Nanite assumes high-poly authoring of objects and works to stream in simplified chunks such that the rendered triangles are not less than a pixel wide. Displacement maps are a bit redundant because the geometry can naturally be very detailed; there's no reason to use a texture map for it. (There is a case for Landscapes, but that's unique.)

This seems to be using a displacement map and a low-poly mesh to generate high-poly but 'voxelized' geo on load.


IIRC Comanche used a ray tracing approach to render the terrain [1]. No voxels there, just a 2D height map that is sampled.

(They called it "VoxelSpace" ... so some confusion is warranted)

[1] https://github.com/s-macke/VoxelSpace


Sorry for being pedantic, but it’s ray “casting” not tracing. It’s similar in some ways but very different in others.


My mind parses any mention of "voxel" that isn't accompanied by a screenshot from one of the original Comanche levels as a lie.

The reality is that Comanche was more like a displacement-map twist on the Doom "2.5D" than something that really deserves the term "voxel". But it was so magic, did anything else ever come close?


My guess is it's the exact same technique used in the Comanche games, or at least the same results can be achieved with it.

Contrary to popular belief, those games didn't use true 3D voxels; they used a heightmap that stored a color and height value in the terrain texture, which they raymarched into.

You could recreate the same look by raymarching into the texture in a shader, which I suspect would look very similar to what the blog post achieved.
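
For anyone curious, the core of that technique fits on a page. A rough CPU-side sketch (procedural stand-in map data, fixed camera, not the blog's renderer):

    // Comanche-style "VoxelSpace": one ray per screen column over a 2D
    // heightmap, drawing vertical spans front-to-back with a rising
    // per-column occlusion horizon. Illustrative only.
    #include <cmath>
    #include <cstdint>
    #include <cstdio>
    #include <vector>

    constexpr int MAP = 256, W = 320, H = 200;

    int main() {
        // Procedural stand-in for the game's combined height/color map.
        std::vector<float> height(MAP * MAP);
        std::vector<uint8_t> color(MAP * MAP);
        for (int y = 0; y < MAP; ++y)
            for (int x = 0; x < MAP; ++x) {
                height[y * MAP + x] = 40.f + 30.f * std::sin(x * 0.05f) * std::cos(y * 0.05f);
                color[y * MAP + x]  = uint8_t(height[y * MAP + x]);
            }

        std::vector<uint8_t> frame(W * H, 0);        // palette-index framebuffer
        float camX = 128, camY = 128, camZ = 90;     // camera position / altitude
        float horizon = H / 2.f, scale = 120, maxDist = 300;

        for (int col = 0; col < W; ++col) {
            float dx = (col - W / 2) / float(W / 2); // ray direction, looking along +Y
            int yTop = H;                            // lowest row not yet covered
            for (float dist = 1; dist < maxDist; dist += 1.f) {
                int mx = int(camX + dx * dist) & (MAP - 1);   // wrap around the map
                int my = int(camY + dist)      & (MAP - 1);
                // Project the terrain height at this sample to a screen row.
                int screenY = int((camZ - height[my * MAP + mx]) / dist * scale + horizon);
                if (screenY < 0) screenY = 0;
                // Draw the still-visible part of the span, then raise the horizon.
                for (int y = screenY; y < yTop; ++y)
                    frame[y * W + col] = color[my * MAP + mx];
                if (screenY < yTop) yTop = screenY;
                if (yTop == 0) break;                // column fully covered
            }
        }
        std::printf("center pixel value: %d\n", frame[(H / 2) * W + W / 2]);
        return 0;
    }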


There's a libre game that does that, but in 3D and with OpenGL: Trigger Rally.


Basically unrelated. He's using geometry shaders to generate voxel mesh details, perhaps with some LOD optimizations, while Nanite is a GPU-driven rendering technology that adapts triangle density to aim for a given fidelity target.

Nanite can use displacement maps and perform tessellation, but it uses an alternate pathway that's not necessarily more efficient than feeding it a high-poly asset to render.


I'd be very surprised if they were actually using geometry shaders (at least this conclusion can't be drawn from their post). Geom shaders are basically dead weight, a holdover from a bygone time, and are better avoided for modern renderers.

The techniques mentioned in the post that they're drawing on (parallax mapping, shell mapping) do not generate explicit geometry; rather, they rely on raymarching through heightfields in the frag shader. It's more likely that they're doing something like that.
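
Something in this spirit, written CPU-side for a single pixel (the real thing runs in a fragment shader, and the height function and constants here are made up):

    // Sketch of the per-pixel heightfield raymarch that parallax occlusion
    // mapping performs: step the view ray down through tangent-space layers
    // until it dips below the sampled height. Illustrative only.
    #include <cmath>
    #include <cstdio>

    // Hypothetical heightfield: 0 = deepest, 1 = surface.
    float sampleHeight(float u, float v) {
        return 0.5f + 0.5f * std::sin(u * 20.f) * std::cos(v * 20.f);
    }

    // Returns the shifted UV this pixel should actually sample its albedo at.
    void parallaxOcclusion(float u, float v, float viewX, float viewY, float viewZ,
                           float depthScale, float* outU, float* outV) {
        const int steps = 32;
        // How far the UV shifts per depth layer, from the view direction.
        float dU = -viewX / viewZ * depthScale / steps;
        float dV = -viewY / viewZ * depthScale / steps;
        float layer = 1.0f, layerStep = 1.0f / steps;

        for (int i = 0; i < steps; ++i) {
            float h = sampleHeight(u, v);
            if (layer <= h) break;   // the ray is now under the surface: hit
            u += dU; v += dV;        // march the UV along the view ray
            layer -= layerStep;      // and descend one depth layer
        }
        *outU = u; *outV = v;
    }

    int main() {
        float u, v;
        // Tangent-space view direction (x, y, z); z points up off the surface.
        parallaxOcclusion(0.25f, 0.25f, 0.6f, 0.2f, 0.77f, 0.08f, &u, &v);
        std::printf("shifted UV: %.4f %.4f\n", u, v);
        return 0;
    }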


Err, right, they're doing the transformations CPU-side. The blog hints that it's related to shell maps, so maybe the new mesh geometry is densest on the sharp edges?


Parallax mapping breaks on the edges of polygons though, while this technique seems to actually add geometry since the edges of surfaces are appropriately detailed and bumpy.


Love hearing about new methods in the voxel space!

It's a bit unfortunate the article conflates voxels with a specific rendering technique. Voxels are just the use of a 3D grid. Based on the middle section, it seems the author is equating voxel usage with cubic-style rendering, or what we often call bloxel renderers (Minecraft).

It is also mentioned that the triangle geometry can be imported and used in engines directly, but I think the author is forgetting that bloxel, or really any rendering process, can do the same thing; this is how typical voxel rendering plugins that use other styles already get first-class support in existing engines.


I wish this was open-sourced! I wish!


I love love love it. Ultima Underworld comes to mind.


This should be the future of modernizing retro 3D games. God, this is beautiful.


For someone who has done zero 3D graphics in a couple of decades, why do displacement maps exist? At first blush they don't appear to be more computationally efficient than more complex geometry.


It's less about being "less work" and more about what GPUs are and are not good at.

Displacement maps can closely approximate the geometric detail of having 1 triangle per pixel. All the work for displacement maps happens on a per-pixel basis inside a fragment shader (in simple terms, a little program that runs for each pixel of a triangle). You can wrap this displacement map over a single, large triangle and get the visual appearance of a much denser mesh.

The alternative approach of subdividing the mesh is orders of magnitude less efficient because GPUs are _very_ bad at drawing tiny polygons. It's just how GPUs and the graphics pipelines are implemented. A 'tiny polygon' is determined by how many pixels it covers; as you start dropping below a couple dozen pixels per triangle, you start hitting nasty performance cliffs inside the GPU because of the specific ways triangles are drawn. Displacement maps work around this problem because you're logically only drawing single big polygons but doing the work in a shader, where the GPU is much more efficient.


> Displacement maps can closely approximate the geometric detail of having 1 triangle per pixel. All the work for displacement maps happens on a per-pixel basis inside a fragment shader (in simple terms, a little program that runs for each pixel of a triangle). You can wrap this displacement map over a single, large triangle and get the visual appearance of a much denser mesh.

I think you're wrong here... Can you provide an example of an engine where this is actually the case? To my knowledge (and I checked on Wikipedia [1] to make sure), a displacement map simply displaces VERTICES along their normals (so white moves a maximum distance along the normal, and black either doesn't move at all or moves the maximum distance along the inverted normal, inwards). This means you need to heavily subdivide your geometry. Or you can remesh your geometry, with the simplest remeshing algorithms being just voxelizers, and that's what we see the OP doing, except where all the 3D graphics people complain about the limitations of voxelization, he hypes it as this new retro look he invented.

What seems to confirm my interpretation of all of this is how the OP describes this being done on the CPU and, while he tries to downplay it, using rather decent hardware. I mean, the Steam Deck is not exactly ancient hardware, and for an extremely simple scene with nothing else going on, he's happy to be above 60 FPS at 800p resolution!

[1] https://en.wikipedia.org/wiki/Displacement_mapping
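
For reference, "classic" displacement as described on that Wikipedia page boils down to a per-vertex pass like this (a toy sketch with a made-up height function, not anyone's engine code):

    // Classic displacement mapping: move each vertex along its normal by the
    // sampled height, so the mesh must already be subdivided finely enough to
    // show the detail. Illustrative only.
    #include <cmath>
    #include <cstdio>
    #include <vector>

    struct Vec3 { float x, y, z; };

    // Hypothetical height lookup, normalized to [0, 1].
    float sampleHeight(float u, float v) {
        return 0.5f + 0.5f * std::sin(u * 40.f) * std::sin(v * 40.f);
    }

    void displaceVertices(std::vector<Vec3>& positions,
                          const std::vector<Vec3>& normals,
                          const std::vector<float>& uvs,   // interleaved u, v
                          float scale, float bias) {
        for (size_t i = 0; i < positions.size(); ++i) {
            float h = sampleHeight(uvs[2 * i], uvs[2 * i + 1]);
            float d = h * scale + bias;          // negative bias lets black move inward
            positions[i].x += normals[i].x * d;
            positions[i].y += normals[i].y * d;
            positions[i].z += normals[i].z * d;
        }
    }

    int main() {
        // One vertex of a flat, upward-facing patch, just to show the call.
        std::vector<Vec3> pos{{0.f, 0.f, 0.f}};
        std::vector<Vec3> nrm{{0.f, 1.f, 0.f}};
        std::vector<float> uv{0.1f, 0.7f};
        displaceVertices(pos, nrm, uv, 0.2f, -0.1f);
        std::printf("displaced y: %.4f\n", pos[0].y);
        return 0;
    }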


The magic lies in tessellation. Tessellation is an efficient GPU process of heavily subdividing your mesh so that displacement maps can add visible geometric detail afterwards. And because it's dynamic, you can selectively apply it only to the meshes that are close to the camera. These are the reasons it's better than subdividing the mesh at the preprocessing stage and "baking in" the displacement into vertex positions.
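
The distance-based part is basically a heuristic like this (invented constants; the actual subdivision happens in the tessellation stages of the pipeline, not here):

    // Pick a tessellation factor per patch from its distance to the camera:
    // heavy subdivision up close, almost none far away. Illustrative only.
    #include <algorithm>
    #include <cstdio>

    float tessFactor(float distanceToCamera) {
        const float nearDist = 2.0f, farDist = 50.0f;
        const float maxTess = 64.0f, minTess = 1.0f;   // 64 is a common hardware cap
        float t = (distanceToCamera - nearDist) / (farDist - nearDist);
        t = std::clamp(t, 0.0f, 1.0f);
        return maxTess * (1.0f - t) + minTess * t;
    }

    int main() {
        for (float d : {1.0f, 10.0f, 30.0f, 100.0f})
            std::printf("distance %6.1f -> tess factor %5.1f\n", d, tessFactor(d));
        return 0;
    }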


Good point! It's not just the LOD; it may also be that parallelization makes GPUs a better fit for subdivision than CPUs, and surely there's the matter of connection bandwidth to the GPU: the vertex coordinates, as well as lots of other per-vertex data, only need to be sent for the base vertices; the new vertices get their values by interpolating the old ones, and textures like the displacement map control the difference between the interpolated value and the desired value. Of course a texture would be just some weird compromise on the resolution of the sent data, except you don't have to provide a texture for every attribute, and more importantly, such a texture might be static (e.g. if encoded in normal space it may work throughout an animation of an object).


I think displacement maps are not that frequently used in game graphics these days. The typical thing is regular geometry with normal maps. However, triangle soup is very difficult to modify in any way. The cool thing about displacement maps is that they can be downsampled trivially, so if you're doing runtime tessellation you can get smooth LOD scaling (at least down to your base mesh), which is nice. Less usefully, they can also be tiled, blended, interpolated, scrolled, animated, etc.
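
That "downsampled trivially" property is literally just mip-style averaging, e.g. (toy sketch):

    // Halving a heightmap's resolution is just averaging 2x2 texel blocks,
    // which is exactly how a mip chain is built. Illustrative only.
    #include <cstdio>
    #include <vector>

    std::vector<float> downsample(const std::vector<float>& src, int w, int h) {
        std::vector<float> dst((w / 2) * (h / 2));
        for (int y = 0; y < h / 2; ++y)
            for (int x = 0; x < w / 2; ++x)
                dst[y * (w / 2) + x] = 0.25f * (src[(2 * y)     * w + 2 * x] +
                                                src[(2 * y)     * w + 2 * x + 1] +
                                                src[(2 * y + 1) * w + 2 * x] +
                                                src[(2 * y + 1) * w + 2 * x + 1]);
        return dst;
    }

    int main() {
        std::vector<float> hm = {  0,  1,  2,  3,
                                   4,  5,  6,  7,
                                   8,  9, 10, 11,
                                  12, 13, 14, 15 };   // 4x4 toy heightmap
        auto lod1 = downsample(hm, 4, 4);             // 2x2 next LOD level
        std::printf("%.2f %.2f\n%.2f %.2f\n", lod1[0], lod1[1], lod1[2], lod1[3]);
        return 0;
    }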

On the other hand, UE5's Nanite achieves the same LOD scaling but far better, without the limitation of not being able to scale down past the base mesh and with a built-in workaround for the issues GPUs have with small triangles. It's possible that people might use displacement maps in the authoring process, which can then be baked into static meshes for Nanite. Then it would just be a convenient-ish way for artists to author and distribute complex materials.


The reason that comes to my mind is to save on memory transferred from the CPU to the GPU.

If you do your tesselation inside the vertex shader, then you can send a low-poly mesh to the graphics card, which saves a lot on vertex buffers (e.g. uv coordinates and other per-vertex attributes). The vertex shader still emits the same number of vertices to the rest of the pipeline, but inter-stage bandwidth is more plentiful than CPU-GPU bandwidth so I can see that coming out ahead.

I’m not an expert though. Perhaps someone with a better understanding can clear this up, I’m curious too…


It allows you to defer subdivision of the mesh (maybe down to micropoly level for displacement) until the final render stage, rather than baking in the subdivision earlier and having to deal with dense geometry for things like animation where it's better to have lighter-weight meshes.


Looks amazing. My dream outcome would be something like Warlords Battlecry 3 in a voxel style.

How the hell do people get into graphics programming or voxels? Seems very difficult as a dirty ol webdev


It's just experience. You start with basic things. There are likely lots of game devs saying "how do you even get into web development, seems difficult as a dirty ol pixel pusher".

There's a lot of tutorials around. You can also join a game jam for some motivation / community.


I hope to become a dirty ol pixel/voxel pusher hehe.

Time to make a start!


If you want some friendly people to ask for advice and find examples/resources, I'd meta-recommend checking out the Pirate Software Discord: https://discord.com/invite/piratesoftware There are lots of people there who are/were in a similar position and can help you out.


Thanks, I would check that out but I don't have a Discord account :( I may make one when I start making games though.


This looks awesome. The demo video was compelling, even though the invisible light sources kept throwing me off. Place some "torches"!


I've got textures I worked on for a temporarily paused hobby project, trying to put the game Strife in UE5: https://github.com/navjack/strife-rtx-pbr-textures I handmade a ton of height maps / displacement maps for the textures, and they might work really well with this.


This is inspiring. Makes me want to try to duplicate the results using old-fashioned bump mapping.


Looks nice, but without shadows it's missing an important (and often very complex) feature.


Sometimes the lack of something shifts our focus elsewhere. Could be a feature.


The benefit of having less is underrated.


Only when you value waiting for what you want, possibly forever, rather than simply disabling what you don't.


Yeah, thinking it may need a second pass to produce shadows. It's not clear to me that you could "bake them in" since textures are reused.

Stretch goal, dynamic lighting — like someone carrying a torch ahead of you in a tunnel and illuminating as they went.

To be sure though, the retro vibe is 100% nailed as is.


Just use a second uv for the lighting.


This would be absolutely perfect for a Riven remake.


Does anyone else remember Sauerbraten? Gives me similar vibes. Should see if it will still run on a modern system.


The author of Sauerbraten is also working on new voxel game tech: https://store.steampowered.com/app/2283580/Voxlands/


Could you use the algo to render a photo as a carved face or profile? Like a netpbm filter to sculpt from an image?


For an actual voxel engine, look up Outcast.

Also, Duke Nukem Forever 2013 with Eduke32 had some voxel models I think.


This is really cool, but with all homemade voxel engines I have to ask the same thing. Where is the game?


It's not a real voxel engine; the world geometry itself isn't much different from Quake's.


Sorry if I am dumb, but... I was wondering whether this could in practice be used to convert, or directly render, old games and graphics (provided the starting point is compatible) in such a "remastered" way.

In any case, I'm on my third read and I think I understood enough to say "wow, looks and feels great. I would play TES Arena like that."


No, at least not in the "automatic" way Nvidia RTX Remix does it. You would not only need to generate the displacement maps for textures but, most importantly, port the game to this new rendering engine. It's an extremely complicated task if done by reverse engineering and hacking the executable, without the ability to read and recompile the source code.


Perfectly clear. Who knows, maybe in the future this could be a step toward a mechanism that does so; in any case, impressive results in themselves! Kudos!


This would look incredible in VR.


Is there a game engine using this? Would be wild.


Impressive.


Hugely impressive.



