As someone who has done a lot of tetrahedral work as well as voxels, there are some downsides to tetrahedra -- fewer cache hits, and the complexity of writing tetrahedral code is significantly higher. In the end I believe the added development and optimization complexity may roughly balance out the benefits.
Great post!
One thing to note with tetrahedral decomposition: while you can always triangulate a non-self-intersecting 2D boundary using only its boundary vertices, this doesn't extend to 3D boundaries and tetrahedral decompositions. You will sometimes need to add extra 'Steiner' vertices.
Maybe not directly relevant to the context in the link, but it's something I found unintuitive, and a trap to watch out for.
See http://www.ics.uci.edu/~eppstein/junkyard/untetra/, and https://en.wikipedia.org/wiki/Sch%C3%B6nhardt_polyhedron for some concrete examples.
Another reason that tetras are handy (for anyone who has looked at FEA) is that you can easily interpolate across them. Maybe not useful now, but if they ever want to add heat transfer or charge or something...
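The interpolation that makes tets handy is barycentric: any point inside a tetrahedron is a unique weighted average of the four vertices, so a value stored per vertex (temperature, charge, ...) interpolates linearly. A minimal sketch in Python/NumPy -- the function names are my own, not from the article:

```python
import numpy as np

def barycentric_coords(p, a, b, c, d):
    """Weights w with w0*a + w1*b + w2*c + w3*d = p and sum(w) = 1."""
    T = np.column_stack((b - a, c - a, d - a))  # 3x3 edge matrix
    w123 = np.linalg.solve(T, p - a)
    return np.array([1.0 - w123.sum(), *w123])

def interpolate(p, verts, values):
    """Linearly interpolate per-vertex scalar values at point p."""
    return barycentric_coords(p, *verts) @ values

# unit tetrahedron with a (hypothetical) temperature at each corner
verts = [np.array(v, float) for v in [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]]
temps = np.array([10.0, 20.0, 30.0, 40.0])
print(interpolate(np.array([0.25, 0.25, 0.25]), verts, temps))  # centroid -> 25.0
```

A nice side effect: inside the tet all four weights are non-negative and sum to 1, so the same computation doubles as a point-in-tetrahedron test.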
This would be really cool combined with laser scanning hardware[1] to generate point clouds of real-life environments. Deformable representations of the real-world.
The author has some good taste. Those tech demos and sandboxes were just beautiful with nice color palettes and surprising polish for what they were. Like, in the first demo, how the screen glitches when they hit the mesh and how the deformation simply layers a track in the music.
Does Atomontage have support for real-time shadows? All of that information seems to be pre-baked into the voxel models, which would severely limit its "everything is destructible" potential.
It should at least be able to support screen-space effects such as deferred shading and SSAO.
If you can render it once, you can render it twice. So just render a shadow buffer pass, and now you have real-time shadows. This applies to any 3D engine.
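The "render it twice" idea is just shadow mapping: render depth from the light's point of view, then compare each shaded point against that depth buffer. A toy CPU sketch with an orthographic light shining along +z (smaller z = closer to the light); the scene points and function names are made up for illustration:

```python
import numpy as np

def build_shadow_map(points, res):
    """Depth pass: keep the nearest z per (x, y) cell, seen from the light."""
    depth = np.full((res, res), np.inf)
    for x, y, z in points:
        i, j = int(x * res), int(y * res)
        depth[i, j] = min(depth[i, j], z)
    return depth

def lit(point, depth, res, bias=1e-3):
    """Shading pass: a point is lit if nothing in the map is nearer the light."""
    x, y, z = point
    return z <= depth[int(x * res), int(y * res)] + bias

occluder = (0.5, 0.5, 0.2)   # nearer the light
receiver = (0.5, 0.5, 0.8)   # directly behind it -> shadowed
side     = (0.1, 0.1, 0.8)   # same depth, different cell -> lit

dm = build_shadow_map([occluder, receiver, side], res=8)
print(lit(occluder, dm, 8), lit(receiver, dm, 8), lit(side, dm, 8))
# True False True
```

The bias term is the standard fix for "shadow acne", where a surface shadows itself through rounding error.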
Would it also make stress calculation (for structural collapse) much easier?
I have been wondering what method they are using in EverQuest Next, and how it compares to this.
Also, from CFD (Computational fluid dynamics) work I have done, cells are defined as a single point in their center. Would it be possible to do it this way and have the points connect to each other to form surfaces? Destroying a point would then cause adjacent points to generate a new surface with the now exposed point/s below. I am not sure how well this would translate though.
EQNext and Landmark are using a voxel engine. I can't speak to implementation on the EQNext side as it's not in a publicly testable form yet, but on the Landmark side it's been quite a while since we've seen any meaningful engine improvements and it's rather primitive in many respects.
Things like needing elaborate hacks to get sub-voxel shapes, and imperfect precision when rendering intersecting edges, seem like limitations of any voxel-based design; the crude handling of lighting and reflections and the very short LOD distance seem to be intentional limitations for performance reasons (understandable, given average voxel counts in player-created areas).
One other issue with voxel engines is isotropy, or rather, anisotropy.
(If you don't recognize what that is, it's the property of having identical properties in all directions, or rather the lack thereof.)
In a voxel engine, you generally can't rotate things by arbitrary amounts. Ditto, physics doesn't work the same way in all directions. Ditto, things are substantially more complex to render in certain directions (look at MC rendering a diagonal wall versus a straight one, for instance.)
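The anisotropy is easy to demonstrate: nearest-neighbor resampling of a voxel grid is exact for 90-degree rotations but lossy for anything in between. A toy 2D-slice sketch (my own illustration, not from any particular engine) using the diagonal-wall example:

```python
import numpy as np

def rotate_nearest(grid, angle):
    """Rotate a square 2D occupancy grid about its center by `angle` (radians),
    resampling with nearest-neighbor -- the only option for pure voxels."""
    n = grid.shape[0]
    c = (n - 1) / 2.0
    out = np.zeros_like(grid)
    cos, sin = np.cos(angle), np.sin(angle)
    for x in range(n):
        for y in range(n):
            # inverse-rotate each target cell back into the source grid
            sx = cos * (x - c) + sin * (y - c) + c
            sy = -sin * (x - c) + cos * (y - c) + c
            ix, iy = int(round(sx)), int(round(sy))
            if 0 <= ix < n and 0 <= iy < n:
                out[x, y] = grid[ix, iy]
    return out

# a one-voxel-thick diagonal wall
n = 15
wall = np.zeros((n, n), dtype=np.uint8)
for i in range(n):
    wall[i, i] = 1

back_90 = rotate_nearest(rotate_nearest(wall, np.pi / 2), -np.pi / 2)
back_45 = rotate_nearest(rotate_nearest(wall, np.pi / 4), -np.pi / 4)
print(np.array_equal(back_90, wall))  # 90-degree round trip is exact
print(np.array_equal(back_45, wall))  # 45-degree round trip drops voxels
```

Rotating by 90 degrees and back recovers the wall exactly; +45 then -45 does not, because voxels get merged or pushed off the grid along the way.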
Though there are ways around some of this, e.g. [1]
Would that work? It probably could. Is it a good idea? It probably is. The reason is that what you are describing could be thought of as a generalization of what is being done now.
I guess that's because the author is trying to make a player-editable world, and 3D fractals don't mix well with that.
It's actually quite hard to use 3D fractals in games (or any interactive application), unless you bake them into voxels or 3D meshes, but then you lose the infinite detail that makes fractals so interesting in the first place.
The rendering techniques used for 3D fractals are very GPU-intensive, and you can't really "edit" the world. But if anyone has been working on that, I'd be interested to see what they came up with.
Yes, but only use fractals for the initial geometry, just to get some structure, then reduce the complexity by imposing a lower limit on the recursive cell size. The routine would run once at level creation, and the geometry would then be optimized.
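To illustrate capping the recursive cell size: this sketch (hypothetical, Python/NumPy) voxelizes a Menger sponge but stops recursing at a fixed depth, leaving a finite grid that could then be handed off for optimization:

```python
import numpy as np

def menger(depth):
    """Voxelize a Menger sponge, capping recursion at `depth`
    (i.e. a lower limit on the recursive cell size)."""
    n = 3 ** depth
    grid = np.ones((n, n, n), dtype=np.uint8)

    def carve(x, y, z, size):
        if size < 3:          # hit the cell-size limit: stop recursing
            return
        s = size // 3
        for i in range(3):
            for j in range(3):
                for k in range(3):
                    if (i == 1) + (j == 1) + (k == 1) >= 2:
                        # center and face-center subcubes are hollowed out
                        grid[x+i*s:x+(i+1)*s,
                             y+j*s:y+(j+1)*s,
                             z+k*s:z+(k+1)*s] = 0
                    else:
                        carve(x + i*s, y + j*s, z + k*s, s)

    carve(0, 0, 0, n)
    return grid

g = menger(2)
print(g.sum())  # 20**2 = 400 filled voxels in a 9x9x9 grid
```

Each recursion level keeps 20 of 27 subcubes, so a depth cap of k leaves 20^k voxels -- plenty of visual structure, finite data.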
I disagree about fractals losing all their interestingness when they aren't zoomable. There's still a slew of visual complexity. Maybe they wouldn't be as interesting, but they are oodles more interesting than spheres.
Really interesting article; the videos + audio made it especially fun to read / watch. Impressive that all this came together in about a year and a half!
Voxels are just so easy.
Here is my past work with tetrahedra:
- https://cs.uwaterloo.ca/~c2batty/papers/Batty10/
- https://cs.uwaterloo.ca/~c2batty/papers/Batty11.pdf