This program uses a powerful technique called raymarching, raytracing's big brother. Instead of tracing each ray in tiny increments and returning intersections, the ray is 'marched' in larger steps, vastly increasing render speed (slowness being the downfall of classical raytracers).
How do we 'march' rays without missing geometry? With the aid of signed distance functions: functions that take an arbitrary point in 3-space and evaluate to 0 if that point is on the 'shell' of the geometry. For example, the signed distance function for a sphere:
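The sphere SDF alluded to here is typically just "distance from the center, minus the radius"; a minimal sketch in Python (function and parameter names are illustrative, not from the original):

```python
import math

def sphere_sdf(p, radius=1.0):
    """Signed distance from point p = (x, y, z) to a sphere centered at
    the origin: negative inside, zero on the shell, positive outside."""
    return math.hypot(p[0], p[1], p[2]) - radius
```

In a shader this would be the one-liner `length(p) - radius`.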
This lets us do interesting things such as transform the point p to transform our object, take the minimum of two sphere SDFs to union the shapes, take the maximum to produce a venn-diagram-style intersection, or take the maximum of one SDF and the negation of another to 'cut out' a hole, etc. The logic is a bit backwards at first, but it clicks quite quickly and one can rapidly construct complex geometry without much optimization/shader magic. For example, this is a realtime capture of a 64k self-contained executable. No 3D models, no textures, no assets - just raymarching and math: [https://www.youtube.com/watch?v=ie4u2i_5OdE]
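The point transforms, boolean combinations, and the marching loop itself can be sketched like this (a toy CPU version under illustrative names; a real renderer does this per-pixel in a shader):

```python
import math

def sphere(p, r):
    # SDF of a sphere of radius r centered at the origin.
    return math.hypot(p[0], p[1], p[2]) - r

def translate(p, offset):
    # Moving the sample point in the opposite direction moves the object.
    return (p[0] - offset[0], p[1] - offset[1], p[2] - offset[2])

def union(d1, d2):     return min(d1, d2)   # both shapes
def intersect(d1, d2): return max(d1, d2)   # venn-diagram overlap
def subtract(d1, d2):  return max(d1, -d2)  # cut shape 2 out of shape 1

def scene(p):
    # A unit sphere with a second, offset sphere cut out of it.
    a = sphere(p, 1.0)
    b = sphere(translate(p, (0.8, 0.0, 0.0)), 1.0)
    return subtract(a, b)

def march(origin, direction, max_steps=128, eps=1e-4, max_dist=100.0):
    """Sphere tracing: step forward by the SDF value, the largest step
    guaranteed not to skip over any surface. Returns the hit distance
    along the ray, or None on a miss."""
    t = 0.0
    for _ in range(max_steps):
        p = tuple(o + t * d for o, d in zip(origin, direction))
        d = scene(p)
        if d < eps:
            return t          # hit the shell
        t += d
        if t > max_dist:
            break
    return None               # ray escaped the scene
```

A ray fired along +x from (-3, 0, 0) hits the unit sphere's shell at distance 2, which the march converges to.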
> Instead of tracing along each ray in tiny fragments and returning intersections
I dunno if that’s how it’s done now, but the old CG end-of-semester assignment was a constructive solid geometry ray-tracer to render not-LEGO bricks. It was simple ray-intersection against simple geometry unioned/diffed together, and didn’t use any stepping along rays.
Classic raytracers never march or step: the ‘downfall’ is that they need to test every object in the scene for intersection, and the objects need to be easy to test for ray intersection.
One optimization is to break a scene into chunks. First you see which chunk a ray intersects, and then you test the ray against each object in that chunk.
This lets you avoid testing most objects for intersections, but it adds bookkeeping overhead, so it only works really well for specific kinds of scenes - i.e., not the classic single ball and a few walls.
There are also multiple ways to handle chunks. For a space RTS game you might box up all the ships in a squad and first test the box around the squad before testing each ship. For Minecraft you can just split the world into arbitrary boxes that don’t change frequently.
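The chunk test in both of those examples boils down to a ray-vs-axis-aligned-bounding-box check; a minimal slab-method sketch in Python (names illustrative, and assuming nonzero direction components for brevity):

```python
def ray_hits_aabb(origin, direction, box_min, box_max):
    """Slab method: clip the ray against each axis-aligned slab and
    check that the entry/exit intervals overlap. Assumes every
    direction component is nonzero."""
    t_near, t_far = float('-inf'), float('inf')
    for o, d, lo, hi in zip(origin, direction, box_min, box_max):
        t1, t2 = (lo - o) / d, (hi - o) / d
        if t1 > t2:
            t1, t2 = t2, t1
        t_near, t_far = max(t_near, t1), min(t_far, t2)
    # Overlapping interval with an exit in front of the origin = hit.
    return t_near <= t_far and t_far >= 0.0
```

Only rays that pass this cheap test go on to the expensive per-object intersection tests inside the chunk.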
This optimization can be made systematic. Starting from polygon-based geometry, an algorithm automatically computes a set of "boxes" (actually hyperpolyhedra in the general case, polygons in 2D). This lets you very quickly find what intersects any ray without checking every object.
See https://en.wikipedia.org/wiki/Binary_space_partitioning
It's what enabled Doom to render complex geometry in real time on an i386 processor.
Except that the state of the art has evolved to bounding volume hierarchies, which are both easier to handle and more performant. I won't recommend implementing BSP or kd-trees because they have many more corner cases and numerical issues.
Absolutely, testing against bounding boxes is a quick way to exclude candidates, and frustum culling with an octree (or similar) also greatly speeds up intersection tests
An advantage of ray-marching is that the acceleration structure is built into the shader math, so it requires no additional memory or loading.
This allows the fractal to evolve in realtime, which could not be done with ray-tracing without regenerating the bounding boxes or octree every single frame.
Your description portrays raymarching as an optimization over raytracing and that's simply not the case at all.
Raymarching enables rendering objects/scenes raytracing can't readily handle (like procedurally generated fractals), and tends to be costly because of how the ray must be "marched" towards the surfaces rather than directly computing intersections.
The only form of raytracing I'm familiar with that employs anything resembling raymarching is the classical 80s-era KD-tree acceleration structure paper, which stepped the ray at hyperplane boundaries to confine the costly ray-object intersection tests to objects in the intersected axis-aligned bounding boxes.
Ray tracing does not do any kind of stepping/marching or anything similar. It's incredibly odd that you would claim it does, especially so authoritatively.
- ray tracing refers to the general problem of finding an intersection between a ray and the closest scene surface. It is most commonly associated with techniques that intersect discrete primitives like triangles and usually employ acceleration structures like bounding volume hierarchies or kd-trees to make that fast.
- ray marching is a related, but actually separate technique where the algorithm steps along the ray in discrete steps. It can accumulate values along the way, if needed. This is used e.g. for volume visualizations (e.g. 3D CT scan visualization or clouds in movies). You can also use that concept to find intersections with surfaces that do not have/require closed-form solutions for ray intersections. This is how you eventually arrive at sphere tracing for signed distance fields and the stunning images by iq et al. that are created entirely from math expressions for the surfaces.
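The fixed-step, value-accumulating flavor of ray marching described in the second bullet can be sketched like this (the density field, step size, and names are all illustrative):

```python
def march_volume(origin, direction, density, step=0.1, n_steps=100):
    """Fixed-step ray marching through a volume: sample a density field
    at regular intervals along the ray and accumulate opacity
    front-to-back, attenuating by the remaining transmittance."""
    transmittance = 1.0
    accumulated = 0.0
    for i in range(n_steps):
        t = i * step
        p = tuple(o + t * d for o, d in zip(origin, direction))
        absorb = density(p) * step          # opacity of this segment
        accumulated += transmittance * absorb
        transmittance *= (1.0 - absorb)
        if transmittance < 1e-3:            # early out: effectively opaque
            break
    return accumulated
```

This is the pattern behind CT-scan and cloud rendering: there is no surface to intersect, so the ray must sample the medium at intervals rather than solve for a hit point.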
>Marble Marcher is entirely ray-traced in real time and is played on the surface of evolving fractals.
That’s the very first line of the source page, so why do you say it uses raymarching? Are these terms used interchangeably, as in marching is one method of doing tracing?
Most people don't know what ray-marching is, but they are familiar with ray-tracing, which is similar. So I use the correct "ray-marching" term where I have a more technical audience, like my videos, and "ray-tracing" for a more general audience, like a gamer would see.
I would have said ray-marching here on HN, but someone else posted this, not me :P
I did a little open source game recently where all the rendering is based on SDFs, rendered in GLSL but without raymarching—all 2D: https://github.com/westoncb/under-game
What an excellent explanation! Can this work in reverse? As in, given an image of dimension d, can a signed distance function be used to infer the geometry? I'm hoping it might be useful for visualizing/cataloguing strange attractors in audio or sequential data, for example, or for extracting beautiful idealized geometries from blocky voxels.
Approachable overview of raymarching & SDFs: [https://jamie-wong.com/2016/07/15/ray-marching-signed-distan...]
Learn more: [https://iquilezles.org/www/articles/raymarchingdf/raymarchin...]