Massively Parallel Rendering of Complex Closed-Form Implicit Surfaces (2020) (mattkeeter.com)
72 points by vg_head on April 21, 2021 | 18 comments



This is important for vision ML because sufficiently advanced simulations may include the real world as a special case: https://arxiv.org/abs/1703.06907

For example, gaming pixel maps have been used for semantic simulation, as have rendered scenes: https://arxiv.org/abs/1608.02192 https://www.cv-foundation.org/openaccess/content_cvpr_2016/p...

This concept has not (yet) been applied in audio ML. We have a paper in submission---will be on ArXiv soon---where we share a GPU-enabled modular synthesizer that is 16000x faster than realtime, concurrently released with a 1-billion audio sample corpus that is 100x larger than any audio dataset in the literature. Here's the code: https://github.com/torchsynth/torchsynth


How is it related to ML?

Closed-form implicit surfaces are actually one of the most difficult ways to model the real world. They are neat because they are very compact and the creation process is close to modeling with (mathematical) clay. But they are hard to use if you want to model the real world with all its complexity, which results from a variety of chemical and physical processes happening over time. There is a reason why they are so popular in the demoscene, where technical achievement and art are more important than realism, and not much elsewhere.
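To make "mathematical clay" concrete: a closed-form implicit surface is just a function whose zero level set is the shape, and modeling is composing such functions. A rough CUDA/C++ sketch (the primitives and names are my own, purely for illustration):

    #include <math.h>

    // A shape is a scalar field f(x, y, z); the surface is where f == 0
    // (negative inside, positive outside).  Modeling is function composition.
    __host__ __device__ float sphere(float x, float y, float z, float r) {
        return sqrtf(x*x + y*y + z*z) - r;         // exact signed distance
    }

    __host__ __device__ float box(float x, float y, float z, float h) {
        // Cube of half-width h: not an exact Euclidean distance everywhere,
        // but its zero level set is the cube's surface.
        return fmaxf(fabsf(x) - h, fmaxf(fabsf(y) - h, fabsf(z) - h));
    }

    // CSG operators are just min/max of the underlying fields.
    __host__ __device__ float csg_union(float a, float b)     { return fminf(a, b); }
    __host__ __device__ float csg_intersect(float a, float b) { return fmaxf(a, b); }
    __host__ __device__ float csg_subtract(float a, float b)  { return fmaxf(a, -b); }

    // Example model: a cube with a spherical bite taken out of one corner.
    __host__ __device__ float model(float x, float y, float z) {
        return csg_subtract(box(x, y, z, 1.0f),
                            sphere(x - 1.0f, y - 1.0f, z - 1.0f, 0.8f));
    }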

The paper is about making rendering of these primitives more efficient, which may prove to be a great addition to an artist's toolbox, and maybe to scientific imagery. However, I don't really see applications for ML anytime soon.


Fast-throughput rendering yields larger in-GPU datasets, which are useful for ML pretraining.


Author here, I'm glad to have finally figured out why a bunch of people followed me on Twitter this morning!

If you enjoyed this paper, there's a companion blog post about the actual process of writing it: https://www.mattkeeter.com/projects/siggraph/

(and I'm happy to answer questions, of course)


This certainly looks like a very interesting approach! As I'm not in the field of graphics, could you comment on how this compares to cone marching and how well it can render fractal-like surfaces?


I'm not familiar with cone tracing, but [1] indicates that it's a variation of sphere tracing.

The downside to sphere tracing and similar is that it limits the input model: you have to guarantee that evaluating the model at [x, y, z] gives you a result that's less-than-or-equal to the true (Euclidean) distance to the shape's surface.

(or a distance adjusted by some constant scaling factor, i.e. Lipschitz continuity [2])

This is a relatively fragile property, and really limits what kinds of shapes and transformations you can use when modeling.

Using interval arithmetic is more robust against arbitrary models, at the cost of being less efficient when models are well-behaved.
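To make that concrete: interval evaluation only needs conservative bounds. You evaluate the expression once over a whole region, and if the resulting interval excludes zero, the surface can't cross that region and it can be culled. A rough CUDA/C++ sketch of the idea (not the paper's actual kernels):

    #include <math.h>

    // Conservative [lower, upper] bound of an expression over a whole region,
    // instead of a value at a single point.
    struct Interval { float lo, hi; };

    __host__ __device__ Interval iadd(Interval a, Interval b) {
        return { a.lo + b.lo, a.hi + b.hi };
    }
    __host__ __device__ Interval isub_const(Interval a, float c) {
        return { a.lo - c, a.hi - c };
    }
    __host__ __device__ Interval isquare(Interval a) {
        float l2 = a.lo * a.lo, h2 = a.hi * a.hi;
        float lo = (a.lo <= 0.0f && a.hi >= 0.0f) ? 0.0f : fminf(l2, h2);
        return { lo, fmaxf(l2, h2) };
    }
    __host__ __device__ Interval isqrt(Interval a) {   // assumes a.lo >= 0
        return { sqrtf(a.lo), sqrtf(a.hi) };
    }

    // Evaluate a sphere's field over an axis-aligned box of space.  If the
    // result excludes zero, the surface cannot cross the box: cull it.
    __host__ __device__ bool box_may_contain_surface(Interval x, Interval y,
                                                     Interval z, float r) {
        Interval d = isub_const(
            isqrt(iadd(iadd(isquare(x), isquare(y)), isquare(z))), r);
        return d.lo <= 0.0f && d.hi >= 0.0f;
    }

Because the bounds are conservative, a badly behaved model only costs you failed culls, never a missed surface.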

I don't know much about state-of-the-art fractal rendering! I'd imagine that the fixed (original) tape in MPR would be a limitation here, because you may want to terminate conditionally, rather than evaluating a fixed expression.

[1] http://www.fulcrum-demo.org/wp-content/uploads/2012/04/Cone_...

[2] https://en.wikipedia.org/wiki/Lipschitz_continuity


Super cool to see Matthew's library as well: https://libfive.com/studio/


Fabulous work. The video presentation is only 18 minutes long, well organized, and very accessible -- the author does a great job of explaining how and why the rendering works so efficiently using a simple example in 2D:

https://www.youtube.com/watch?v=_6CnaugAcCc

Highly recommended.


The 2D part is well explained; however, I am interested in how they made it 3D. They have voxels, but the actual rendering is a bit unclear.

I guess they simply didn't do much work on that part, using some brute-force-ish raymarching technique, which their fast evaluation and nicely bounded objects allow. They mention further work though, like sparse voxel octrees, improved culling, etc... So I guess that will be for a "future episode".


Well, in honesty it isn't that different for 3D. The beauty of raymarching would be that you could rewrite the raymarcher slightly to tag the many "instructions" that are ambiguously within bounds when rendering one of their "big squares", and just let them remain in the described tape for all pixels within that screen-space square. I just skimmed the paper, but having written a few raymarchers I can see how the pseudo-code would apply (maybe not for all shapes/distortions, but for many in practice).

The same principle should work for anything with a decently good distance function; reifying a mesh or SVO could always be combined with rough culling of the "tapes", since it's all about the distances.
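Roughly, I'd picture the pruning like the CUDA/C++ sketch below (opcode names and tape layout made up for illustration): when the interval result of one branch of a min/max dominates the other over the whole square, the clause collapses to a copy and the losing subtree falls out of that square's tape.

    // Same [lo, hi] interval type as in the sketch upthread.
    struct Interval { float lo, hi; };

    // One clause of a flattened expression tape: out = op(lhs, rhs).
    enum Opcode { OP_MIN, OP_MAX, OP_ADD, OP_MUL, OP_COPY_LHS, OP_COPY_RHS };
    struct Clause { Opcode op; int out, lhs, rhs; };

    // After interval-evaluating the tape over a region, 'value' holds each
    // slot's bound.  min/max clauses whose winner is already decided become
    // copies, so per-pixel evaluation inside that region skips the loser.
    void shorten_tape(Clause* tape, int n, const Interval* value) {
        for (int i = 0; i < n; ++i) {
            Clause& c = tape[i];
            if (c.op == OP_MIN) {
                if      (value[c.lhs].hi < value[c.rhs].lo) c.op = OP_COPY_LHS;
                else if (value[c.rhs].hi < value[c.lhs].lo) c.op = OP_COPY_RHS;
            } else if (c.op == OP_MAX) {
                if      (value[c.lhs].lo > value[c.rhs].hi) c.op = OP_COPY_LHS;
                else if (value[c.rhs].lo > value[c.lhs].hi) c.op = OP_COPY_RHS;
            }
        }
        // A real implementation would follow up with a dead-code pass so
        // clauses that no longer feed the result drop out of the tape.
    }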


Check out section 4.2 of the paper: 3D is very similar to 2D, using the same strategy of big regions -> small regions -> voxels.

(There's also a bit of extra logic to skip regions which are occluded in Z, plus a final pass to render normals using automatic differentiation)
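If "automatic differentiation" sounds heavyweight: in forward mode it just means evaluating the expression on values that carry their partial derivatives along with them, so the gradient (and therefore the normal) falls out of a single evaluation. A simplified dual-number sketch in CUDA/C++, not the literal kernel from the paper:

    #include <math.h>

    // Each quantity carries its value plus partials w.r.t. x, y, z.
    struct Dual { float v, dx, dy, dz; };

    __host__ __device__ Dual d_const(float c)      { return { c, 0, 0, 0 }; }
    __host__ __device__ Dual d_add(Dual a, Dual b) {
        return { a.v + b.v, a.dx + b.dx, a.dy + b.dy, a.dz + b.dz };
    }
    __host__ __device__ Dual d_sub(Dual a, Dual b) {
        return { a.v - b.v, a.dx - b.dx, a.dy - b.dy, a.dz - b.dz };
    }
    __host__ __device__ Dual d_mul(Dual a, Dual b) {        // product rule
        return { a.v * b.v,
                 a.v * b.dx + b.v * a.dx,
                 a.v * b.dy + b.v * a.dy,
                 a.v * b.dz + b.v * a.dz };
    }
    __host__ __device__ Dual d_sqrt(Dual a) {               // d sqrt(u) = du / (2 sqrt(u))
        float s = sqrtf(a.v), k = 0.5f / s;
        return { s, k * a.dx, k * a.dy, k * a.dz };
    }

    // Seed the coordinates with unit derivatives; the (dx, dy, dz) of the
    // result is the field's gradient -- normalize it to get the normal.
    __host__ __device__ Dual sphere_field(float x, float y, float z, float r) {
        Dual X = { x, 1, 0, 0 }, Y = { y, 0, 1, 0 }, Z = { z, 0, 0, 1 };
        Dual d2 = d_add(d_add(d_mul(X, X), d_mul(Y, Y)), d_mul(Z, Z));
        return d_sub(d_sqrt(d2), d_const(r));
    }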


Really cool!

Need neural radiance fields. Then add super resolution, then add motion prediction and you are on your way to a synthetic visual cortex.

In the future, GPUs will be spec'd by how far ahead they can predict given a power and thermal envelope.


Very, very impressive work. I wonder if it would be feasible for vendors to implement the interpreter in hardware in the future.


Being able to do very basic JITting on the GPU would be a great first step – load/store operations when evaluating the tapes are terrible for memory access, since they use global memory rather than registers.

In another project [1], I found a 2-6x speedup in going from an interpreter to a fully-compiled shader, so this can make a huge difference!

[1] https://www.mattkeeter.com/projects/rayray/
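To illustrate where that overhead comes from, an interpreter's inner loop ends up shaped roughly like this CUDA sketch (illustrative only, not the actual MPR or rayray kernel): every clause is a fetch from the tape plus reads/writes to a scratch array in global memory, where a compiled shader would keep the same intermediates in hardware registers.

    // Illustrative GPU tape interpreter: every thread walks the same clause
    // list for its own (x, y) sample.
    enum Opcode { OP_X, OP_Y, OP_CONST, OP_ADD, OP_MUL, OP_MIN, OP_MAX };
    struct Clause { Opcode op; int out, lhs, rhs; float imm; };

    __global__ void eval_tape(const Clause* tape, int n_clauses,
                              const float* xs, const float* ys,
                              float* result, int n_pixels,
                              float* scratch, int n_slots) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i >= n_pixels) return;

        // Per-thread value slots -- indexed dynamically, so they live in
        // global/local memory instead of registers.
        float* regs = scratch + (size_t)i * n_slots;
        for (int c = 0; c < n_clauses; ++c) {
            Clause k = tape[c];                    // opcode fetch per clause
            switch (k.op) {
                case OP_X:     regs[k.out] = xs[i];                           break;
                case OP_Y:     regs[k.out] = ys[i];                           break;
                case OP_CONST: regs[k.out] = k.imm;                           break;
                case OP_ADD:   regs[k.out] = regs[k.lhs] + regs[k.rhs];       break;
                case OP_MUL:   regs[k.out] = regs[k.lhs] * regs[k.rhs];       break;
                case OP_MIN:   regs[k.out] = fminf(regs[k.lhs], regs[k.rhs]); break;
                case OP_MAX:   regs[k.out] = fmaxf(regs[k.lhs], regs[k.rhs]); break;
            }
        }
        result[i] = regs[tape[n_clauses - 1].out]; // last clause is the field value
    }

A JIT would turn that switch into straight-line arithmetic on registers, which is roughly what the compiled-shader version buys you.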


That's a very cool project of yours I'm going to have to look at when I have the time.


This isn't exactly the right place to ask, but I'm betting people interested in graphics pipeline latency will visit this comment section:

How do people who absolutely need the lowest latency numbers make do with GPUs? I'm not in graphics, but I am in ML, and lately I've been working on research to squeeze as much juice out of GPUs as possible. My last project involved optimizing a pipeline that basically consisted of just a stack of filters and some min/max finding. After throwing everything I could at it, the fastest I could get it to go was ~20ms. That's ~50Hz. Admittedly it was a tall stack, but I still don't understand how game devs (for example) get complex pipelines to finish within the 60Hz/16ms budget given you never have access to the bare-metal GPU.


I'm not an expert, but in my experience working with CUDA/OpenCL/OpenGL, GPU latency usually comes down to data transfer and reduction operations (like max or sum). If you are okay with some kind of double-buffering, you can usually "hide" those latencies.
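Concretely, the usual pattern is to split the work into chunks and alternate between two buffers/streams, so one chunk's transfers overlap with another chunk's kernel. A rough CUDA sketch (process_chunk is just a stand-in for your filter stack, and host memory would need to be pinned for the copies to actually overlap):

    #include <cuda_runtime.h>
    #include <math.h>

    // Stand-in for the real filter stack + min/max pipeline.
    __global__ void process_chunk(const float* in, float* out, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) out[i] = fminf(in[i], 1.0f);
    }

    // Double-buffered pipeline: one chunk's upload and kernel overlap with
    // another chunk's download, hiding most of the transfer latency.
    void run_pipeline(const float* host_in, float* host_out,
                      int n_chunks, int chunk_elems) {
        cudaStream_t stream[2];
        float *d_in[2], *d_out[2];
        size_t bytes = (size_t)chunk_elems * sizeof(float);
        for (int b = 0; b < 2; ++b) {
            cudaStreamCreate(&stream[b]);
            cudaMalloc(&d_in[b], bytes);
            cudaMalloc(&d_out[b], bytes);
        }

        for (int i = 0; i < n_chunks; ++i) {
            int b = i & 1;                         // alternate the two buffers
            const float* src = host_in  + (size_t)i * chunk_elems;
            float*       dst = host_out + (size_t)i * chunk_elems;

            cudaMemcpyAsync(d_in[b], src, bytes, cudaMemcpyHostToDevice, stream[b]);
            process_chunk<<<(chunk_elems + 255) / 256, 256, 0, stream[b]>>>(
                d_in[b], d_out[b], chunk_elems);
            cudaMemcpyAsync(dst, d_out[b], bytes, cudaMemcpyDeviceToHost, stream[b]);
        }
        cudaDeviceSynchronize();

        for (int b = 0; b < 2; ++b) {
            cudaFree(d_in[b]);
            cudaFree(d_out[b]);
            cudaStreamDestroy(stream[b]);
        }
    }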


Looking forward to watching this when I have some down time... But, whew, I did a double-take when I saw "VR/ML workstation"



