
I find it quite funny that 2D vector graphics and 3D applications such as computational fluid dynamics simulations have to solve pretty much the same problem.

In 3D we use STL geometries, NURBS/T-splines from CAD, and signed-distance fields, often all of them simultaneously in the same simulation, and for a big 3D volume (with 10^9-10^10 "cells"/"3D pixels") we have to figure out which of these cells are inside or outside the geometry. Our 3D domain is adaptive and dynamic, tracking the movement of bodies and features of the flow, so we have to update it on every iteration, and all of this has to happen in distributed memory on 100,000-1,000,000 cores, without blocking.
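
As a minimal sketch of that classification step (my own illustration, not production code), assuming a uniform grid and a single analytic sphere SDF standing in for the real STL/NURBS/SDF inputs: sample the field at cell corners and mark each cell as inside, outside, or cut by the surface.

    import numpy as np

    def sphere_sdf(x, y, z, center=(0.0, 0.0, 0.0), radius=0.5):
        # Signed distance to a sphere: negative inside, positive outside.
        cx, cy, cz = center
        return np.sqrt((x - cx)**2 + (y - cy)**2 + (z - cz)**2) - radius

    n = 64                                 # cells per axis (real runs use far more)
    edges = np.linspace(-1.0, 1.0, n + 1)  # cell-corner coordinates
    X, Y, Z = np.meshgrid(edges, edges, edges, indexing="ij")
    phi = sphere_sdf(X, Y, Z)              # SDF sampled at cell corners

    # A cell is "cut" if its corner values change sign; otherwise the sign
    # of any corner decides inside vs. outside.
    corners = np.stack([phi[i:i + n, j:j + n, k:k + n]
                        for i in (0, 1) for j in (0, 1) for k in (0, 1)])
    inside  = corners.max(axis=0) < 0.0
    outside = corners.min(axis=0) > 0.0
    cut     = ~(inside | outside)
    print(inside.sum(), outside.sum(), cut.sum())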

There is a lot of research on, e.g., how to update signed-distance fields quickly, in parallel and in distributed memory, when the geometry slightly moves or deforms, as well as how to use signed-distance fields to represent sharp corners, how to extract the input geometry "as accurately as possible" from a signed-distance field, and how big the maximum error is. The Journal of Computational Physics, Computer Methods in Applied Mechanics and Engineering, and the SIAM journals are often full of this type of research.
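
One common building block from that literature, sketched here under simplifying assumptions (analytic sphere SDF, rigid translation only, all names hypothetical): for a rigid motion the updated field can be obtained by evaluating the old one at back-transformed points, phi_new(x) = phi_old(x - t), and the work can be restricted to a narrow band of cells around the interface.

    import numpy as np

    def sphere_sdf(p, center, radius):
        # Exact SDF of a sphere; p has shape (..., 3).
        return np.linalg.norm(p - center, axis=-1) - radius

    n = 64
    g = np.linspace(-1.0, 1.0, n)
    P = np.stack(np.meshgrid(g, g, g, indexing="ij"), axis=-1)  # (n, n, n, 3)

    center, radius = np.array([0.0, 0.0, 0.0]), 0.5
    phi_old = sphere_sdf(P, center, radius)

    # The body translates by t; composing with the inverse motion is exact
    # for rigid transforms: phi_new(x) = phi_old(x - t).
    t = np.array([0.02, 0.0, 0.0])
    band = np.abs(phi_old) < 0.1        # only touch cells near the surface
    phi_new = phi_old.copy()
    phi_new[band] = sphere_sdf(P[band] - t, center, radius)
    # Values far from the interface go stale, which narrow-band level-set
    # schemes accept and periodically repair by reinitialization.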

For computer graphics, errors are typically OK. But for engineering applications, the difference between a "sharp" edge and a smoothed one can mean a completely different flow field, which results in completely different physical phenomena (e.g. turbulent vs. laminar flow) and completely different loads on a structure.
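
A small worked example of that last point, again purely illustrative: the exact SDF of a square has a corner in its zero level set, but reconstructing the geometry from grid samples (bilinear interpolation here, as a stand-in for whatever the solver does) rounds that corner, and the error near the kink shrinks only about linearly with the cell size.

    import numpy as np
    from scipy.interpolate import RegularGridInterpolator

    def box_sdf(x, y, half=0.5):
        # Exact SDF of an axis-aligned square of half-width `half`.
        qx, qy = np.abs(x) - half, np.abs(y) - half
        outside = np.sqrt(np.maximum(qx, 0.0)**2 + np.maximum(qy, 0.0)**2)
        inside = np.minimum(np.maximum(qx, qy), 0.0)
        return outside + inside

    h = 0.05                                # grid spacing
    g = np.arange(-1.0, 1.0 + h, h)
    Gx, Gy = np.meshgrid(g, g, indexing="ij")
    interp = RegularGridInterpolator((g, g), box_sdf(Gx, Gy))

    # Compare exact vs. interpolated SDF along the diagonal through the
    # corner of the square at (0.5, 0.5).
    s = np.linspace(0.5, 0.6, 50)
    exact = box_sdf(s, s)
    approx = interp(np.column_stack([s, s]))
    print("max error near the corner:", np.abs(approx - exact).max())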



