
Graphics is not as trivial a problem as you're making it seem. Even if you're just rendering a single "thing", that thing can have an unbounded number of vertices. And deciding which vertices are important and which are not is definitely not a trivial problem.


Oh, I just meant that it's conceptually easy - things should look 'right' - whereas with scientific data, figuring out how to map it to a 2D screen often involves a lot of careful judgment about what to show and what to hide.

The actual implementations of 3D rendering are gargantuan feats of engineering, science, and art. Having spent time writing code to turn terrain height maps into quantized meshes for 3D rendering, I agree entirely that deciding which vertices are important is not a trivial problem. The OP linked a video about Nanite, which IMHO is a marvel of engineering.
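
For a sense of what that involves, here is a minimal, hypothetical sketch (Python, not the commenter's actual code) of just the first step: turning a regular height-map grid into vertices and triangles. A real quantized-mesh pipeline would then simplify and encode this, which is where the hard "which vertices matter" decisions live.

    import numpy as np

    def heightmap_to_mesh(heights):
        """heights: 2D array of elevations sampled on a regular grid."""
        rows, cols = heights.shape
        xs, ys = np.meshgrid(np.arange(cols), np.arange(rows))
        # one vertex per grid sample: (x, y, elevation)
        vertices = np.column_stack([xs.ravel(), ys.ravel(), heights.ravel()])

        triangles = []
        for r in range(rows - 1):
            for c in range(cols - 1):
                i = r * cols + c                                   # top-left corner of this cell
                triangles.append((i, i + 1, i + cols))             # upper triangle
                triangles.append((i + 1, i + cols + 1, i + cols))  # lower triangle
        return vertices, np.array(triangles)

    verts, tris = heightmap_to_mesh(np.random.rand(64, 64))
    print(len(verts), "vertices,", len(tris), "triangles")  # 4096 vertices, 7938 triangles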


My point is that both you and the person you were replying to share a fundamental misconception about why data is slow and graphics are fast. The reality is that data isn’t slow and graphics aren’t fast, at least not relative to one another.

For graphics, the main performance metric is (generally) triangle count. You can draw lots of low-poly objects on screen more efficiently than you can draw one very high-poly object. The same holds true for data: you can render small amounts of data more efficiently than large amounts of data.

Nanite doesn’t magically render high-poly meshes in real time. Nanite has to pre-process the mesh, producing lower-poly meshes that preserve the geometric properties of the original. This is the main innovation of Nanite, because traditionally it’s been very hard to reduce triangle count while keeping the overall geometry roughly similar to the original.

And in this way, data processing has traditionally been much more efficient than 3D graphics: there have long been various statistical aggregations you can apply to data to keep the same rough statistical properties while using less of it. But you have to be willing to preprocess the data, and that is slow. I haven’t used Nanite, but I imagine the import process is also slow relative to the rendering speed after processing.
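
As a toy illustration of that last point (my own sketch, not anything from Nanite or a specific plotting library): bin a large series into a screen-sized number of buckets and keep only summary statistics per bucket, so the renderer never has to touch the raw points. The aggregation pass over all the points is the slow part; drawing ~2,000 summarized columns afterwards is cheap, which mirrors the slow-import/fast-render split described above.

    import numpy as np

    def downsample(values, n_bins=1920):
        """Reduce len(values) samples to n_bins (min, mean, max) triples."""
        bins = np.array_split(values, n_bins)
        return np.array([(b.min(), b.mean(), b.max()) for b in bins])

    raw = np.random.randn(5_000_000)   # far more points than any screen can show
    summary = downsample(raw)          # one (min, mean, max) per column of pixels
    print(raw.nbytes // 1024, "KiB raw ->", summary.nbytes // 1024, "KiB to draw")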


>The reality is that data isn’t slow and graphics aren’t fast, at least not relative to one another.

Depends on the data/graphics. Modern commercial GPUs are beefy (even the integrated ones), and I suspect the Matplotlib kinds of tools aren't even tapping into a fraction of a percent of their power.

But at the same time, pushing 5 billion triangles at it raw will bring even a decent gaming GPU to its knees, at least for responsive, real-time applications. The trick is to first understand your data (e.g. that 5 billion triangles are useless on a monitor that only has a few million pixels). As you said, even Nanite isn't truly trying to process a trillion triangles raw.
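
Rough arithmetic behind that, assuming a 4K monitor (which is already on the generous side):

    pixels_4k = 3840 * 2160          # ~8.3 million pixels
    triangles = 5_000_000_000        # the 5 billion triangles from above
    print(f"~{triangles / pixels_4k:.0f} triangles per pixel")  # roughly 600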

The steps from that understanding to a good enough approximation are indeed some dark magic.



