A new way of rendering particles (simppa.fi)
100 points by plurby on Jan 24, 2016 | hide | past | favorite | 13 comments



This is extremely unlikely to be a good technique (and it isn't new), but here are two alternatives. Even in the demo video, beneath the lens flares and other excess, the particles are aliasing like crazy.

First of all, particles like this are usually only a few pixels in size, so the shape doesn't really matter as much as the area. Because of this, I don't think there is much gained, but there is a lot lost since it aliases so badly.

RenderMan actually creates particles as two bilinear patches that make up the same shape as the intersection given by these two triangles. You could do the same thing with a quad strip, which would still take only 6 vertices. The GPU will deal with the quads itself, possibly by creating two triangles for each quad. This is very unlikely to be a performance hit since it does not affect shading.

A second way of creating a semi-transparent particle would be to make a triangle strip that rotates around one center point. The center point has an alpha of 1, the outer points have an alpha of 0. The interpolation takes care of the gradient transparency toward the edges, and even with 5 outer vertices the shape plus gradient should easily look good enough to hold up while taking up around 16 pixels.
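As a rough sketch of that fan-style gradient particle, here is how the vertex list could be built (the function name and layout are my own illustration, not from any particular engine):

```python
import math

def gradient_fan(cx, cy, radius, n_outer=5):
    """Build a fan-style gradient particle: an opaque centre vertex
    surrounded by fully transparent rim vertices. Returns a list of
    (x, y, alpha) tuples; alpha interpolates to 0 toward the edge."""
    verts = [(cx, cy, 1.0)]              # centre: alpha = 1
    for i in range(n_outer + 1):         # +1 duplicates the first rim
        a = 2 * math.pi * i / n_outer    # vertex to close the fan
        verts.append((cx + radius * math.cos(a),
                      cy + radius * math.sin(a),
                      0.0))              # rim: alpha = 0
    return verts

fan = gradient_fan(0.0, 0.0, 1.0)
print(len(fan))  # 7 vertices: 1 centre + 5 rim + 1 closing duplicate
```

With hardware interpolation of the per-vertex alpha, the rasterizer produces the radial gradient for free.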

Shadows - the best way in these situations to control shadows is usually to scale down the particle (or hair). Remember that the shape or look on a micro scale doesn't matter as much as the integral of the area * integral of the transparency.
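A toy calculation of that area-times-transparency point: at sub-pixel scale, halving a particle's opacity and halving its area darken the shadow by the same amount (the numbers here are made up for illustration):

```python
# What the eye integrates at a micro scale is area * opacity, so a
# half-transparent particle and a half-area opaque particle block the
# same total light.
def shadow_weight(area, opacity):
    return area * opacity

full = shadow_weight(4.0, 0.5)    # 4 px^2 at 50% opacity
scaled = shadow_weight(2.0, 1.0)  # 2 px^2, fully opaque
print(full == scaled)  # True
```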


"The GPU will deal with the quads itself, possibly by creating two triangles for each quad. This is very unlikely to be a performance hit since it does not affect shading."

No modern GPU (or API, in fact) deals with quads. You can't guarantee that all four vertices will be coplanar when dealing with floating-point values, so you're much better off pushing triangles and not forcing the driver to do the conversion for you. Also, if vertex data bandwidth is scarce in a particle-heavy scene (so you actually care whether it's 3 or 6 or however many vertices), just push one vertex and expand it in the geometry shader like everyone else does.
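A CPU-side sketch of the one-vertex-in, quad-out expansion a geometry shader performs per particle (the function and its parameters are hypothetical; a real GS would emit camera-facing corners in clip space):

```python
def expand_point(px, py, pz, half):
    """Mimic a geometry shader's billboard expansion: take one point
    and emit the four corners of a screen-aligned quad, ordered as a
    triangle strip (4 vertices -> 2 triangles)."""
    return [
        (px - half, py - half, pz),
        (px + half, py - half, pz),
        (px - half, py + half, pz),
        (px + half, py + half, pz),
    ]

corners = expand_point(0.0, 0.0, 5.0, 1.0)
print(len(corners))  # 4 strip vertices per particle pushed
```

The bandwidth saving is the point: only one vertex per particle crosses the bus, and the expansion happens on-chip.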


I would love to see a case, even contrived, where planar quads produce artifacts from floating point precision but I think it would be extremely difficult to demonstrate it even on an enormous ground plane quad, let alone a subpixel quad.

I'm not sure why you would say no modern API deals with quads; what would you call the ability to draw quads and quad strips from GPU buffers in OpenGL?

Geometry shaders should work well, what I described is an approach for lower tech situations.


Pretty much any set of 4 floating-point vectors that don't share a z coordinate will end up non-coplanar at some point (if you insist on going barycentric for texture iteration, or if you plan on using half-planes to do intersection, or whatever). That's why hardware tends to use a fixed-point internal representation in such places. But yeah, whatever, it's almost 1am and I shouldn't be spending time convincing people online that GPU developers know a thing or two about GPUs. :) But at the very least trust me on one thing: quads are dead. They've been dead for a long, long time now. Low-end or high-end, quads make no sense at all.


Seems like a very simple expansion on this technique would be to keep a rolling framebuffer, and blend the last particle draw with the current one. This would remove the flickering, display the fake alpha properly for still screenshots, and work for variable framerate.

Assuming your GPU will let you hold onto a previous frame's draw result (of just the particles) and blend it relatively cheaply with the new one, it wouldn't cost very much to implement. One full framebuffer blend should be much, much cheaper than blending the individual dots.
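A minimal sketch of that rolling blend, treating each framebuffer as a flat list of channel values (the names and the 0.5 weight are assumptions for illustration; on a GPU this would be a single full-screen blend pass):

```python
def blend_frames(prev, curr, weight=0.5):
    """Lerp the previous particle pass with the current one.
    `weight` is the share of the new frame; 0.5 halves the flicker
    at the cost of keeping one extra buffer around."""
    return [p * (1 - weight) + c * weight for p, c in zip(prev, curr)]

# Two frames of a flickering particle: lit in one frame, dark the next.
frame_a = [1.0, 0.0, 1.0]
frame_b = [0.0, 0.0, 1.0]
print(blend_frames(frame_a, frame_b))  # [0.5, 0.0, 1.0]
```

The flickering pixel settles at its average, which is exactly the fake alpha the stochastic technique relies on.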


You could do that for the whole frame, though. The reason it isn't done is that it is essentially a blur, and it makes motion streaky. You might think this would be OK for particles, but the issue won't be particle motion so much as camera motion.


This is really cool. I'm curious how this will look on a Gear VR, since those run at 60fps and require every optimization they can get to maintain that on a cell phone.


I have been looking for a better particle. Thanks! One strange advantage of putting new features off: better ways of doing them suddenly appear.


I really like this. It would be nice to see some benchmarks against a quad-based solution where the quad is tessellated in a geometry shader, with a fragment shader drawing the particle or sampling a texture. My guess would be that the bottleneck is PCIe, but my knowledge of GPU performance is a little outdated.


How is it any different from the idea of voxels?


A particle is a free 3D coordinate. Particle rendering refers to several techniques for visualizing a volumetric system that focus on minimizing the per-particle workload:

https://en.m.wikipedia.org/wiki/Particle_system

A voxel is a value in a 3D lattice.

https://en.m.wikipedia.org/wiki/Voxel

A particle system could be used to visualize a voxel field, of course, but their application range is far wider.
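To make the contrast concrete, a toy sketch (the data layouts are my own illustration): particles are free floating-point positions, while voxels are values addressed by integer lattice indices.

```python
# A particle system: any float position in space, one entry per particle.
particles = [(0.3, 1.7, -2.2), (4.1, 0.0, 0.9)]

# A (sparse) voxel field: value stored *at* an integer grid cell.
voxels = {}
voxels[(0, 1, -2)] = 0.8  # e.g. a density sample

print(len(particles), voxels[(0, 1, -2)])
```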


If this sounds interesting to you, I suggest also checking out TXAA. It exploits a similar concept to get high-quality, cheap AA.


This brilliantly insightful heuristic will work well for showing off point clouds in WebGL.



