
(Note the following is a simplified description of the classic forward rendering process; the so-called deferred rendering technique is a bit different.)

A GPU turns an abstract vector shape like a triangle, defined by three vertices and data such as a normal associated with each vertex, into a stream of fragments, one (or more if multisampling) for each pixel in the output buffer that’s covered by the shape. This part is all done in hardware.
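This rasterization step can be sketched in software as a barycentric coverage test (a toy illustration, not how the fixed-function hardware is actually implemented; all names are made up for this sketch):

```python
# Toy rasterizer: for each pixel in the triangle's bounding box, an
# edge-function (barycentric) test decides whether the pixel center is
# covered, and a fragment coordinate is emitted if so.

def edge(ax, ay, bx, by, px, py):
    # Signed area of the parallelogram spanned by (a->b, a->p); its
    # sign tells which side of edge a->b the point p lies on.
    return (bx - ax) * (py - ay) - (by - ay) * (px - ax)

def rasterize(v0, v1, v2, width, height):
    """Yield (x, y) pixel coordinates covered by triangle v0-v1-v2."""
    xs = [v0[0], v1[0], v2[0]]
    ys = [v0[1], v1[1], v2[1]]
    x_min, x_max = max(0, int(min(xs))), min(width - 1, int(max(xs)))
    y_min, y_max = max(0, int(min(ys))), min(height - 1, int(max(ys)))
    area = edge(*v0, *v1, *v2)
    if area == 0:
        return  # degenerate triangle covers nothing
    for y in range(y_min, y_max + 1):
        for x in range(x_min, x_max + 1):
            # Sample at the pixel center, as GPUs do.
            px, py = x + 0.5, y + 0.5
            w0 = edge(*v1, *v2, px, py)
            w1 = edge(*v2, *v0, px, py)
            w2 = edge(*v0, *v1, px, py)
            # Inside if all three edge functions agree in sign
            # with the triangle's signed area.
            if all(w * area >= 0 for w in (w0, w1, w2)):
                yield x, y
```

(The real hardware additionally follows precise fill rules for shared edges and can test many pixels per clock, but the coverage decision is the same idea.)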

A fragment is a pixel coordinate plus user-supplied data that’s either constant, called uniform, or the aforementioned vertex data interpolated across the triangle face, called varying. This interpolation business is again done in hardware and not programmable.
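The interpolation of varying data can be expressed with barycentric weights, as in this sketch (illustrative names; note that real GPUs also apply perspective correction, which is omitted here):

```python
# Blend per-vertex "varying" values across a triangle face using
# barycentric coordinates, as the fixed-function interpolator does.

def interpolate_varying(p, v0, v1, v2, c0, c1, c2):
    """Interpolate per-vertex values c0, c1, c2 at a point p inside
    the 2D triangle v0-v1-v2."""
    def edge(a, b, q):
        return (b[0] - a[0]) * (q[1] - a[1]) - (b[1] - a[1]) * (q[0] - a[0])
    area = edge(v0, v1, v2)
    # Each vertex's weight is the signed area of the sub-triangle
    # opposite it, normalized by the full triangle's area; the three
    # weights sum to 1 for points inside the triangle.
    w0 = edge(v1, v2, p) / area
    w1 = edge(v2, v0, p) / area
    w2 = edge(v0, v1, p) / area
    return w0 * c0 + w1 * c1 + w2 * c2
```

At a vertex the interpolated value equals that vertex's value, and at the centroid it is the average of all three, which is exactly the smooth gradient you see when vertex colors differ.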

The fragment shader takes a fragment as input and based on the data computes a color, which is (after a couple more stages) output on the screen (or offscreen buffer) as the color of the respective pixel. This could be anything from a constant color to complex lighting calculations. In GPU rendering, this is all massively parallel, with countless fragments being processed simultaneously at any moment. Shaders are pure, stateless functions: the only data they can access is the input, and the only effect they can have is to return a color (and a few other things like a depth value).
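A fragment shader written as the pure function described above might look like this toy Lambertian example (the parameter names here are illustrative: the normal stands in for an interpolated varying, while the light direction and base color stand in for uniforms):

```python
# Toy fragment shader: pure function from inputs to a color, with no
# access to any other state and no side effects.

def normalize(v):
    length = sum(c * c for c in v) ** 0.5
    return tuple(c / length for c in v)

def fragment_shader(normal, light_dir=(0.0, 0.0, 1.0),
                    base_color=(1.0, 0.5, 0.2)):
    """Lambertian (N dot L) shading: brightness is the cosine of the
    angle between the surface normal and the light direction."""
    n = normalize(normal)
    l = normalize(light_dir)
    # Clamp at zero so surfaces facing away from the light render
    # black rather than producing negative color values.
    intensity = max(0.0, sum(a * b for a, b in zip(n, l)))
    return tuple(c * intensity for c in base_color)
```

The GPU invokes this function once per fragment, in parallel, which is why the purity constraint matters: with no shared state, any number of invocations can run simultaneously in any order.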

So in a nutshell, the fixed-function GPU hardware computes which pixels should be filled to draw each triangle, while the fragment shader determines the color value of each of those pixels.


