"Which, if you’re like me and didn’t finish their college level math courses, means absolutely nothing. I dropped out of art school, so it’s mostly over my head."
This is why Ben Golus's posts on shaders are the best. Cause they're actually accessible.
This was really interesting. I don't love the fading out to solve the moire in the final solution. I wonder if some dithering would feel better and give the impression there are lines out there rather than a smooth gray surface? Or maybe some jitter to break up the pattern?
This isn't shown in the post, but the moire often feels worse when you're walking around and it has a movement of its own (either flickering/shimmering or sweeping across uv space), so getting rid of it is probably a decent win even if the fix is imperfect.
As far as dithering-while-in-motion goes, Lucas Pope has a whole series of posts [0] [1] [2] [3] [4] on his various attempts at implementing the 1-bit dithering effect for his game Return of the Obra Dinn. It seems like for every strategy that does work well, there are many more that don't (or at least, not at the scale he applies it).
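To make the dithering idea above concrete, here's a speculative sketch (not from the article) of thresholding the faded grid intensity against a 4x4 Bayer matrix, so the distant gray region breaks into a stable screen-space line pattern instead of a flat fill. Function and parameter names are illustrative; requires GLSL with array constructors (e.g. #version 330 / 300 es).

    // 4x4 ordered-dither threshold, values spread over (0, 1)
    float bayer4x4(vec2 fragCoord) {
        int x = int(mod(fragCoord.x, 4.0));
        int y = int(mod(fragCoord.y, 4.0));
        // classic 4x4 Bayer matrix, values 0..15
        int m[16] = int[16]( 0,  8,  2, 10,
                            12,  4, 14,  6,
                             3, 11,  1,  9,
                            15,  7, 13,  5);
        return (float(m[y * 4 + x]) + 0.5) / 16.0;
    }

    float ditheredGrid(float gridIntensity, float fade, vec2 fragCoord) {
        // fade: 0 near the camera (keep crisp lines), 1 in the distance
        float g = mix(gridIntensity, 0.5, fade);  // the smooth gray fade
        return step(bayer4x4(fragCoord), g);      // 1-bit dithered version
    }

Whether this actually looks better in motion is exactly the question Lucas Pope's posts wrestle with; a screen-space Bayer pattern is stable under camera rotation only if you take extra care.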
Plugins are unnecessary. On the right side of your address bar in Firefox should be a little icon of a page. Click it to enter Reader View, which can apply various themes to a page.
This is really awesome. To ask a dumb question, what's a good way to get acclimated with running and building shaders? Just going straight to OpenGL tooling and extrapolating from there?
Depending on what you're after, the common recommendation to start with Book of Shaders or ShaderToy may be counterproductive. In my own work, fragment shaders are where I spend a lot of time and effort, and I do a decent amount of computation directly on textures, but the sort of full-screen-quad 'picture in a box' shaders you write in Book of Shaders and ShaderToy is, I believe, unproductively mind-shattering and abstract for beginners. To learn shader fundamentals I'd suggest writing a particle system (in compute or in vertex+fragment) or even a raymarching renderer, rather than banging your head against animating patterns in a single frag shader.
What makes ShaderToy great is the extremely fast turnaround time.
Make a change, compile (in well under 0.1s), and you immediately see the result. In Unity/Unreal you can also work on shaders, and in certain places see results in near real time too, but it takes a few more clicks and saves etc.
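For anyone wanting to try that turnaround loop, this is roughly ShaderToy's default starter shader; paste it at shadertoy.com and edit it live. `iResolution` and `iTime` are uniforms ShaderToy supplies automatically.

    // Minimal ShaderToy fragment shader: an animated color gradient.
    void mainImage(out vec4 fragColor, in vec2 fragCoord) {
        vec2 uv = fragCoord / iResolution.xy;  // normalized 0..1 coords
        vec3 col = 0.5 + 0.5 * cos(iTime + uv.xyx + vec3(0.0, 2.0, 4.0));
        fragColor = vec4(col, 1.0);
    }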
This is great! I have a little viewer app for a code-based CAD tool, and I'd been putting off adding a grid because I hadn't found a satisfactory solution until I read this article :)
Beautiful work. I'm happy to read it from start to finish. Not at all sure I have an application for this, but I completely get why this is a beautiful thing :)
Very nice, I'd use it as one of the starting points if I were to learn 3D graphics, as it touches on a lot of math details in a seemingly simple problem.
I have an even better darn grid shader that I use in my graphics projects.
The shader in this article wants to emulate the look of a sampled texture so that it blurs to medium gray at distance while avoiding moire patterns. And it does indeed look quite good at what it's aiming for.
I on the other hand wanted an "infinitely zoomable grid paper" look with constant-ish pixel width lines, such that the density of the grid changes. If applied to a "ground" plane, the grid gets sparser near the horizon. When you zoom in, the grid gets denser with a new "decade" of grid fading in with the old grid fading out.
I generally apply this to a "full screen triangle", and do a raycast against the ground plane (e.g. the y = 0 plane) and extract the "uv" coordinates from the raycast result (that would be the x,z coordinate of the ray hit). I've also applied this technique to a skybox, where I take the view ray direction unit vector and convert it to spherical coordinates (latitude, longitude) for the UV.
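The ground-plane setup described above can be sketched like this (variable names are illustrative, not from the original code): intersect the view ray with the y = 0 plane and use the hit point's x,z coordinates as the grid UV.

    // Intersect a view ray with the y = 0 ground plane.
    // Returns false if the ray never hits the plane in front of the camera.
    bool groundPlaneUV(vec3 camera_pos, vec3 ray_dir, out vec2 uv) {
        if (abs(ray_dir.y) < 1e-6) return false;  // ray parallel to plane
        float t = -camera_pos.y / ray_dir.y;      // distance along the ray
        if (t < 0.0) return false;                // plane is behind camera
        vec3 hit = camera_pos + t * ray_dir;
        uv = hit.xz;                              // world x,z as grid UV
        return true;
    }

For the skybox variant you'd instead convert the ray direction to (latitude, longitude) and feed those in as the UV.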
This shader gives a very clean looking grid at almost any viewing angle and distance. If your world space units are meters, it can be zoomed from nanometers to gigameters while staying crisp and clean. Unfortunately, floating point issues take over beyond that point, and there are some pixel artifacts when the camera is very close to the ground plane and the viewing angle is extreme. This could be fixed by clamping the `log_width` variable, but I haven't bothered, as I don't work in nanometers in computer graphics projects. There's also some flickering at the horizon line, which I've solved with a trivial fade-out factor (not included in the source code below).
As shown below, it produces a grid with a subdivision factor of 10, like millimeter paper: primary grid in centimeters, secondary grid in millimeters. The subdivision can be changed by adjusting the `base` and `N` variables. See the comments for explanation.
Here's the thing in all its glory. Apologies, I don't have a screenshot hosted anywhere I could share. Please let me know if you try this out in your projects.
    float grid(vec2 uv, float linewidth) {
        vec2 width = linewidth * fwidth(uv);
        vec2 grid = abs(fract(uv - 0.5) - 0.5) / width;
        float line = min(1.0, min(grid.x, grid.y));
        return 1.0 - line;
    }

    float decade_grid(vec2 uv) {
        // grid subdivision factor (logarithm base)
        float base = 10.0;
        // grid density, primary grid is approximately base^N pixels wide
        // with N = 3, primary grid is 100 pixels, secondary grid 10 px
        float N = 3.0; // 3.0 is the densest grid without moire patterns
        // approximate grid cell size using screen space uv partial derivatives
        vec2 width = fwidth(uv);
        //vec2 width = vec2(length(dFdx(uv)), length(dFdy(uv)));
        float w = max(width.x, width.y);
        //float w = length(width.xy);
        // take logarithm of grid cell size to find zoom factor
        float log_width = log2(w) / log2(base); // logarithm change of base
        // round down to find grid zoom factor (power of ten)
        float exponent = -floor(log_width) - N;
        float blend = 1.0 - fract(log_width); // blend between powers of ten
        // primary grid with wider lines
        float grid1 = grid(uv * pow(base, exponent), 1.0 + blend);
        // secondary grid with narrow lines
        float grid2 = grid(uv * pow(base, exponent + 1.0), 1.0);
        // mix primary and secondary grid with linear interpolation
        return mix(grid1, grid2, blend);
    }
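A hypothetical usage sketch (not part of the original code): called from a fragment shader where `v_uv` holds world-space grid coordinates, e.g. the x,z of a ground-plane hit, in world units.

    #version 330

    in vec2 v_uv;        // grid-space UV from the vertex shader or a ray hit
    out vec4 fragColor;

    // ... grid() and decade_grid() from above go here ...

    void main() {
        float g = decade_grid(v_uv);
        // blend dark grid lines over a paper-like background color
        vec3 color = mix(vec3(0.95), vec3(0.2, 0.3, 0.5), g);
        fragColor = vec4(color, 1.0);
    }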
> Can this be adapted to work with classic line segments made from quads or is it just for drawing grids in screen space?
It can be applied like a texture in the fragment shader using UV coordinates and their partial derivatives.
You can't really use this technique with line segments because the grid density up close to the camera is higher than far away. You'd need to do a lot of math to figure out where to draw your line segments. If you calculate all the line segment geometry, a fancy shader isn't required any more.
Aha, I see. Line segments from quads suffer from all the issues mentioned in the article as well, especially in the distance, but I didn't quite understand in what context the grid shader is applicable. Thank you for clearing this up.