This was painful to read. It becomes better and simpler with a basic signals & systems background:
- His breaking the image up into grids was a poor man's convolution. Render each letter. Render the image. Dot product.
- His "contrast" setting didn't really work. It was meant to emulate a sharpen filter; the fix is to convolve with a kernel sized to the letter. He operated over the wrong dimensions (intensity rather than X-Y).
- Dithering should be done with something like Floyd-Steinberg: you spill quantization error over to adjacent pixels.
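A minimal sketch of what the "render each letter, dot product" suggestion might look like. The tiny hand-made glyph bitmaps and tile size here are made-up stand-ins; a real version would rasterize actual font glyphs at the cell size:

```python
import numpy as np

# Illustrative stand-in glyph bitmaps (4 rows x 3 cols), NOT a real font.
GLYPHS = {
    " ": np.zeros((4, 3)),
    ".": np.array([[0, 0, 0], [0, 0, 0], [0, 0, 0], [0, 1, 0]], float),
    "|": np.array([[0, 1, 0], [0, 1, 0], [0, 1, 0], [0, 1, 0]], float),
    "_": np.array([[0, 0, 0], [0, 0, 0], [0, 0, 0], [1, 1, 1]], float),
    "#": np.ones((4, 3)),
}

def best_glyph(tile):
    """Pick the glyph whose zero-mean bitmap correlates best with the tile."""
    def score(g):
        gz = g - g.mean()            # zero-mean so overall brightness
        tz = tile - tile.mean()      # doesn't dominate the shape match
        n = np.linalg.norm(gz) * np.linalg.norm(tz)
        return (gz * tz).sum() / n if n else 0.0
    return max(GLYPHS, key=lambda c: score(GLYPHS[c]))

def ascii_render(img):
    """Tile a grayscale image (H % 4 == 0, W % 3 == 0) into glyphs."""
    h, w = img.shape
    return "\n".join(
        "".join(best_glyph(img[r:r + 4, c:c + 3]) for c in range(0, w, 3))
        for r in range(0, h, 4)
    )
```

This is just the correlation idea from the bullet above; a serious version would also weight for overall tile brightness rather than discarding the mean entirely.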
Most of these problems have known solutions, in some cases optimal ones. He reinvented them, perhaps cleverly, but not as well as the standard solutions.
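The Floyd-Steinberg dithering mentioned above, for example, fits in a few lines. This is the classic grayscale variant with the 7/16, 3/16, 5/16, 1/16 weights (function name and `levels` parameter are my own):

```python
import numpy as np

def floyd_steinberg(img, levels=2):
    """Dither a float image in [0, 1] down to `levels` gray levels."""
    out = img.astype(float).copy()
    h, w = out.shape
    q = levels - 1
    for y in range(h):
        for x in range(w):
            old = out[y, x]
            new = round(old * q) / q           # nearest representable level
            out[y, x] = new
            err = old - new                    # spill error to unvisited
            if x + 1 < w:                      # neighbors, right and below
                out[y, x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    out[y + 1, x - 1] += err * 3 / 16
                out[y + 1, x] += err * 5 / 16
                if x + 1 < w:
                    out[y + 1, x + 1] += err * 1 / 16
    return out
```

On a constant mid-gray input this produces the familiar checkerboard-like pattern whose average stays near the original intensity, which is exactly what the per-cell thresholding in the post can't do.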
Bonus:
- Handle the above as a global optimization problem. Feasible with 2026-era CPUs (and even more so, GPUs).
- Unicode :)
Perhaps you're right but I won't believe you until you whip up a live-rendering proof of concept.
It's a bit rude to dismiss somebody's cool work as "painful", with some hypothetical "improvements" that probably wouldn't even work.
It's probably much more exciting to implement stuff like this when you can experiment with your own ideas and figure out the solution from scratch, compared to someone who sees it as a trivial exercise in signal processing yet can't be bothered to implement it.