Antialiasing: To Splat or Not (reedbeta.com)
79 points by mnem on Nov 25, 2014 | 9 comments



For some reason I find the test image that was used to be quite fascinating on its own. Were I given that image in isolation and told "write a program to generate this" I wouldn't have any idea where to start. After consulting the source code I now realize how it was created, and if anything that makes it even neater, that such a simple approach generates such an interesting-looking image.


It's reminiscent of a binary zone plate [1] in a slightly altered Cartesian form. Zone plate images are fairly commonly used for testing image filtering.

[1] http://en.wikipedia.org/wiki/Zone_plate
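For a sense of how simple these test patterns are to produce, here is a minimal sketch of a binary zone plate in numpy (this is an illustrative generator, not the article's code; the `scale` parameter and thresholding choice are assumptions):

```python
import numpy as np

def zone_plate(size=512, scale=0.05):
    """Binary zone plate: concentric rings whose spatial frequency grows
    with distance from the center -- a common test pattern for aliasing."""
    # Coordinates centered on the image
    y, x = np.mgrid[0:size, 0:size] - size / 2.0
    r2 = x * x + y * y
    # sin of the squared radius yields rings of increasing frequency;
    # thresholding at zero makes the pattern binary
    return (np.sin(scale * r2) > 0.0).astype(np.float64)

img = zone_plate(256)
```

Rendered at a finite resolution without filtering, the high-frequency outer rings alias into spurious low-frequency patterns, which is exactly why these images are useful for testing filters.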


Yeah; that's the most interesting part of the article. Everything else is pretty much a dude screwing around with getpixel/setpixel without understanding basic signal processing. Looks like he made the classic log/linear error too.


> Everything else is pretty much a dude screwing around with getpixel/setpixel

That's not very constructive. Can you point out where he did that? For reference, the source code is here: https://gist.github.com/Reedbeta/893b63390160e33ddb3c.

> without understanding basic signal processing.

I got the impression he approached it from a visual-pleasantness point of view, which is more than valid when generating images for people to look at. In that business, if it's fast to compute and looks good to human eyes, it is perfectly acceptable to do something slightly "wrong" from a signal-processing point of view. At least until we have infinite computing resources.

I didn't read the source code, but judging by the article and images, he does appear to understand signal processing and the sampling theorem. He seems to be looking for a better sampler for a scan-line renderer (think Pixar's RenderMan) or a ray tracer (think POV-Ray).

My take would be a per-pixel adaptive sample count driven by the standard deviation within a sample radius somewhat larger than a pixel. Oversimplified: the higher the deviation, the more samples should be taken, until each new sample's contribution falls below some adjustable threshold. In a real ray tracer you would probably want to consider other variables as well, such as the computational cost per sample. Ultimately the problem in visual renderers is how to get the best visual quality out of the computing resources available.
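The idea above can be sketched in a few lines. This is a toy, single-pixel version under assumed names (`adaptive_samples`, the `threshold` on the standard error of the mean), not anyone's production sampler:

```python
import numpy as np

def adaptive_samples(f, px, py, batch=4, max_samples=64, threshold=0.01, rng=None):
    """Sample f(x, y) -> scalar inside pixel (px, py), adding batches of
    jittered samples until the standard error of the mean drops below
    `threshold` or `max_samples` is reached."""
    rng = rng or np.random.default_rng(0)
    samples = []
    while len(samples) < max_samples:
        # Jittered sample positions within the pixel footprint
        xs = px + rng.random(batch)
        ys = py + rng.random(batch)
        samples.extend(f(x, y) for x, y in zip(xs, ys))
        s = np.asarray(samples)
        # Standard error of the mean estimates the remaining uncertainty:
        # high local variance keeps the loop running, smooth regions exit early
        if s.std(ddof=1) / np.sqrt(len(s)) < threshold:
            break
    return float(np.mean(samples))

# A rapidly oscillating integrand forces many samples; a constant exits early.
val = adaptive_samples(lambda x, y: np.sin(x * x + y * y), 10.0, 10.0)
```

A real renderer would also cap cost per pixel and share variance estimates across neighboring pixels, as the comment suggests.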

> Looks like he made the classic log/linear error too.

I can't see any telltale sign of linear processing applied to log-space data in the images themselves; they all look correct. Retina / high-DPI display? Make sure your web browser is not resampling the images linearly in log space. Or worse, your monitor or graphics adapter, in case you're running a non-native resolution.


In general, things are not "wrong" for reasons of ideology, they are "wrong" because they are "suboptimal" or "don't work."


I don't understand the basics of signal processing either. Any tips for good literature/videos about signal processing for image sampling like this?

> Looks like he made the classic log/linear error too.

It's converted from linear space to "gamma space" before writing the image, if that's what you mean:

> img = np.where(img <= 0.0031308, 12.92*img, 1.055*img**(1.0/2.4) - 0.055)
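That quoted line is the standard piecewise sRGB encoding: linear below a small cutoff, a 1/2.4 power curve above it. A self-contained version, assuming a float image with linear values in [0, 1]:

```python
import numpy as np

def linear_to_srgb(img):
    """Piecewise sRGB transfer function: linear segment near black,
    power-curve segment elsewhere (the same expression as the gist line)."""
    img = np.asarray(img, dtype=np.float64)
    return np.where(img <= 0.0031308,
                    12.92 * img,                          # linear toe near zero
                    1.055 * np.power(img, 1.0 / 2.4) - 0.055)

encoded = linear_to_srgb([0.0, 0.001, 0.5, 1.0])
```

Note the linear toe near zero: a pure power curve would have infinite slope at black, so sRGB splices in a linear segment there.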


Try the sample chapter from the Physically Based Rendering book: http://www.pbrt.org/chapters/pbrt_chapter7.pdf


This is quite an interesting question. If you splat, you're effectively sharing some information between neighboring pixels, which is efficient. However, you introduce some variance at each pixel, since you're not perfectly importance-sampling the filter function. So it's a trade-off.
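The sharing described above can be shown in a toy 1-D splatter, where each sample contributes to every pixel within the filter radius, weighted by a tent filter (the function name and the weight-normalized estimator are illustrative assumptions, not the article's exact code):

```python
import numpy as np

def splat_1d(sample_xs, sample_vals, n_pixels, radius=1.0):
    """Accumulate samples into a 1-D strip of pixels with tent-filter
    weights: each sample 'splats' into all pixels within `radius`."""
    acc = np.zeros(n_pixels)
    wsum = np.zeros(n_pixels)
    centers = np.arange(n_pixels) + 0.5
    for x, v in zip(sample_xs, sample_vals):
        # Tent filter weight for every pixel center near this sample
        w = np.maximum(0.0, 1.0 - np.abs(centers - x) / radius)
        acc += w * v
        wsum += w
    # Normalizing by the accumulated weights (rather than the analytic
    # filter integral) is the usual weighted splatting estimator.
    return np.where(wsum > 0, acc / np.maximum(wsum, 1e-12), 0.0)

rng = np.random.default_rng(1)
xs = rng.random(4000) * 8.0   # uniform samples over an 8-pixel strip
vals = xs                     # a simple linear "signal"
pixels = splat_1d(xs, vals, 8)
```

Because every sample lands in several pixels, far fewer samples per pixel are needed for a smooth result, at the cost of the extra per-pixel variance the comment mentions.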


Server is straining. Here's the Coral Cache mirror:

http://www.reedbeta.com.nyud.net/blog/2014/11/15/antialiasin...



