
I would be interested to know how the Magic Kernel compares to cardinal B-splines [1]. The idea in the paper is that by combining an IIR filter with a traditional FIR/convolutional filter, you can get a much better sinc approximation and preserve the high frequencies well. For those interested in this, much of the research that followed has been summarized in a book by Hugues Hoppe [2]. The book also includes some discussion of filters that are commonly used today, like Lanczos.

Also, I'm not sure if I agree with this claim regarding partition of unity:

> This remarkable property, that prevents “beat” artifacts across a resized image, is not shared by any other practical kernel that I am aware of ...

Lanczos is admittedly a strange filter with several undesirable properties, but IIRC cubic polynomial filters like Mitchell-Netravali and Catmull-Rom satisfy partition of unity, and both of them are popular in today's image processing. Even the bilinear filter, which acts as a box or triangle filter depending on whether one is minifying or magnifying, satisfies the partition of unity property. Did I get something wrong?

Edit: the bilinear filter actually does not satisfy partition of unity, because it is scaled down; but a properly scaled box filter or triangle filter does.
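This is easy to check numerically. Here is a quick sketch of my own (not from any of the cited sources): sample the Catmull-Rom and tent kernels on a unit-spaced grid at several phases and verify that the weights always sum to 1.

```python
def catmull_rom(x):
    """Catmull-Rom cubic kernel (the a = -1/2 member of the Keys family)."""
    x = abs(x)
    if x < 1:
        return 1.5 * x**3 - 2.5 * x**2 + 1
    if x < 2:
        return -0.5 * x**3 + 2.5 * x**2 - 4 * x + 2
    return 0.0

def triangle(x):
    """Tent (linear interpolation) kernel."""
    return max(0.0, 1.0 - abs(x))

# For any phase t in [0, 1), the kernel weights at the unit-spaced
# sample offsets t + k should sum to exactly 1 (partition of unity).
for kernel, support in [(catmull_rom, 2), (triangle, 1)]:
    for t in [0.0, 0.125, 0.37, 0.5, 0.9]:
        total = sum(kernel(t + k) for k in range(-support, support + 1))
        assert abs(total - 1.0) < 1e-12, (kernel.__name__, t, total)
```

For the Keys cubics this holds exactly by construction, so the only deviation you see is floating-point roundoff.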

[1] http://bigwww.epfl.ch/publications/blu9903.pdf [2] http://hhoppe.com/proj/filtering/




Yes, I have recently been asked about splines; the Magic Kernels are of course a subset of those, but generated in a specific way. I have not yet had a chance to compare them to all previous interpolation kernels (and there are many).

Yes, the "beat" statement survives from a much earlier version of the page, and is not quite right. I'll fix that.


> I would be interested to know the comparison of the magic kernel to cardinal B-Splines

Well, the observation in the paper

> the rectangular window function can itself be thought of as the nearest neighbor kernel, with Fourier transform sinc f; and the convolution of the rectangular window function with itself—the "tent" function—is simply the linear interpolation kernel, with Fourier transform sinc² f. The first of these has discontinuities at its endpoints, and significant spectral energy outside the first Nyquist zone. The second is continuous everywhere, and has significantly less spectral energy outside the first Nyquist zone, but it still has discontinuous first derivative at its endpoints and midpoint. The third kernel in this fundamental sequence, the Magic Kernel, is continuous and has continuous first derivative everywhere, and has even less spectral energy outside the first Nyquist zone.

corresponds quite precisely to the construction of the uniform cardinal B-splines by repeated convolution of a boxcar filter. The "succession of kernels, each obtained by convolving rect x with itself the corresponding number of times" that Costella describes in his paper is exactly the uniform cardinal B-splines. (The terminology around B-splines is unfortunately confused, with different authors using "spline", "B-spline", and "cardinal B-spline" with different meanings, so I'm doing the best I can here.)
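To illustrate that construction concretely (my own sketch): repeatedly convolving the two-sample boxcar [1, 1] with itself produces the discrete taps of this sequence of kernels, up to normalization by 2ⁿ — nearest neighbor, then the tent, then the Magic Kernel.

```python
def convolve(a, b):
    """Plain discrete convolution of two sequences."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

box = [1, 1]        # two-sample boxcar (nearest neighbor, up to scale)
kernel = [1]
for _ in range(3):
    kernel = convolve(kernel, box)
# After one pass: [1, 1]; two: [1, 2, 1] (tent);
# three: [1, 3, 3, 1] -- the Magic Kernel taps, up to the 1/8 normalization.
print(kernel)
```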

This is also an efficient way to implement the Magic Filter in practice if addition is cheaper than multiplication:

    >>> x = [0, 0, 0, 1, 0, 0, 0, 0, 2, 0, 0, 0, 3, 0, 0, 0, 0, 0]
    >>> for i in range(len(x)-3): x[i+2] += x[i+3]; x[i+1] += x[i+2]; x[i] += x[i+1]
    ... 
    >>> x
    [1, 3, 3, 1, 0, 2, 6, 6, 2, 3, 9, 9, 3, 0, 0, 0, 0, 0]

You can see that the sequence has been convolved with the Magic Kernel. Each of the addition operations computes the two-sample boxcar filter on a sequence of input samples, and the resulting sequence is pipelined to the following addition operator, so the final sequence you get is the desired convolution.

If you want to do this at a different scale, what you have is a Hogenauer filter, or CIC filter, which requires an addition and a subtraction per sample per order: six operations in this case. Commonly CIC filters are paired with a FIR sharpening filter, though that is usually implemented in direct form.
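For concreteness, here is an illustrative sketch of a third-order CIC decimator (my own code, assuming a decimation factor R and differential delay 1, not taken from any reference implementation): each input sample costs one addition per integrator stage, and each retained output sample one subtraction per comb stage.

```python
def cic_decimate(samples, R, order=3):
    """Order-N CIC (Hogenauer) decimator by factor R, differential delay 1."""
    integ = [0] * order        # integrator accumulators (input rate)
    comb_prev = [0] * order    # previous input of each comb stage (output rate)
    out = []
    for n, s in enumerate(samples):
        acc = s
        for i in range(order):         # one addition per stage per input sample
            integ[i] += acc
            acc = integ[i]
        if n % R == R - 1:             # keep every R-th integrator output
            y = acc
            for i in range(order):     # one subtraction per stage per output
                y, comb_prev[i] = y - comb_prev[i], y
            out.append(y)
    return out
```

With order 3 the DC gain is R³, so a constant input of 1s settles to 64 for R = 4; dividing by R³ recovers a unity-gain moving average.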

The multidimensional splines you get by doing this kind of interpolation successively in more than one dimension are commonly called "box splines", because you get them by convolving "boxes" instead of the one-dimensional boxcar function. Convolving a box filter with itself repeatedly is also the normal way to approximate a Gaussian blur in image processing, for example.

If you want to use B-splines of a given degree to approximate ideal interpolation with a sinc kernel as closely as possible with a given support, avoiding any blurring, that's a solvable problem; https://www.cs.tut.fi/~gotchev/DIPII/lecture3.pdf is a set of lecture notes on the topic.

If you're interested in this kind of thing you might enjoy my notes from a couple years ago: https://dercuano.github.io/notes/sparse-kernel-cascade-gabor... https://nbviewer.jupyter.org/url/canonical.org/~kragen/sw/de... https://dercuano.github.io/topics/sparse-filters.html.


Nice — it did look to be a more general class of interpolating functions, and that does indeed look to be the case. Interesting!




