Hacker News

Isn't the magic kernel simply a 2× bilinear downscale filter? For example, if you do 2× bilinear downscaling in TensorFlow via tf.image.resize (as opposed to bilinear downsampling, which would amount to just a box filter), it uses the magic kernel [1,3,3,1].
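To make the claim concrete, here's a minimal sketch (assuming the usual half-pixel-centered coordinate convention, and edge replication at the borders, which is my choice and not something the comment specifies): when downscaling by 2×, the bilinear (tent) filter is stretched by the scale factor, so each output pixel draws on 4 input pixels, and the resulting weights are exactly [1, 3, 3, 1] / 8.

```python
import numpy as np

# Output pixel centers sit halfway between input pixel pairs, so the 4
# contributing input pixels lie at offsets -1.5, -0.5, +0.5, +1.5 (in
# input-pixel units), i.e. 0.75, 0.25, 0.25, 0.75 in output-pixel units.
offsets = np.array([-1.5, -0.5, 0.5, 1.5])
t = np.abs(offsets) / 2.0                 # rescale to output-pixel units
weights = np.clip(1.0 - t, 0.0, None)     # tent (bilinear) kernel
weights /= weights.sum()
print(weights)  # [0.125 0.375 0.375 0.125] == [1, 3, 3, 1] / 8

def downscale2x(signal):
    """2x downscale of a 1-D signal with the [1,3,3,1]/8 kernel,
    replicating edge samples (an assumed boundary choice)."""
    padded = np.pad(signal, 1, mode="edge")
    out = []
    for i in range(0, len(signal) - 1, 2):
        out.append(float(padded[i : i + 4] @ weights))
    return np.array(out)
```

On a linear ramp the interior outputs land exactly on the ramp's midpoints, which is the behavior you'd expect from a properly scaled (even-length) downscaling filter.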

There is an article [0] that points this out, although later it goes on to recommend odd-length filters, which insidiously break many applications where you would want downscaling, by shifting the image by a fraction of a pixel.

[0] https://cbloomrants.blogspot.com/2011/03/03-24-11-image-filt...




I discuss Charles Bloom's 2011 article in some detail on my page. The original "magic" kernel (i.e. [1,3,3,1]) can rightly be categorized as many things. It's the generalization to the continuum case, and the addition of the Sharp step (and, now, the extension to arbitrarily high "generation" 'a'), that makes it useful and more powerful. Bloom's post ten years ago inspired me to play some more with the "original" and figure out at least the "continuum" version.
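For readers who haven't seen the page: the continuum version referred to here is the piecewise-quadratic function described there (it coincides with the quadratic B-spline). A short sketch, assuming that form:

```python
import numpy as np

def magic_kernel(x):
    """Continuum magic kernel (the quadratic B-spline), support (-1.5, 1.5):
       3/4 - x^2          for |x| <= 1/2
       (1/2)(|x| - 3/2)^2 for 1/2 < |x| <= 3/2
       0                  otherwise."""
    ax = np.abs(np.asarray(x, dtype=float))
    return np.where(ax <= 0.5, 0.75 - ax ** 2,
           np.where(ax <= 1.5, 0.5 * (ax - 1.5) ** 2, 0.0))
```

One nice property worth checking numerically: integer translates of this kernel form a partition of unity (they sum to 1 everywhere), so resampling a constant signal reproduces it exactly.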

I recommend you check out the paper http://johncostella.com/magic/mks.pdf for developments since the original 2006 "magic" kernel. :)
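For a rough sense of what the Sharp step does: it's a small post-sharpening filter applied after the magic-kernel resample to undo the kernel's slight blurring. A sketch, with the caveat that the three taps {-1/4, +3/2, -1/4} are my recollection of the 2013 form and the edge handling is my own assumption, not taken from the paper:

```python
import numpy as np

# Assumed taps for the 2013 "Sharp" step: {-1/4, +3/2, -1/4}.
sharp = np.array([-1.0, 6.0, -1.0]) / 4.0
# The taps sum to 1, so flat regions pass through unchanged.

def sharpen(signal):
    """Apply the three-tap Sharp step to a 1-D signal, replicating edge
    samples (an assumed boundary choice)."""
    padded = np.pad(signal, 1, mode="edge")
    return np.convolve(padded, sharp, mode="valid")
```

Because the filter is symmetric and sums to 1, it preserves DC while boosting mid frequencies, which is what compensates for the magic kernel's low-pass roll-off.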





