Using ImageMagick to make sharp web-sized photographs (even.li)
168 points by n_e on July 6, 2013 | 62 comments



Part of the reason rescaled images look dim and blurry is that rescaling software usually assumes gamma 1.0 instead of 2.2: http://www.4p8.com/eric.brasseur/gamma.html

There are many more good essays on image rescaling here: http://entropymine.com/imageworsener/
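
With ImageMagick you can work around this by resizing in linear light (a sketch, assuming ImageMagick 6, where -colorspace RGB means linear RGB and -colorspace sRGB converts back):

    convert photo.jpg -colorspace RGB -resize "900x" -colorspace sRGB photo_s.jpg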


Preview.app in OS X scales the picture correctly. I think any program that uses Cocoa/CoreImage/CoreGraphics etc. to scale images on OS X should get the right result.

However, Safari ignores gamma and produces a flat gray rectangle when scaling down the Dalai Lama image, like Firefox does. I seem to remember that Safari did proper gamma correction in earlier versions, but they were pressured to abandon it because giving the result websites expected was deemed more important.


Another issue is that one needs to use a sharp filter during the resize, like sinc. Then post-sharpening is not required in the first place.
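
For example (photo.jpg is a hypothetical input; Lanczos and Sinc are both built-in ImageMagick filter names):

    convert photo.jpg -filter Lanczos -resize "900x" photo_s.jpg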


It's because downscaling is technically a digital filter, so it can smooth the signal and cause ringing artifacts.


Dissecting a bad oneliner:

    ls *.jpg|while read i;do gm convert $i -resize "900x" -unsharp 2x0.5+0.5+0 -quality 98 `basename $i .jpg`_s.jpg;done
The part I take issue with is the unnecessary invocation of `ls`. The shell does the glob expansion. A `for` loop is better suited for this. Also, if using bash,* we can get rid of basename.

    for file in *.jpg; do gm convert $file -resize "900x" -unsharp 2x0.5+0.5+0 -quality 98 ${file%.jpg}_s.jpg; done
* zsh fans, I'm sure it works there too.


sysadmin...sense...tingling...

Get in the habit of putting $file (and, in this case, ${file%.jpg}_s.jpg) in double quotes. Your one-liner (and theirs) will blow up spectacularly if there's whitespace in a filename:

    bash-3.2$ for i in 'lots       of spaces'.jpg '1 2'.jpg; do printf 'arg = %s\n' ${i%.jpg}_s.jpg; done
    arg = lots
    arg = of
    arg = spaces_s.jpg
    arg = 1
    arg = 2_s.jpg

    bash-3.2$ for i in 'lots       of spaces'.jpg '1 2'.jpg; do printf 'arg = %s\n' "${i%.jpg}_s.jpg"; done
    arg = lots       of spaces_s.jpg
    arg = 1 2_s.jpg
EDIT: It's also worth noting that, by default, zsh doesn't split on whitespace in parameter expansion like sh, bash, etc. do. ("man zshexpn" and search for "SH_WORD_SPLIT")


As long as we're bashing, it could easily be written loopless like so:

    basename -s .jpg -a *.jpg |
      xargs -I{} convert "{}.jpg" \
       -resize "900x" -unsharp 2x0.5+0.5+0 -quality 98 "{}_s.jpg"
EDIT: weird line breaking is to prevent sidescroll box


You could also easily parallelize this with xargs' -P option.
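
For example, four jobs at a time (the count is arbitrary; tune it to your core count):

    basename -s .jpg -a *.jpg |
      xargs -P 4 -I{} convert "{}.jpg" \
       -resize "900x" -unsharp 2x0.5+0.5+0 -quality 98 "{}_s.jpg"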


mod_pagespeed (and ngx_pagespeed) can automate this for all your images. If you have inline width/height or inline style dimensions, it will do the resampling on your behalf: https://developers.google.com/speed/pagespeed/module/filter-...

The advantages of having correctly sized imagery are immense; in short: something like 70% smaller total byte payload for mobile, a huge reduction in image decode & resize cost, better scroll performance, and faster relayouts when the orientation or browser window changes.

(And I do think browsers themselves should resample scaled images like this to achieve equal quality, but your users will benefit way more if assets are delivered well to begin with.)
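
For reference, turning it on looks something like this in nginx (a sketch, assuming ngx_pagespeed is compiled in; resize_images only fires when the img tag declares dimensions):

    pagespeed on;
    pagespeed EnableFilters resize_images;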


I think you missed the point: the filters should be applied before the pictures are distributed, rather than letting the browser do it. The reason to resize and sharpen yourself is that browsers do a poor job of resizing; doing it yourself will result in a better-looking photograph.

This shouldn't be automated by a plugin AT ALL; it should be done based on the type of image (icons, clipart, logos, etc. would make poor subjects).


where are the resizes stored when this directive is used?

   > pagespeed EnableFilters resize_images;


Probably /var/mod_pagespeed/cache, though I don't have it installed right now to check.


My solution: https://github.com/vladstudio/Vladstudio-smart-resize-Bash-s...

The biggest problem of scaling down an image is finding the right settings for resampling and sharpening. After many experiments, I found it impossible to achieve good results by simply running convert with a single line of arguments. So I came up with this script, which basically does the following:

* configures -interpolate bicubic -filter Lagrange;
* resizes source image to 80%;
* applies -unsharp 0.44x0.44+0.44+0.008;
* repeats steps 2 & 3 until target size is reached.
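
A rough sketch of that loop in shell (filenames and target width here are hypothetical; the actual script at the link handles more cases):

    src=photo.jpg; target=900
    cp "$src" work.jpg
    w=$(identify -format '%w' work.jpg)
    # keep shrinking by 80% while another pass still stays above the target
    while [ $((w * 80 / 100)) -gt "$target" ]; do
      convert work.jpg -interpolate bicubic -filter Lagrange -resize 80% \
        -unsharp 0.44x0.44+0.44+0.008 work.jpg
      w=$(identify -format '%w' work.jpg)
    done
    # final pass lands exactly on the target width
    convert work.jpg -interpolate bicubic -filter Lagrange -resize "${target}x" \
      -unsharp 0.44x0.44+0.44+0.008 work.jpg
    mv work.jpg "${src%.jpg}_s.jpg"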


This looks interesting, do you have any sample images for comparison?


This is fine if you have a limited set of images that you handpick and optimize.

Do you have any suggestions when there are thousands of images and all this needs to be automated?


Use GNU parallel: https://www.gnu.org/software/parallel/

It can distribute the work across the available CPU cores, and it can even distribute the work across different machines using SSH: https://www.gnu.org/software/parallel/man.html#example__dist...
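
For example, to spread the article's command across all cores ({.} is parallel's placeholder for the input with its extension removed):

    parallel convert {} -resize "900x" -unsharp 2x0.5+0.5+0 \
        -quality 98 {.}_s.jpg ::: *.jpg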


He has an example at his github link where he uses a simple bash for loop to do many images at once.


but it does seem inefficient if you often have lots of images to process


how so?


Just for the record, the "unsharp mask" filter doesn't really sharpen; it increases edge contrast, which makes the image seem sharper. It is, however, something you should only apply to the final version of an image, after all resizing has been done.


Sharpening improves "acutance": your perception of boundaries. https://en.wikipedia.org/wiki/Acutance

Unsharp masking is mostly aesthetic. For an image like the example, a mountainside, it gives you a feeling of crisp details, but there isn't any more data there. It looks great on a landscape, but it can be disastrous on a portrait of a person (pores and stubble will be highlighted, usually in an unpleasant way).

(EDIT: incorrect assertion about unsharp masking before or after scaling removed)

Sometimes it's better to unsharp-mask it to a degree that looks slightly oversharpened at a large size, but looks great when reduced - especially for very small images, like avatars or other icons.


> I have found that unsharp mask is best applied before you scale the image. Shrinking the image first will eliminate some of the details that you wanted to enhance in the first place.

Unsharp mask doesn't work that way: you get the same results applying it before or after scaling, as long as you adjust the radius accordingly.

The reason I suggested applying it after scaling is because aesthetically pleasing settings don't change much as the image size changes.
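
Roughly, for a 50% downscale (values here are hypothetical; -unsharp takes radius x sigma, and both halve along with the image):

    convert photo.jpg -unsharp 2x1 -resize 50% before.jpg
    convert photo.jpg -resize 50% -unsharp 1x0.5 after.jpg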


I tested this with a couple of images and you are right. Except for a few errant pixels here and there the images are precisely identical. Sorry for the misinformation.


Tangential to the point being made: 98 seems way too high a quality setting for an image shown on the web; 85 seems a decent tradeoff. Re-encoding at 98 might actually increase the file size.


I agree: that jumped out at me enough that I came here to make the same comment. I can hardly think of a case where I'd want to use a 98 quality setting on a JPEG. Maybe it would be a good choice if I wanted to preserve an almost-pristine copy of an original, but I'd compare the resulting file size to a lossless PNG first. Every quality step from 100 down to 95 gives a huge benefit in file size, and going from 95 to 90 almost always seems like a hefty savings for imperceptible differences, too. I usually save web images at quality settings between 70 and 90, and I've never felt like I'm losing by it.
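
An easy way to see the trade-off for yourself (photo.jpg is a hypothetical input):

    # re-encode at several quality settings and compare sizes
    for q in 70 80 85 90 95 98; do
      convert photo.jpg -quality "$q" "photo_q${q}.jpg"
    done
    ls -lS photo_q*.jpg   # sorted by size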


This also jumped out at me. My company (http://www.firebox.com) is built on having amazing-looking images for products. No one could see any difference in quality between 87 and 95, but it saved us roughly 50% in file size. For what it's worth, we also spent a lot of time A/B testing 87 vs 95 with our users, and there was no conclusive difference.


We use GraphicsMagick at 92, as dipping into the 80s tends to introduce visible noise on something like 1 in 50 images. It's very annoying to ship that extra bandwidth.

Interestingly, we've done WebP support, and while the files are smaller across the board, visual quality deteriorates really quickly once you start dropping that quality value, even in small increments.


You used ImageMagick? The results are impressive.

(Awesome stuff! That looks like a fun company to work for or found.)


True, my app processes around 100,000 images daily and uses a quality of 85. Never had a problem. But I do keep an archive of the original image, just in case.


The sharpened version looks over-sharp to me. You may find that a level or curves adjustment is more what this image needs to pop a little.


Yup, I over-sharpened it a little so the effect is more obvious.

Though the amount of sharpening can depend on the context: photographers tend to be annoyed by over-sharp images, while on a marketing website it might be a good idea to make the pictures pop.


Trivia: The example image is taken at the Col de l'Iseran which, at 2770 metres, is the highest tarmacked road pass in Europe.


ImageOptim has been my best buddy; I've been getting very agreeable results with it. It's just for compression, but the end results don't seem to need sharpening. Using ImageMagick this way might be best for special cases... http://imageoptim.com/


ImageOptim is not just for compression; it also strips useless metadata, optimizes color palettes, and applies other tricks.

For optimal images, you could do something like this:

1) Resize the image (and apply the trick from this article), using a proper quality setting (anywhere between 85 and 95)

2) Run it through ImageOptim

If you want a commandline version of ImageOptim, I have had good results with https://github.com/toy/image_optim
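
Usage is straightforward (a sketch, assuming a working Ruby; image_optim shells out to external optimizers like jpegoptim and optipng, which must be installed separately):

    gem install image_optim
    image_optim *.jpg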


ah.. right on. cheers!


Has anyone played with / tested this with regard to the responsive image hack/trick by the Filament Group? http://filamentgroup.com/lab/rwd_img_compression/


I've actually started using this method, and it seems to work pretty well. It reduces file sizes significantly and yet the quality is fairly good. If you pay attention, you can see some quality loss/artifacts, though it's not bad.

An example:

http://www.andrewmunsell.com/blog/

Scroll down to the "Now is the Future" blog post. The cover image of the Seattle skyline is 1700x666 and ~65kb (though, it's been recompressed into WebP by mod_pagespeed if your browser supports it). The JPG (before WebP conversion) is ~88kb.

To see it without the recompression, turn mod_pagespeed off and look at the image on the page:

http://www.andrewmunsell.com/blog/?ModPagespeed=off


For my games, I found that the best thing when scaling is to use IM's Lanczos filter.

Granted, my games use high-res, hand-drawn, vector-ish art (not pixel art, nor photo- or paint-style art), so I dunno if this is applicable to photos.


ImageMagick already uses a Lanczos resampling filter when downsampling. Lanczos inherently preserves sharp transient data, e.g. sharp edges in images, but it looks like the author wanted something more.


I think the script to apply it to a folder could be clearer with:

    for image in *.jpg ; do gm convert $image -resize "900x" -unsharp 2x0.5+0.5+0 -quality 98 `basename $image .jpg`_s.jpg ; done


The script on the linked page deals with filenames with spaces while that for loop does not.

Neither of them deals with filenames with \n in them, e.g. 'pretty

sunset.jpg'

For that, you'll want to run find:

    find . -name '*.jpg' -exec sh -c \
      'convert "$1" -resize "900x" -unsharp 2x0.5+0.5+0 -quality 98 "${1%.jpg}_s.jpg"' _ {} \;


ImageMagick/GraphicsMagick comes with a batch tool called mogrify. No need for scripting.
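
For example (this is the ImageMagick form; gm mogrify's flags differ slightly):

    # mogrify overwrites files in place by default;
    # -path writes results to a separate directory instead
    mkdir -p resized
    mogrify -path resized -resize "900x" -unsharp 2x0.5+0.5+0 -quality 98 *.jpg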


In my experience, scaling down an image makes it look sharper. A blurry image will often look perfectly fine when sufficiently scaled down; which is not surprising, because a blur with a five-pixel radius at camera size can easily become a fraction of a pixel at web size. Similarly, scaling down counteracts camera noise, because each output pixel averages out the noise between several different sensor pixels.

Of course, you can always make them even sharper with filters if you want.


Reduce the quality quotient (severely, like .98 -> .3) and don't cut the resolution as far, e.g. keep 2x the pixels.

Much better appearance for the same file size.
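
Something like this (hypothetical sizes; the browser scales the 2x version down to the same display width):

    convert photo.jpg -resize "1800x" -quality 30 photo_2x.jpg
    convert photo.jpg -resize "900x" -quality 98 photo_1x.jpg
    ls -l photo_1x.jpg photo_2x.jpg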


It could also be useful to downscale images using the 'high contrast downscale' filter (as we call it in our lab; maybe this isn't the common name). For each set of pixels that becomes one, you compute the min, max, and mean, and select whichever of {min, max} is closer to the mean.

Unfortunately, I don't have any examples handy.


One problem I recently encountered is that some images refused to load in IE8. It turns out that IE does not support CMYK color space images, and the image appeared significantly different in Chrome and Firefox. This was quite surprising, since I had assumed that JPEG was a standard format and would be supported everywhere.


I run into this same issue when creating previews from print comps. You should be able to specify a colorspace to resolve that: http://blog.rodneyrehm.de/archives/4-CMYK-Images-And-Browser...
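
Something like this should do it (the profile filenames here are placeholders; use whatever CMYK profile the image was authored with):

    # naive conversion -- works, but can shift colors noticeably
    convert cmyk.jpg -colorspace sRGB rgb.jpg
    # better: convert via ICC profiles for a closer color match
    convert cmyk.jpg -profile USWebCoatedSWOP.icc -profile sRGB.icc rgb.jpg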


Yeah, but changing the color space to RGB also significantly alters the way it appears. Then I have users who complain that this is not what they uploaded, and it's a genuine complaint.

I was going to suggest that the real problem is people using IE8, but I guess that wouldn't go over too well.


I think before and after shots would have been helpful.


Although there technically are, the after shot is not available for those of us on mobile.

"If you hover the mouse on the image, you'll see the sharpened version."


I have noticed that sharpening images makes them look worse on high-resolution displays like the iPhone 4+ or Retina MacBook.


Is this to be regarded as image quality preservation or a photo manipulation that increases sharpness "artificially"?


The latter.


It'd be great if you put two in this post so we can see the difference without having to run all of the commands first.


Apparently if you mouse over the image, the sharpened image overlays the original (no, I didn't pick that up either; it was mentioned in another comment :)


But it totally breaks when you can't hover over the image... like on mobile


Flickr does this, don't they? I always wondered why images on Flickr look sharper.


Is this tool doing anything other than a deconvolution?


I would guess it doesn't even do deconvolution, but just adds a simple high-pass filter. That would explain the parameter choices (sigma is the standard deviation).


madVR uses ImageMagick's jinc filter with anti-ringing to upscale video. Looks pretty good. :)


Really, who has time to do all this, image by image? I use Aperture, which is OK.


Read the bottom of the article:

>How do I resize a whole folder of images? [...]



