ImageOptim is fantastic. You can use it via CLI, so I've added it to a few different archiving / publishing workflows in the past. One thing that some people don't realize is that you can run it multiple times on the same (PNG) images and get better results. Each filter depends on the output of the previous one, so a second pass can still find savings: filter 3 might shave some bytes or rearrange the data in a way that filter 1 can now take advantage of.
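For example (a minimal sketch; the binary path assumes the macOS app bundle and the glob is illustrative):

# Two passes over the same files; the second pass can still find
# savings because each filter changes what the next one sees
for pass in 1 2; do
  /Applications/ImageOptim.app/Contents/MacOS/ImageOptim ./assets/*.png
done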
The other JPG filters are lossless, but be aware that Guetzli is lossy!
I was impressed with just how much it could "losslessly" compress some massive JPGs, until I did a visual diff. I can't see the difference by eye (the deltas are on the order of single bits), but it's not the 1:1 lossless result I expected.
zopflipng typically beats pngcrush and optipng (on Linux at least), but by default it drops auxiliary PNG chunks [0], which can result in browsers (and other applications) using a different color space, making the resulting images look more washed out than the original. To prevent this you need to explicitly pass --keepchunks=cHRM,gAMA,pHYs,iCCP,sRGB,oFFs,sTER to zopflipng.
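So the safe invocation looks something like this (file names illustrative):

# keep the colour-management chunks so browsers render the same
# colours as the original; zopflipng strips them by default
zopflipng --keepchunks=cHRM,gAMA,pHYs,iCCP,sRGB,oFFs,sTER in.png out.png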
cwebp seems to have a similar issue when starting with png files. Sucks that color space support is still so inconsistent.
Unfortunately zopflipng (and most other tools) has no APNG support and keeps only the first frame :|
Personally, I use `oxipng` if I want lossless compression. However, most of the time I use `pngquant` instead, since it gives significant size reduction even at `99%` (I can't even distinguish between the original and reduced image).
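Roughly this (a sketch, assuming current oxipng/pngquant flags; the quality range is illustrative):

# lossless: oxipng at a higher optimization level, stripping only
# metadata that is safe to remove
oxipng -o 4 --strip safe image.png

# lossy: pngquant overwrites in place and skips the file if it
# can't meet the requested quality range
pngquant --quality=99-100 --ext .png --force image.png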
It seems to depend somewhat on the input image. Software like ImageOptim [0] brute-forces the best compression by trying multiple different compressors/modes and picking the best result, and there is rarely a consistent winner.
That said, the best solution today is probably to just use a newer image format like WebP or (soon) AVIF if you need to meaningfully reduce image sizes.
cjxl also has a near-perceptually-lossless mode (-d 1) that actually works, and unlike cwebp it does not ignore color space chunks when converting from PNG, so the result really does look the same.
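That is (input/output names illustrative):

# Butteraugli distance 1 is roughly "visually lossless";
# -d 0 would be mathematically lossless
cjxl -d 1 input.png output.jxl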
But browser support is still disabled by default and AFAIK animation support is still missing in both Chrome and Firefox. This was/is a problem with webp, where animated support came later without any new MIME type, so you can't use <picture> to fall back to gif / apng when it isn't supported - hopefully this will be handled better with jxl.
It can give a significant reduction sometimes, because the input is sometimes "uncompressed" (which PNG supports), basically just like BMP.
In general, I don't find optimizing PNGs beyond the default compression level worth it for typical personal use. But I can see why these tools are useful if you're hosting things.
Yeah, I'm using the lossy compression for blog posts.
And, I was comparing `pngquant` to `oxipng` sizes (not the default png output I get after creating a poster). For example, 70KB with oxipng changes to 32KB with pngquant at 99% quality level. This depends on the image of course, but most of the time I do see significant savings.
Disclaimer: There may be newer tools with better support for Linux or Windows, but for macOS users, the venerable "ImageOptim-CLI" project ^1 remains a powerful option.
It wraps ImageOptim, ImageAlpha, and JPEGmini in a single executable, and is able to produce visually indistinguishable image assets with a 60-80% reduction in byte count for nearly any corpus of unoptimized files. I've used it to great effect as a webperf consultant. Enjoy! :)
Image formats are open to all platforms, so what makes this a tool for macOS users only? If it uses proprietary Apple frameworks, for example, how can we verify that your claims are true?
It wraps and drives 3 separate macOS desktop apps which are unfortunately not all cross-platform. So without macOS, you can't directly verify anything about this tool.
If you're looking for a GUI for most of these on Linux, you can use Trimage[1] which I created a long long time ago, but it still works. It runs images through a bunch of optimizers, but only compresses losslessly.
JPEG compression has multiple steps, only one of which is inherently lossy (quantization of DCT coefficients). The last step is Huffman coding of the coefficients, and this data can be losslessly rearranged and recompressed.
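jpegtran, for example, can redo just that last step (a sketch; note that -copy none also drops metadata, which doesn't affect pixels):

# re-encode the Huffman tables (and optionally make the file
# progressive) without touching the quantized DCT coefficients,
# so the decoded pixels stay bit-identical
jpegtran -optimize -progressive -copy none -outfile out.jpg in.jpg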
I’ve been using Optimage for a while (a native Mac app that I believe wraps some of these tools, if I’m not mistaken) and I’ve been impressed by how much it can compress videos. A few times I’ve dropped in “animated gifs” (which were actually MP4s) and it’ll cut a few MB file down to a couple hundred KB.
I remember trying (and giving up after like an hour waiting for it to complete) some Google thingy a few years back to optimize JPEGs. Both then and while looking at this I realize that the best solution is to use a newer image format.
To everyone who is interested, here is part of my .shrivel.json, summarized as the command line tools and parameters I use to shrink different kinds of image formats (shrivel [1] is a task runner for shrinking images using command line tools that I recently built for my blog [2], unfortunately without a release yet). In short (the commands are sketched below the list):
- use vipsthumbnail instead of ImageMagick (way faster)
- use resized lossless png images as the source for avifenc, since it does not support resizing atm [3]
- use webp or avif images wherever possible
- use avifenc with fine-grained params (I use Docker, because avifenc did not compile on my server)
- use jpegoptim to optimize jpegs in place
- use pngquant to optimize pngs in place ("--ext .png --force")
- for gifs I would use gifsicle, but I don't have gifs
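A sketch of that pipeline as plain shell (file names, sizes, and quality values are illustrative, not my exact config):

# 1. resize with vipsthumbnail (much faster than ImageMagick)
vipsthumbnail source.png --size 235x -o resized.png

# 2. encode webp and avif from the resized lossless png
cwebp -q 80 resized.png -o resized.webp
avifenc --min 20 --max 30 resized.png resized.avif

# 3. optimize jpegs / pngs in place
jpegoptim --strip-all resized.jpg
pngquant --ext .png --force resized.png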
Then remove any avif files that are larger than the corresponding webp files (this can happen at smaller dimensions). And here is my corresponding HTML snippet:
<!-- wrapper element -->
<picture>
<!-- avif srcset: 1x, 2x, 3x so browsers auto-select higher quality images for higher screen resolutions -->
<source srcset="/img/articles/iphone-3566282_235.avif 1x, /img/articles/iphone-3566282_235@2x.avif 2x, /img/articles/iphone-3566282_235@3x.avif 3x" type="image/avif">
<!-- webp srcset -->
<source srcset="/img/articles/iphone-3566282_235.webp 1x, /img/articles/iphone-3566282_235@2x.webp 2x, /img/articles/iphone-3566282_235@3x.webp 3x" type="image/webp">
<!-- jpeg also as srcset to provide screen resolution based images -->
<source srcset="/img/articles/iphone-3566282_235.jpg 1x, /img/articles/iphone-3566282_235@2x.jpg 2x, /img/articles/iphone-3566282_235@3x.jpg 3x" type="image/jpeg">
<!-- fallback image with loading=lazy, to load images only when visible -->
<img src="/img/articles/iphone-3566282_235.jpg" alt="Access and recover files from an iPhone on Linux" title="Access and recover files from an iPhone on Linux" loading="lazy" class="size-235 raster ext-jpg" width="235" height="129">
</picture>
See my comment above; that might help explain it. I don't bother with srcsets for different resolutions and file types; I just assume Retina and always serve a well-optimised webp from a different cookieless domain.
I did try AVIF, but 60% of my traffic is iOS and those iPhones don't read AVIF, or didn't the last time I checked. Like yourself I was struggling away with the compile, but then I wondered why I was bothering when I had something good already.
For VIPS I use PHP. My index.php expects params of width and height in a query string. This sits behind an Nginx proxy that caches based on query string and accept headers. This is the origin server for a CDN that respects the headers.
I am glad to see you are doing the colour profiles as sRGB, as I do that too. Look into the trick of serving webp when another format is requested; it works great. You can then simplify the HTML and save picture elements for more fun things such as responsive images.
I have a thumbnail resizer that varies the output on the basis of the accept header value, to invariably serve webp instead of JPG or PNG.
I did not get great results from AVIF but I am good with webp.
The trick to it is that if a JPG is requested then you can serve whatever format you like so long as the browser can read it. It is the image header that is read, not the file extension.
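You can see this from the client side with curl (hypothetical URL; the point is the Content-Type header, not the extension):

# ask for a .jpg but advertise webp support; the server is free to
# answer with a webp body as long as the Content-Type says so
curl -sI -H 'Accept: image/webp,*/*' https://example.com/thumbs/photo.jpg
# -> e.g. content-type: image/webp (if the origin negotiated webp)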
To get the colours right I do things with the profile so that everything can be done in Adobe tools and it still works out alright on all devices, with sRGB assumed.
I use VIPS to achieve this: for a thumbnail I go straight from the big originals supplied by the artist, use the VIPS resize algorithm, turn off colour subsampling for small thumbnails, and selectively trim the whitespace.
By going from a big JPG to a small webp I avoid the intermediate step of a small JPG that then gets converted. By doing the resize and format conversion in one hit I get better image quality and better compression.
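Roughly this, in one hit (a sketch assuming vipsthumbnail; the size and Q values are illustrative):

# resize and convert in one step: big source JPG straight to a
# small webp, no intermediate small JPG
vipsthumbnail original.jpg --size 320 -o thumb.webp[Q=82]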
I use a commercial CDN in front of my origin server, which has its own cache. This is really fast even on an empty cache, and all the headers are stripped away, with the images served as immutable.
Bandwidth is not always the bottleneck. I go for image quality and serve images at 1.5x pixels to make them always Retina-ish, as most people seem to have 1.5x pixels on their screens these days; for example a 1920 Full HD screen shows up as 1280 in the nerd stats.
If a browser has the data saving flag on then I respect that and crank up the compression. The images look fine but they are 20% the size of the JPG.
If someone right clicks and saves an image it actually goes back to the server to get the JPG rather than a webp.
I make the JPGs special too by using the Mozilla JPEG encoder, the one Mozilla wrote for Instagram. The file sizes are smaller, but the image quality is the real win: the lookup tables are designed for high-DPI digital screens, not analog CRTs. Hence better.
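MozJPEG ships a drop-in cjpeg, so encoding looks something like this (a sketch; mozjpeg's cjpeg wants a decoded bitmap such as PPM or BMP as input, and the quality value is illustrative):

# mozjpeg's cjpeg: smaller files, with tables tuned for modern screens
cjpeg -quality 80 -outfile photo.jpg photo.ppm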
Even though my JPG optimiser is awesome, I am not interested in legacy formats; I have my Nginx proxy and CDN dialled in. As mentioned, bandwidth is not the problem for people on fast connections, and image quality is a much more interesting goal.
The thing about removing defects in images is that nobody notices. I did some tests with people where one page had the blurry thumbnails and the other had the crisp and vibrant thumbnails. It only works subliminally, with more clicks. The untrained eye does not have the comparison to make, but they might click through that bit more.
Along the way I also learned that there is no such thing as a quality setting on a JPG image; it is not part of the file specification, even though every JPG export dialog offers that quality slider. Really you want the fine control VIPS or MozJPEG gives you over the colour encodings and lookup tables.
I am afraid to say that these image optimisers that fiddle around with headers to shave a few bytes are a waste of time. On your server you need to keep control of your images: keep just the originals you need in the highest quality, with Nginx/VIPS and a CDN serving modern formats that look better and reduce bandwidth, not only through format choice but also by doing the all-in-one resize-and-convert and serving cookieless from a CDN.
It tries zopfli, PNGOUT, OxiPNG, AdvPNG, PNGCrush, JpegOptim, Jpegtran, Guetzli, Gifsicle, SVGO, and svgcleaner, and picks the best result. You can tweak a few other settings too.
It is a great tool that lets you drop in a folder of mixed images and just wait for the result.