Hacker News
Losslessly Optimising Images (rubenerd.com)
95 points by Tomte on Aug 31, 2022 | 42 comments



There is a great mac app called https://imageoptim.com/mac

It tries:

- zopfli
- PNGOUT
- OxiPNG
- AdvPNG
- PNGCrush
- JpegOptim
- Jpegtran
- Guetzli
- Gifsicle
- SVGO
- svgcleaner

And picks the best result. You can tweak a few other settings too.

It is a great tool which lets you drop in a folder of mixed images and just wait for the result.


ImageOptim is fantastic. You can use it via CLI, so I've added it to a few different archiving / publishing workflows in the past. One thing some people don't realize is that you can run it multiple times on the same (PNG) images and get better results. Each filter depends on the output of the previous one, so filter 3 might shave some bytes or rearrange the data in a way that filter 1 can take advantage of on the next run.
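
The multi-pass trick generalises to a simple fixed-point loop: re-run the optimiser until the file stops shrinking. A minimal Python sketch, with zlib recompression standing in for ImageOptim's real filter chain (purely illustrative):

```python
import zlib

def optimise_once(data: bytes) -> bytes:
    # Stand-in for one optimiser pass: decode, then re-encode at max effort.
    return zlib.compress(zlib.decompress(data), 9)

def optimise_until_stable(data: bytes) -> bytes:
    # Keep going while each pass still shaves bytes off.
    while True:
        out = optimise_once(data)
        if len(out) >= len(data):
            return data
        data = out

payload = zlib.compress(b"example " * 100, 1)  # a poorly compressed input
best = optimise_until_stable(payload)
assert zlib.decompress(best) == b"example " * 100  # still lossless
```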

The other JPG filters are lossless, but be aware that Guetzli is lossy!

I was impressed with just how much it could losslessly compress some massive JPGs until I did a visual diff. I can't see the difference, like one-bit deltas sort of thing, but it's not 1:1 lossless as I would expect.


ImageOptim flies on an M1 compared to an Intel Mac. One of my favorite apps, I use it almost daily.


zopflipng typically beats pngcrush and optipng (on Linux at least), but by default it drops ancillary PNG chunks [0], which can result in browsers (and other applications) using a different color space, making the resulting images look more washed out than the original. To prevent this you need to explicitly pass --keepchunks=cHRM,gAMA,pHYs,iCCP,sRGB,oFFs,sTER to zopflipng.
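
If you want to check which ancillary chunks a particular PNG carries before deciding what to keep, the chunk layout is simple enough to walk by hand. A minimal sketch using only the standard library; the sample file is built inline purely for illustration:

```python
import struct
import zlib

def png_chunks(data: bytes) -> list[str]:
    """Return the chunk type names in a PNG byte stream."""
    assert data[:8] == b"\x89PNG\r\n\x1a\n", "not a PNG"
    names, pos = [], 8
    while pos < len(data):
        length, ctype = struct.unpack(">I4s", data[pos:pos + 8])
        names.append(ctype.decode("ascii"))
        pos += 12 + length  # 4 length + 4 type + data + 4 CRC
    return names

def chunk(ctype: bytes, body: bytes) -> bytes:
    # PNG chunks are length + type + data + CRC(type + data).
    crc = zlib.crc32(ctype + body)
    return struct.pack(">I", len(body)) + ctype + body + struct.pack(">I", crc)

# Hand-built 1x1 grayscale PNG with a gAMA chunk of the kind
# zopflipng would drop by default.
ihdr = struct.pack(">IIBBBBB", 1, 1, 8, 0, 0, 0, 0)
idat = zlib.compress(b"\x00\x00")  # filter byte + one gray pixel
png = (b"\x89PNG\r\n\x1a\n"
       + chunk(b"IHDR", ihdr)
       + chunk(b"gAMA", struct.pack(">I", 45455))
       + chunk(b"IDAT", idat)
       + chunk(b"IEND", b""))

print(png_chunks(png))  # → ['IHDR', 'gAMA', 'IDAT', 'IEND']
```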

cwebp seems to have a similar issue when starting with PNG files. Sucks that color space support is still so inconsistent.

Unfortunately zopflipng (and most other tools) don't have APNG support, keeping only the first frame :|

[0] https://github.com/google/zopfli/issues/113


I wonder how `pngcrush` compares to `oxipng` (https://github.com/shssoichiro/oxipng).

Personally, I use `oxipng` if I want lossless compression. However, most of the time I use `pngquant` instead, since it gives significant size reduction even at `99%` (I can't even distinguish between the original and reduced image).

    pngquant --quality=99 --ext=.png --force file.png


It seems to depend somewhat on the input image. Software like ImageOptim[0] brute-forces the best compression by trying multiple different compressors/modes and picking the best result, and there is rarely a consistent winner.

That said, the best solution today is probably to just use a newer image format like WebP or (soon) AVIF if you need to meaningfully reduce image sizes.

[0] https://imageoptim.com/versions.html


JPEG XL beats AVIF, in many cases. See for example: https://twitter.com/jonsneyers/status/1563442356493230080



cjxl also has a near perceptually lossless mode (-d 1) that actually works and unlike cwebp it does not ignore color space chunks when converting from PNG so the result really does look the same.

But browser support is still disabled by default and, AFAIK, animation support is still missing in both Chrome and Firefox. This was/is a problem with WebP, where animation support came later without any new MIME type, so you can't use <picture> to fall back to GIF/APNG when it isn't supported - hopefully this will be handled better with JXL.


It can give a significant reduction sometimes, because the input is sometimes "uncompressed" (which PNG supports), basically just like BMP.

In general, I don't find optimizing PNGs beyond the default compression level worth it for typical personal use. But I can see why it's useful if you're hosting things.


Yeah, I'm using the lossy compression for blog posts.

And, I was comparing `pngquant` to `oxipng` sizes (not the default png output I get after creating a poster). For example, 70KB with oxipng changes to 32KB with pngquant at 99% quality level. This depends on the image of course, but most of the time I do see significant savings.


You’ll want to add ECT to the comparison.

https://github.com/fhanau/Efficient-Compression-Tool


Disclaimer: There may be newer tools with better support for Linux or Windows, but for macOS users, the venerable "ImageOptim-CLI" project ^1 remains a powerful option.

It wraps ImageOptim, ImageAlpha, and JPEGmini in a single executable, and is able to produce visually indistinguishable image assets with a 60-80% reduction in byte count for nearly any corpus of unoptimized files. I've used it to great effect as a webperf consultant. Enjoy! :)

1. https://github.com/JamieMason/ImageOptim-CLI


Image formats are open to all platforms, so what makes this a tool for macOS users only? If it uses proprietary Apple frameworks, for example, how can we verify that your claims are true?


Although the formats are open, an efficient encoder may still be closed.


It wraps and drives 3 separate macOS desktop apps which are unfortunately not all cross-platform. So without macOS, you can't directly verify anything about this tool.


Shameless plug: https://recompressor.com/

Gives you a kb-vs-quality tradeoff graph.

A little over the top perhaps, but when you just need to do a couple of quick images… ;)


If you're looking for a GUI for most of these on Linux, you can use Trimage[1] which I created a long long time ago, but it still works. It runs images through a bunch of optimizers, but only compresses losslessly.

[1] https://trimage.org/


I doubt it is possible to optimize JPGs losslessly, as the linked website's description claims.

JPG is a lossy format even at 100 quality, and every additional save removes some data from the image.


JPEG compression has multiple steps, only one of which is inherently lossy (quantization of DCT coefficients). The last step is Huffman coding of the coefficients, and this data can be losslessly rearranged and recompressed.
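
The same separation exists in any format that layers entropy coding over the quantised data: you can redo the final coding step without touching the decoded output. A loose stdlib analogy using zlib (DEFLATE rather than JPEG's Huffman tables, so purely illustrative):

```python
import zlib

original = b"the quick brown fox " * 200

fast = zlib.compress(original, 1)  # quick, suboptimal encoding
best = zlib.compress(original, 9)  # same data, re-encoded at max effort

# The encoded streams may differ byte-for-byte, but both decode to
# exactly the same input - a "lossless optimisation" in the same
# spirit as jpegtran's Huffman pass.
assert zlib.decompress(fast) == original
assert zlib.decompress(best) == original
```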


> JPG is a lossy format

"Lossless JPEG is a 1993 addition to JPEG standard by the Joint Photographic Experts Group to enable lossless compression."

[1] https://en.wikipedia.org/wiki/Lossless_JPEG


Which no one uses.


> Which no one uses.

If it's in DNG as Wikipedia suggests, I think a lot of people are using it.

Either way, you can't say "JPEG is lossy" when there's almost 30 years of support for lossless coding, even if no-one is using it.


I've never had much luck using jpegoptim. In most cases it's only removing the metadata, which isn't much on high-res files.

Guetzli is nice, if you don't have too many images to recompress (quite slow): https://github.com/google/guetzli


Protip: when leveraging utils to apply optimizations, always segregate lossless operations from lossy, and do lossy first.


My favourite tool for one-off lossless image optimization is FileOptimizer; each asset is processed with several size optimization libraries.

https://nikkhokkho.sourceforge.io/static.php?page=FileOptimi...


I’ve been using Optimage for a while (a native Mac app that I believe wraps some of these tools, if I’m not mistaken) and I’ve been impressed by how much it can compress videos. A few times I’ve dropped in “animated gifs” (which were actually MP4s) and it’ll cut a few MB file down to a couple hundred KB.


This tool optimises blocky JPEGs for a better viewing experience: https://github.com/victorvde/jpeg2png

> jpeg2png finds the smoothest possible picture that encodes to the given JPEG file.


I remember trying (and giving up after like an hour waiting for it to complete) some Google thingy a few years back to optimize JPEGs. Both then and while looking at this I realize that the best solution is to use a newer image format.


Guetzli. Looks like development has stalled? Perhaps for just that reason.

https://github.com/google/guetzli


https://tealpod.com/compressor/

You can live preview the result at different compression levels. It's free; you can pay if you like it.


If you want to crop jpegs losslessly, you can use jpegtran: https://linux.die.net/man/1/jpegtran

On Android, LLcrop


To everyone who is interested, here is a part of my .shrivel.json including command-line tools and parameters to shrink different kinds of image formats (shrivel[1] is a task runner for shrinking images using command-line tools that I recently built for my blog[2]; unfortunately there's no release yet).

  "commands": {
    "svgo": ["/home/sandreas/.npm-packages/bin/svgo", "--multipass", "{source}", "-o", "{destination}"],
    "vips": ["vipsthumbnail", "{source}", "-s", "{size}", "-f", "{destination}[strip]", "--eprofile", "/usr/share/color/icc/colord/sRGB.icc", "--rotate", "--delete"],
    "vips_jpg": ["vipsthumbnail", "{source}", "-s", "{size}", "-f", "{destination}[optimize_coding,strip]", "--eprofile", "/usr/share/color/icc/colord/sRGB.icc", "--rotate", "--delete"],
    "vips_lossless": ["vipsthumbnail", "{source}", "-s", "{size}", "-f", "{destination}[strip]", "--eprofile", "/usr/share/color/icc/colord/sRGB.icc", "--rotate", "--delete"],
    "cwebp": ["cwebp", "-resize", "{size}", "0", "-q", "75", "{source}", "-o", "{destination}", "-quiet"],
    "docker-avifenc": ["docker", "run", "--rm", "-u", "1000:1000", "-v", "/home/sandreas/projects/pilabor/:/mnt", "shrivel", "avifenc", "--jobs", "all", "--min", "0", "--max", "63", "--yuv", "420", "-a", "end-usage=q", "-a", "cq-level=28", "-a", "tune=ssim", "--ignore-icc", "--speed", "0", "{source}", "{mappedDestination}"],
    "jpegoptim": ["jpegoptim", "--max=75", "--all-progressive", "--strip-all", "{destination}"],
    "pngquant": ["pngquant", "--quality", "70-99", "--skip-if-larger",  "--quiet", "--ext", ".png", "--force", "{destination}"]
  },

Summary:

- use svgo for svg compression

- use vipsthumbnail instead of imagick (way faster)

- use resized lossless png images as source for avifenc, since it does not support resize atm[3]

- use cwebp or avif images wherever possible

- use avifenc with fine-grained params (I use Docker, because avifenc did not compile on my server)

- use jpegoptim to optimize jpegs inplace

- use pngquant to optimize pngs inplace ("--ext .png --force")

- for gifs I would use gifsicle, but I don't have gifs

Then remove AVIF files that are larger than their WebP counterparts (this can happen at smaller dimensions). And here is my corresponding HTML snippet:

  <!-- wrapper element -->
  <picture>

    <!-- avif srcset: 1x, 2x, 3x as auto select higher quality images for higher screen resolutions -->
    <source srcset="/img/articles/iphone-3566282_235.avif 1x, /img/articles/iphone-3566282_235@2x.avif 2x, /img/articles/iphone-3566282_235@3x.avif 3x" type="image/avif">

    <!-- webp srcset -->
    <source srcset="/img/articles/iphone-3566282_235.webp 1x, /img/articles/iphone-3566282_235@2x.webp 2x, /img/articles/iphone-3566282_235@3x.webp 3x" type="image/webp">

    <!-- jpeg also as srcset to provide screen resolution based images -->
    <source srcset="/img/articles/iphone-3566282_235.jpg 1x, /img/articles/iphone-3566282_235@2x.jpg 2x, /img/articles/iphone-3566282_235@3x.jpg 3x" type="image/jpeg">

    <!-- fallback image with loading=lazy, to load images only when visible -->
    <img src="/img/articles/iphone-3566282_235.jpg" alt="Access and recover files from an iPhone on Linux" title="Access and recover files from an iPhone on Linux" loading="lazy" class="size-235 raster ext-jpg" width="235" height="129">

  </picture>
[1]: https://github.com/sandreas/shrivel

[2]: https://pilabor.com

[3]: https://github.com/AOMediaCodec/libavif/issues/1034


See my comment above, you might understand it. I don't bother with source sets for different resolutions and file types, I just assume Retina and always serve a well optimised webp from a different cookieless domain.

I did try AVIF but I have 60% iOS and those iPhones don't read AVIF, or didn't until last time I checked. Like yourself I was struggling away with the compile but then I wondered why I was bothering when I had something good already.

For VIPS I use PHP. My index.php expects params of width and height in a query string. This sits behind an Nginx proxy that caches based on query string and accept headers. This is the origin server for a CDN that respects the headers.

I am glad to see you are doing the colour profiles as sRGB, as I do that too. Look into the trick of serving webp when another format is requested; it works great. You can then simplify the HTML and use picture elements for more fun things such as responsive images.


I am a bit surprised that no one has mentioned https://squoosh.app/ yet


Are there similar tools for video formats?


There's also Pngout by Ken Silverman. It is slower than Pngcrush, and produces smaller files.


If you need to losslessly optimize an image up to 1 MB in size for free, you can use this: https://kraken.io/web-interface

If you need a larger file size you'll need to create an account. Just thought I would drop that here since nobody has mentioned it yet.


is this richard hendricks?


No optipng?


I have a thumbnail resizer that varies the output on the basis of the accept header value, to invariably serve webp instead of JPG or PNG.

I did not get great results from AVIF but I am good with webp.

The trick to it is that if a JPG is requested then you can serve whatever format you like so long as the browser can read it. It is the image header that is read, not the file extension.
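
A hedged sketch of that negotiation (pick_format is a hypothetical helper, not from any particular framework): the server inspects the browser's Accept header and upgrades to WebP whenever the browser advertises support, whatever extension the URL carries.

```python
def pick_format(accept_header: str, requested: str = "jpeg") -> str:
    """Choose the served image format from the browser's Accept header.

    `requested` is the format implied by the URL's extension; we may
    silently upgrade it, since browsers sniff the image header rather
    than trusting the file extension.
    """
    advertised = {part.split(";")[0].strip() for part in accept_header.split(",")}
    if "image/webp" in advertised:
        return "webp"
    return requested

# A Chrome-style Accept header that explicitly advertises WebP support:
print(pick_format("image/avif,image/webp,image/apng,*/*;q=0.8"))  # → webp
# A legacy client that only lists generic types gets the fallback:
print(pick_format("image/png,image/*;q=0.8,*/*;q=0.5"))  # → jpeg
```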

To get the colours right I do things with the profile so that everything can be done in Adobe with it working out alright on all devices with sRGB assumed.

I use VIPS to achieve this and in a resize for a thumbnail I go from the big originals supplied by the artist, use the VIPS resize algorithm, turn off colour sub sampling for small thumbnails, selectively trimming the whitespace.

By going from a big JPG to a small webp I am not using an intermediate step of a small JPG that gets converted. By doing the resize and format conversion in one hit I get better image quality and better compression.

I use a commercial CDN to access my origin server which has its own cache. This is really fast on an empty cache, and all the headers are stripped away with the images served as immutable.

Bandwidth is not always the bottleneck. I go for image quality and serve images at 1.5x pixels to make them always Retina-ish, as most people seem to have 1.5x pixels on their screens these days; for example, a 1920 Full HD screen shows up as 1280 in the nerd stats.

If a browser has the data saving flag on then I respect that and crank up the compression. The images look fine but they are 20% the size of the JPG.

If someone right clicks and saves an image it actually goes back to the server to get the JPG rather than a webp.

I make the JPGs special too by using the Mozilla JPEG encoder. This is the one Mozilla wrote for Instagram. The file sizes are smaller; more importantly, the image quality is where it shines: the look-up tables are designed for high-DPI digital screens, not analog CRTs. Hence better.

Even though my JPG optimiser is awesome, I am not interested in legacy formats; I have my nginx proxy and CDN dialled in. As mentioned, bandwidth is not the problem for people on fast connections, and image quality is a much more interesting goal.

The thing about removing defects in images is that nobody notices. I did some tests with people where one page had the blurry thumbnails and the other had the crisp and vibrant thumbnails. It only works subliminally, with more clicks. The untrained eye does not have the comparison to make, but they might click through that bit more.

Along the way I also learned that there is no such thing as a quality setting in a JPG file. It is not part of the file specification, even though every JPG export dialog offers that quality slider. Really you need the fine control VIPS or MozJPEG gives you over the colour encodings and look-up tables.

I am afraid to say that these image optimisers that fiddle around with headers to shave a few bytes are a waste of time. On your server you need to keep control of your images: keep only the originals you need, in the highest quality, with nginx/VIPS and a CDN serving modern formats that look better and reduce bandwidth - not just through image format choice, but also by doing the all-in-one resize and convert and serving cookieless from a CDN.


Which CDN are you using?



