
Every new image codec faces this challenge. PNG + Zstandard would look similar. The ones that succeeded have managed it by piggybacking off a video codec, like https://caniuse.com/avif.


It is possible to polyfill an image format; this was done with FLIF¹². Not that this meant FLIF got the traction required to be used much anywhere outside its own demos…

It is also possible to detect support and provide different formats (so browsers supporting a new format get the benefit of smaller transfers or other features), though this rarely happens because it usually isn't enough of an issue to warrant the extra complication. A rough sketch of the detection side is below the footnotes.

----

[1] Main info: https://flif.info/

[2] Demo with polyfill: https://uprootlabs.github.io/poly-flif/
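
To illustrate the detection approach: probe the browser with a tiny inline sample of the new format, then pick URLs accordingly. This is just a minimal sketch; the data-src-* attributes, the image/flif MIME type, and the truncated base64 probe are placeholders, not anything FLIF or poly-flif actually ships:

    // Ask the browser to decode a tiny sample; resolve true only if it can.
    function supportsFormat(sampleDataUri) {
      return new Promise((resolve) => {
        const img = new Image();
        img.onload = () => resolve(img.width > 0);
        img.onerror = () => resolve(false);
        img.src = sampleDataUri;
      });
    }

    // Swap each image's source based on the probe result.
    supportsFormat("data:image/flif;base64,...").then((supported) => {
      for (const img of document.querySelectorAll("img[data-src-new]")) {
        img.src = supported ? img.dataset.srcNew : img.dataset.srcOld;
      }
    });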


Any polyfill requires JavaScript, which is a dealbreaker for something as critical as image display, IMO.

Would be interesting if you could provide a decoder for <picture> tags to extend the formats they support, but I don't see how you could do that without the browser downloading the PNG/JPEG fallback first, thus negating any bandwidth benefit.


Depending on the site, it might be practical to detect JS on the first request and set a cookie indicating that the new format (and polyfill) can be sent on subsequent requests instead of the more common format. Something like the sketch below.
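
A rough sketch of that flow, with made-up names throughout (the imgfmt cookie, the image/flif content type, the file paths):

    // Client side (inline <script> in the first response): any JS-capable
    // visitor marks itself for future requests:
    //
    //   document.cookie = "imgfmt=flif; path=/; max-age=31536000";

    // Server side (Node): subsequent requests route on that cookie.
    import { createServer } from "node:http";
    import { readFile } from "node:fs/promises";

    createServer(async (req, res) => {
      const wantsFlif = (req.headers.cookie ?? "").includes("imgfmt=flif");
      res.setHeader("Content-Type", wantsFlif ? "image/flif" : "image/png");
      res.setHeader("Vary", "Cookie"); // keep shared caches from mixing variants
      res.end(await readFile(wantsFlif ? "photo.flif" : "photo.png"));
    }).listen(8080);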

Or, for a compiled-to-static site, just use <NOSCRIPT> to let those with JS disabled go off to the version compiled without support for (or need of) such things.


Why would PNG + Zstandard have a harder time than AVIF? In practice, AVIF needs more new code than PNG + Zstandard would.


I'm just guessing, but bumping a library version to include new code versus integrating a separate library might be the differentiating factor.


The zstd library is already included by most major browsers since it is a supported content encoding. That does leave out Safari, I guess, but Safari should probably support zstd as a content encoding too. (I would've preferred it over Brotli, but oh well.)
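
For what it's worth, the transport-level side is just a header check. A minimal Node sketch, assuming photo.png.zst was produced ahead of time with the zstd CLI:

    import { createServer } from "node:http";
    import { readFile } from "node:fs/promises";

    createServer(async (req, res) => {
      // Browsers that ship zstd advertise it in Accept-Encoding.
      const zstdOk = (req.headers["accept-encoding"] ?? "").includes("zstd");
      if (zstdOk) res.setHeader("Content-Encoding", "zstd");
      res.setHeader("Content-Type", "image/png");
      res.setHeader("Vary", "Accept-Encoding");
      res.end(await readFile(zstdOk ? "photo.png.zst" : "photo.png"));
    }).listen(8080);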


Btw, could you 'just' use no compression at this level in the PNG, and let the transport compression handle it?

So on paper (and on disk) your PNG would be larger, but the number of bits transmitted would be almost the same as using Zstd?

EDIT: similarly, your filesystem could handle the on-disk compression.
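
The producer side of that is easy to try. A sketch assuming the pngjs package (any PNG encoder that exposes the zlib level would do):

    // Re-encode a PNG with deflate level 0: the IDAT data is emitted as
    // stored (uncompressed) blocks, leaving the real compression to the
    // transport encoding or the filesystem.
    import { readFileSync, writeFileSync } from "node:fs";
    import { PNG } from "pngjs";

    const png = PNG.sync.read(readFileSync("in.png"));
    writeFileSync("stored.png", PNG.sync.write(png, { deflateLevel: 0 }));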

This might work for something like PNG, but it would work less well for something like JPG, where the compression is much more domain-specific to image data (as far as I am aware).


If there is a particular reason why that wouldn't work, I'm not aware of it. Seems like you would eat a very tiny cost for deflate literal overhead (a few bytes per 65,535 bytes of literal data?) but maybe you would wind up saving a few bytes from also compressing the headers.


5 bytes per 65,535-byte stored block, i.e. 5/65535 ≈ 0.000076, or about 0.0076% overhead.


zstd compresses less well, so you wait a bit longer for your data.



