JPEG XL support has officially been removed from Chromium (googlesource.com)
292 points by jiripospisil on Dec 10, 2022 | 378 comments



Sad day for image formats. JXL really looked like the future, it could have replaced all the image formats I use on my websites. Especially the lossless JPEG recompression mode, for which there is no existing option.

I think I will continue using it with WASM polyfills, in hopes that Google will change their mind once the rest of the industry has switched to JXL.


> in hopes that Google will change their mind once the rest of the industry has switched to JXL.

What rest of the industry? Apple is clearly not interested, neither is Microsoft[0], and XL weirdos have been busting Mozilla's balls for years with little progress.

Even if you manage to bully Mozilla into shipping and default-enabling xl rather than just removing it, Firefox is currently sitting at an incredible 2.5% market share (and I'm saying that as a Firefox user).

[0] well they obtained a patent on rANS so they might be interested in other people using it I guess, that's money in the bank: https://www.theregister.com/2022/02/17/microsoft_ans_patent/


> Apple is clearly not interested

They have made no such statement AFAIK. The fact that they adopted AVIF relatively swiftly, and generally improved their cadence a bit, makes me ready to not be surprised if they ship JPEG XL in a year or two.


> The fact that they adopted AVIF relatively swiftly

Can be linked to Apple being a governing member of AOM.

> and generally improved their cadence a bit, makes me ready to not be surprised if they ship JPEG XL in a year or two.

Feel free to hold your breath, but I don't think that'll be good for your health.

The webkit bug for AVIF was opened in February 2020, an "initial support" patch was proposed in May, support was merged in March 2021. Apple separately shipped the decoder[0] in iOS and macOS in September/October 2022, and enabled the feature for Safari.

The webkit bug for JXL was opened in February 2020. That's about all that's happened to it. Reaction has been:

- Webkit is not super interested in adding a bunch of third-party unsafe code[1]

- Safari support requires OS support, which is non-existent for JXL[2]

[0] WebKit on non-Apple platforms can use WebKit's own decoders; on Apple platforms it uses CoreGraphics, though as seen with HEIF, having CG support doesn't mean Apple enables the format in Safari.

[1] https://lists.webkit.org/pipermail/webkit-dev/2021-May/03184...

[2] https://lists.webkit.org/pipermail/webkit-dev/2021-May/03185...


They’re working on ending the OS-parity nonsense with AVIF. https://webkit.org/blog/13584/release-notes-for-safari-techn...

To be clear I didn’t say I’m holding my breath. Just that it’s not “obviously” impossible.


They're not working on anything; this just means the WebKit previews, or the odd product which embeds WebKit rather than using webviews, can run regardless of OS support.

Apple ain’t going to enable features they don’t support on safari.


That’s Safari’s preview.


No, it is not. It's webkit. Safari is based on webkit, but the webkit project at large does not decide what does and does not go in Safari.

That's why there are regular "Webkit features in Safari X": that something is added to webkit does not guarantee it'll land in Safari, ever.

Hell, it took 18 months and 2 iOS/macOS releases for the AVIF support to go from landing in Webkit to landing in Safari. And that was something Apple wanted.


> Just that it’s not “obviously” impossible.

Neither was webm support, but that didn't stop them from taking several years to implement it (and even then it was half-finished). Apple clearly wants to play the superiority game with their own codecs. They have zero incentive to support any codecs they didn't build, and have been known to abandon standards that they help develop.

Nothing is technically impossible for Apple, but codec support is stuck between an ideological/financial rock and a hard place.


> Apple clearly wants to play the superiority game with their own codecs. They have zero incentive to support any codecs they didn't build, and have been known to abandon standards that they help develop.

And on the other end you have Google who threatens others if they don't bend over and implement Google's codecs in hardware [1]. The codecs space is, and has always been, a game of shitty players.

[1] e.g. https://www.protocol.com/bulletins/av1-android-14-requiremen... and https://www.protocol.com/youtube-tv-roku-issues


AV1 is a more open codec than MOV or AAC, so it's not really any shittier of a play than Apple demanding third-parties use AAC for every iPhone accessory.


Apple dragged their feet on introducing WebM to Safari, and WebM files still often require fiddling to get working on iOS. If Apple isn't jumping at supporting a format, it's generally safe to assume they'll do all they can to not support it unless the pressure is overwhelming.

Which doesn’t seem to be the case with this format.


Apple is a company that cares deeply about usability. Once JPEG XL reaches critical mass in photography or creative work (video mastering, image processing, graphics and similar), Apple will very likely be the first to add support. Their users would much rather have a thumbnail than a stock icon for photographs and graphics. I wouldn't be surprised if JPEG XL support were announced by mid-2023.


That patent is completely irrelevant to JXL. JXL doesn't do anything described in the patent.

And in any case, I think the person you responded to is referring to non-web actors like Adobe, who seem to like JXL.


> That patent is completely irrelevant to JXL. JXL doesn't do anything described in the patent.

In the article I linked, the creator of ANS asserts that Microsoft's patent covers the variant used in JXL.

A Cloudinary lead says it doesn't but they're not a lawyer, and it's not exactly in their interest to say it does.

> And in any case, I think the person you responded to is referring to non-web actors like Adobe, who seem to like JXL.

That is completely irrelevant to Chrome, why would Google change their mind on that basis?


> In the article I linked, the creator of ANS asserts that Microsoft's patent covers the variant used in JXL.

This doesn't necessarily mean JPEG XL is patent-encumbered (the patent is still annoying, but not necessarily this way). It is recommended for ISO standards that relevant patents should be disclosed and available on a non-discriminatory basis [1], and while patent holders can ignore this recommendation they generally have no reason to do so. In the case of JPEG XL only Google [2] and Cloudinary [3] filed patent declarations, so Microsoft is reasonably thought to have no patents relevant to JPEG XL.

[1] https://www.iso.org/iso-standards-and-patents.html

[2] https://isotc.iso.org/livelink/livelink/fetch/2000/2122/3770...

[3] https://isotc.iso.org/livelink/livelink/fetch/2000/2122/3770...


> In the article I linked, the creator of ANS asserts that Microsoft's patent covers the variant used in JXL.

Microsoft is a member of ISO. They're obligated to notify ISO if they have any relevant patents to new standards. The subcommittee that JPEG is part of is even chaired by a Microsoft employee, so it's not like they'd be unaware.

Duda seems to think that Microsoft's patent simply covers rANS in general, but this seems dubious. Not only would it cover a lot more than JXL, it would have shitloads of prior art. The patent seems to describe an improvement of rANS, which isn't used in JXL.

Further still, Microsoft has straight up said that the patent is free to use for any open codec.

> That is completely irrelevant to Chrome, why would Google change their mind on that basis?

WebP adoption was slowed, and it is to some extent widely hated, because for many years it had zero support outside the web. Not only is JXL seeing faster adoption outside the web than WebP did, it's seeing faster adoption than AVIF. That seems relevant to me. And of course, web companies literally can't adopt it until at least one browser starts supporting it. Shopify already serves JXL when you enable the flag. Facebook wants to use it. Yet Chrome's devs claim that there is basically no ecosystem interest. It's pretty ridiculous.


There was some discussion about this rANS patent among data compression experts, but it looked like they don't know what improvement it is supposed to bring:

https://encode.su/threads/3863-RANS-Microsoft-wins-data-enco...


Besides the fact that only some image viewing programs and image editors support JXL... it is worth remembering that AVIF is just the image format used within the AV1 video codec. So, when AV1 becomes more popular and starts really showing up in silicon (which Google is putting huge pressure on silicon vendors for), almost everyone will have a hardware AVIF decoder for free, should they choose to use it. If that happens, any decoding speed advantages that JXL has could be blown apart very quickly. This makes AVIF, in my view, a much stronger long-term play, as how many people are going to implement hardware JXL decoders? AVIF will be much faster, perhaps slightly less efficient, and plenty good enough.


> So, when AV1 becomes more popular and starts really showing up in silicon (which Google is putting huge pressure on silicon vendors for), almost everyone will have a hardware AVIF decoder for free, should they choose to use it. If that happens, any decoding speed advantages that JXL has could be blown apart very quickly.

No one has any plans to decode AVIF still images in hardware. There are two problems. First, the subset of AV1 supported by hardware is much narrower than what is in practice used in AVIF images; such images cannot be decoded by hardware. Second, it doesn't actually get you any speed. Each image would need to be shuffled to the GPU, decoded, then sent back, because browsers aren't really set up to do everything on the GPU. And any non-stateless HW decoder would be problematic, since every single image would need to reconfigure the decoder state, which is slow.


> Each image would need to be shuffled to the GPU, decoded, then sent back because browsers aren't really set up right for doing everything on the GPU.

This isn’t a problem on any mobile SoC because “sent to the GPU” doesn’t mean anything on a unified memory system. (And Safari does use hardware image decoding.)


>The subset of AV1 supported by hardware is much narrower than what is in practice used in AVIF images

The majority of AVIF images are hardware decodeable, even if most viewers don't bother with it. The biggest limitation is that profile 0 decoders are 4:2:0 only, but that describes the majority of Web images.


> Besides the fact that only some image viewing programs and image editors support JXL.

Krita, GIMP, DarkTable, Affinity, and freaking Adobe are just "some" image viewing programs to you?


+ ImageMagick, imlib2, Firefox with a flag…


The ones most people actually use do not support JXL. Namely, for better or worse, Windows Photo Viewer and macOS Preview. Nor does any web browser (except behind experimental flags that nobody knows about).

Adobe supports it... for importing, but you can't export to JXL. You can count plenty of obscure photo formats like JPEG 2000, IFF, PCX, OpenEXR, etc. in that category.


OpenEXR isn’t obscure for folks who work in animation / VFX :) but yeah I agree that JXL doesn’t seem to be getting wide support any time soon.


It works fine in feh (image preview) and all of the open source tools I use for photography mentioned before. I have the browser flags enabled.

I guess I’m not an actual user.


Wikimedia's stats put Firefox at around 10% on non-input-crippled (i.e., full desktop-class) devices.


>>Firefox is currently sitting at an incredible 2.5% market share

Starting to think Firefox users can be put on the "non team players list" and "people that have something to hide list".


Why? I don’t understand the angle.


The main reason this is sad to me, despite the existence of AVIF, is: JXL could losslessly re-compress JPEG images. There's a vast amount of JPEG images out there where the uncompressed data is long lost. Those images can't really take advantage of AVIF, since then you'd end up double-compressing the image; AVIF will do its best to re-create the JPEG compression artefacts. JPEG XL can achieve better compression ratios on existing JPEGs without this double-compression problem.


That doesn't seem like a great reason to use JPEG XL. Existing JPEGs are meeting the needs of their users. Sure, maybe those images could be even more compressed, but it's clearly not a deal breaker, otherwise those images wouldn't have been JPEG encoded in the first place. Second, the amount of existing JPEGs [that people care about] will decrease over time, both in quantity and as a portion of the total image population. So lossless re-encoding is a solution in search of a problem.


Over the past 30 years, JPEG has been the best generally available compression format, and it's what consumer cameras generally use to store their photos. I'm not sure what you imagine people would have used instead if storing images efficiently is "a deal breaker".

As for the amount of existing JPEGs that people care about decreasing over time: This might eventually become true, but only to a point. People's existing photo archives are generally all JPEGs. People care about even (especially?) the older images in their photo albums. And it will take a long time before everyone is adding exclusively AVIF-encoded rather than JPEG-encoded images to their archives.


Does re-encoding JPEGs as JPEG XL really unlock anything new? The answer is likely no. So this case is not useful to optimize for.

As for people's photo albums. The iPhone is already storing new images in HEIF, so JPEGs are becoming less relevant with each passing day. And of course, you can still view the old images perfectly fine as JPEGs.


I don't know what you mean by "unlocking anything new". You have identical images at a smaller size.

Old images in your photo library aren't becoming less relevant with each passing day. I don't know why you would suggest that they are. I know that iPhone has switched to storing images as HEIF and nothing in my comment suggested otherwise.


> I don't know what you mean by "unlocking anything new". You have identical images at a smaller size.

Yes exactly, nothing new is unlocked. So why would I use this format? Was the old size of the images preventing me from enjoying them in any way? No. Of all the old JPEGs out there that absolutely must be losslessly transcoded from the already lossy JPEG encoding, how many are prohibitively large? You can still look at both a JPEG and JPEG XL image locally. Most likely you'd be able to serve both images over HTTP, although the JPEG image may take a bit longer to load. And how many cases require the new and old images be exactly the same? My point is that, if the main selling point of your codec is that you can create the same thing as 30 years ago only smaller, then it's going to be a hard sell compared to other formats out there. If the choice was between JPEG and JPEG XL, then sure, let's use JPEG XL. But the choice is between JPEG XL and other formats with better features.

Other image formats like AVIF add new features that enhance images. Yes, the images are smaller, but also way more powerful. Image sequences, for instance, enable new features like "live images." Converting your old JPEG to a smaller JPEG isn't going to magically enable live images.

tldr: I'm looking for a reason to care that I can losslessly re-transcode my old JPEGs. Ok, the re-transcoded images are smaller and exactly the same as before, but what is a real world reason why I would need that vs. using the old JPEG directly or re-transcoding it to AVIF?


> So why would I use this format?

Because the size is smaller and the quality is the same? Literally the same reason why you'd use any new image format, except that this benefit also applies to existing images, not just new ones?

I'm not against AVIF or anything, if you want to add future images which are using fancy animation features from AVIF then that's cool. You don't have to choose between image formats, you can (and already do!) use the right one for the task.

> tldr: I'm looking for a reason to care that I can losslessly re-transcode my old JPEGs. Ok, the re-transcoded images are smaller and exactly the same as before, but what is a real world reason why I would need that vs. using the old JPEG directly or re-transcoding it to AVIF?

You can use the old JPEG directly, but then your library takes up more space. If you re-transcode your old JPEGs to AVIF, you're losing quality or at the very least irreversibly changing the images.

I'm not arguing that it's the world's biggest deal or anything, but a free 20% size reduction on all photos in an image library and all JPEGs sent over the web doesn't seem like a bad deal.


I think the point is that it is much more important to achieve better compression ratios on the monstrously large files modern digital cameras produce than to reduce the already tiny sizes of people's family albums from the 90's. Compressing a 90 kB file to 45 kB without loss of quality is impressive, but not as useful as compressing a 20 MB raw image to 18 MB.


My JPEGs are significantly larger than 90kB. At this point you're just trolling.


I think you're not understanding my question. You said that the most exciting feature of JPEG XL is the ability to losslessly encode old JPEGs. My point is that, if this is the most exciting feature, then it's unsurprising that the industry is choosing to ignore JPEG XL as the "most exciting feature" is more or less useless.

Technology is a means to an end. Transcoding images is a means. The end is what you do with it. In the 90s and early 2000s, encoding images in JPEG allowed for digital cameras to store vastly more images on their limited storage space. Bandwidth was extremely limited, so JPEG encoding allowed for detailed images to be shared over the web, which wouldn't have been possible in a different format. Today, however, storage and bandwidth are cheap. JPEG already does a decent enough job for old images. Making old images even smaller isn't really enabling any new "ends."

I guess the ultimate question I'm asking is "Why do you want your old JPEGs to be smaller?" What is the end you hope to achieve with smaller JPEGs?


Smaller is what compression is for.


And there are other codecs that compress similarly to JPEG XL that also support other features like animation. So, again, why would I use JPEG XL?

But anyway, this conversation was helpful. I guess there aren't really many reasons to use JPEG XL.


JXL has a feature set that is better than any other solution.

Killer feature #1 is compressing existing JPEG images. Nothing else can do this.

Killer feature #2 is HDR images with over 10 bits of depth and high resolution. It comes out of your camera/phone like this already, but we can't post it online! What a joke.


There are no such similar codecs. JPEG XL is in its own category in compression efficiency when rated by users at the densities/qualities used on the internet. The efficiency steps from libjpeg-turbo to WebP to AVIF to JPEG XL are roughly similar in size.


Individual features of JPEG XL might not be enough on their own, but JPEG XL packages all of those features together, which makes it much more attractive than the alternatives. And JPEG is still really popular while WebP, which only recently reached a point of universal support, still struggles [1]; JPEG will remain popular enough for at least a decade and probably more.

[1] https://w3techs.com/technologies/overview/image_format


> I think I will continue using it with WASM polyfills

Images that can't be displayed without Javascript. Incredible.


Images unconstrained by vendor support. Incredible.


I browse with JavaScript disabled by default, and I already see many sites where images are not viewable. In the case of JPEG XL, the alternative is to use Firefox and enable the flag, so unlike those sites, the claim that the images cannot be viewed unless JavaScript is enabled is not valid.


You can use the <picture> element for a JPEG/PNG/WebP fallback.
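For example (hypothetical file names; browsers skip <source> types they don't recognise and fall back to the plain <img>):

    <!-- the browser picks the first <source> whose type it can decode -->
    <picture>
      <source srcset="photo.jxl" type="image/jxl">
      <source srcset="photo.webp" type="image/webp">
      <img src="photo.jpg" alt="photo">
    </picture>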


While you need a little JavaScript to initialise it, WebAssembly isn't JavaScript.


It's still images that can't be displayed without Javascript enabled. It's also a distinction without a difference. It's third party code downloaded over the internet and automatically executed on my computer.


I was simply being a bit pedantic.

And also, JavaScript should always be enabled. Very, very few people disable it, and they are used to having a broken web experience anyway, so it's not an issue to ignore them. Worst case they don't see pictures until they eventually enable JavaScript.

Executing third-party code in a sandbox is a security risk. You need to accept a level of risk when you use your computer on the internet; by disabling third-party code you are safer, but you also have a much less useful computer.


> JavaScript should always be enabled.

Why? Most pages are static text and images, most pages don't need JS. So why should JS be always enabled?


I think they wanted to say that when you make a website you can pretty much expect everyone to have javascript enabled since it's on by default. Outside of HN IT nerds of course :p


I wouldn't be as prescriptive as the OP, but it seems to me that the vast majority of web sites for 15+ years have had some JavaScript, even if they aren't "single page applications". Sometimes those sites will break completely without JavaScript, but very often it's more that parts of them break, or just work worse.

I know the folks who advocate turning off JavaScript think that such reduced functionality is a worthwhile tradeoff, but they don't know what they're missing in a very literal sense. And I can't help but suspect the main benefit they're getting is not improved security; it's a warm, fuzzy feeling of smugness. Many of them probably also use Emacs.


When I look at the most visited websites, I see mostly web applications that are a bit more complex than static text and images and they all require JavaScript. For sure you could create lightweight versions of many of these applications, without JavaScript and server side generated HTML, but my point is that the users should have JavaScript enabled to browse the web.

https://en.m.wikipedia.org/wiki/List_of_most_visited_website...


That doesn't counter the fact that it's an image that won't display without JavaScript.


True. The very very few people who disabled JavaScript will have to add some more websites to their allow list.


It's not the "very-very few". It's the combined population of several large countries.

Because you're immediately excluding people on old devices, on underpowered devices, on devices on bad networks, on devices with weird/incomplete/glitchy/limited support for what you need, sites where JS just errors out (and dies, because there are no recovery options if there's a global JS error)... And the list just goes on, and on, and on.

The inability of developers to look beyond their latest-and-greatest dev machines with unlimited power and everything under the sun enabled is just mind boggling.


JXL had this coming. They should have written specifications. The implementation can never be the standard. I hope everyone watching this learns from their mistake.


Another take on this:

AVIF is an ad hoc agreement from an industry coalition interested in web media delivery. There are literally no individuals or governments participating, only big internet media corporations and companies producing solutions for internet media needs.

JPEG XL is an international standard related to a larger field of interests (photography, print, industrial quality assurance imaging, multi-spectral imaging, space technology, heat cameras, mastering, raw-like imaging, medical imaging, storage, archiving, media delivery, government needs, ...). The participation fee is moderate and individuals can participate.


I don't know where you got that impression, but JPEG XL has a written specification, which is exactly the standard.


Have you actually tried to read them? They don't exist.

https://jpeg.org/jpegxl/

https://jpeg.org/jpegxl/documentation.html

All we have is a whitepaper.


I'm the author of the first independent reimplementation of JPEG XL [1] and I have definitely read all of them. Do you mean to criticize the lack of openly available standards? In that case: developers have given recent (and generally better) drafts to anyone interested, so it's practically open, though a formal action by JPEG would be more desirable.

[1] Previously on HN: https://news.ycombinator.com/item?id=32885509


I'm going to say this as politely as I am able: closed standards can eat a big, steaming plate of shit. If it's not open, it doesn't exist. Period.


If that's the intention, you can update your original comment to say the "open" standard, because your definition of standard is not universal. And then I would have instantly agreed, because I've experienced this firsthand.

Aside from that technicality though, do you really think that publicly available standards would have changed the situation? I don't think so---Chrome (and to be clear, most other browsers) has supported tons of closed standards anyway.


> If that's the intention you can update your original comment to say the "open" standard, because your definition of standard is not universal. And then I would have instantly agreed because I've experienced this firsthand.

No. A closed standard is not a standard.

> Besides from that technicality though, do you really think that publicly available standards would have changed the situation? I don't think so---Chrome (and to be clear, most other browsers) has supported tons of closed standards anyway.

I think it might have helped, yes.


> I think it might have helped, yes.

Please elaborate.


As an editor of the JPEG XL spec, I fully agree that ISO's paywall policy is bad in many ways.

But I wouldn't say it's a "closed" standard. Not any more closed than, say, the C++ standard or the Unicode standard. What makes a standard open is not the price the publisher puts on copies, imo, but how transparent and open the process is to design and change the standard, and whether or not it is possible to implement and use the standard for free. In that regard, I would consider JPEG XL just as open a standard as C++ or Unicode.

The HEVC spec is publicly available without paywall, but you cannot freely implement and use this standard since it's a patent encumbered, very non-royalty-free codec. I consider it a closed standard for that reason, and the spec being available for free (but not implementable without paying royalties) does not make much of a difference imo.


There are many contradicting examples:

The original JPEG standard is an ISO standard (ISO/IEC 10918) with paid access.

MP3 is also an ISO standard (ISO/IEC 13818-3). Perhaps not as relevant today but was once used by basically everyone.

Access to the standard is only relevant to the implementer. It's of no consequence to users of a piece of software.


Sorry Drew, disagree here. But also not surprised.

Open Standard does not mean Free Standard. As much as I would want the Final Spec 1.0 to be freely available.


JPEG XL is an approved international standard: ISO/IEC 18181

https://www.iso.org/standard/77977.html

Anyone interested in doing an independent implementation can use this specification (or a recent draft of the upcoming 2nd edition). Alternatively, you can look at the source code of one of the three JPEG XL implementations currently available: libjxl, J40, and jxlatte. These are all open-source, and in a way that's an executable specification.


Where does one find a recent draft?


ISO standards are not standards. Closed standards are not standards. Implementations are not a standard. This is a bad joke.


What is a closed standard from your point of view?

Is it simply just one that cost money to access?

If I made a language standard on my own it would not be an open standard from quite a few orgs' point of view. It is annoying that you are just throwing around vague uses of words without giving a clear definition.


A closed standard is not a standard. That's it. It's a document of little to no relevance that may as well not exist.


ISO 7010 is an ISO standard for graphical hazard and safety signs. It is literally everywhere (well, not quite everywhere, there are some exceptions), but it is still used in the majority of the world and is therefore enormously relevant. And it is closed by your definition; you have no way to freely obtain it. Now please explain the apparent discrepancy between your claim and reality, and also do not ignore my prior request to elaborate on how publicly available standards might have helped this particular situation with Google.


You are being incredibly inarticulate, or you are just arguing in bad faith.

What is a closed standard?

Also, the world runs on those documents of "little to no relevance", no matter how much people dislike having to pay to view them. It is a sad state of affairs.

If you actually wanted to make changes you would lobby for them; you are just saying the documents don't exist, which is of little help in changing the status quo.


Have you signed this open letter already?

https://www.theregister.com/2021/07/31/iso_paywall_battle/


JPEG XL was not used anywhere which is why it got deprecated. It was never gonna be the future of anything.


Not used anywhere? E.g. Affinity Photo has support for JPEG XL, but not for AVIF. It's not used in browsers because browsers are nearly all controlled by Chromium, which is clearly biased toward AVIF. https://en.wikipedia.org/wiki/JPEG_XL#Official_support Official support:

    Squoosh – In-browser image converter[42]
    Adobe Camera Raw – Adobe Photoshop's import/export for digital camera images[43]
    Affinity Photo – raster graphics editor[44]
    Chasys Draw IES – raster graphics editor[45]
    Darktable – raw photo management application[46]
    ExifTool – metadata editor[47]
    FFmpeg – multimedia framework, via libjxl[48]
    GIMP – raster graphics editor[49]
    gThumb – image viewer and photo management application for Linux[50]
    ImageMagick – toolkit for raster graphics processing[51]
    IrfanView – image viewer and editor for Windows[52]
    KaOS – Linux distribution[53]
    Krita – raster graphics editor[54][55]
    libvips – image processing library[56][57]
    vipsdisp – high-performance ultra-high-resolution image viewer for Linux[58]
    Qt and KDE apps – via KImageFormats[59]
    XnView MP – viewer and editor of raster graphics[60]
    Pale Moon – web browser[61]


There's only one issue with this list - many of these applications also support AVIF, and were "sitting on the fence" over the issue, which should not be interpreted as a sign of JXL's success. And the ones that don't are very likely to fall in line after the Chromium decision.


> Not used anywhere? E.g. Affinity Photo has support for JPEG XL, but not for AVIF.

It also support PSDs. Should Chrome support PSDs?

> Not used in browsers because they are nearly controlled by Chromium, which is clearly biased toward AVIF.

Ah yes, Google, a major contributor to JXL, is "clearly biased towards AVIF".

I'm sure you'll find your saviours at Apple (governing member of AOM, shipped AVIF this year), Mozilla (governing member of AOM, shipped AVIF in June 2020, default-enabled in October 2021), or Microsoft (governing member of AOM).


> Ah yes, Google, a major contributor to JXL, is "clearly biased towards AVIF".

Why is it surprising to you? Many major companies are contributors to a bunch of standards that they don't end up supporting. Meanwhile Google is forcing device manufacturers to support its codecs in hardware, on pain of sanctions and the removal of support and software. Guess which codec Google wants them to support (hint: AV1).


> Should Chrome support PSDs?

PSD is a pretty heavy format, and does not have the benefits for serving images to end users (e.g. good compression) that JPEG XL/AVIF have. So probably not. But there are still use cases for serving PSDs on the web in creative communities, so it would be pretty cool to have.


I linked https://cloudinary.com/blog/the-case-for-jpeg-xl in another comment which links to several big players requesting full JPEG XL support. It seems the reason why it wasn't adopted more broadly is that there was no full support for it in Chrome. I wouldn't even consider it a chicken and egg situation.


Thanks for this. I looked through the big-player comments and the one I found most persuasive was from Shopify, who had specific, practical reasons for needing this: https://bugs.chromium.org/p/chromium/issues/detail?id=117805...


The reply by the Chromium engineer (from Google? - I'm not sure) to a long thread of people from several companies quantifying the benefits of JPEG XL and requesting that it be supported is just sad:

> Thank you everyone for your comments and feedback regarding JPEG XL. We will be removing the JPEG XL code and flag from Chromium for the following reasons:

> - Experimental flags and code should not remain indefinitely

> - There is not enough interest from the entire ecosystem to continue experimenting with JPEG XL

> - The new image format does not bring sufficient incremental benefits over existing formats to warrant enabling it by default

> - By removing the flag and the code in M110, it reduces the maintenance burden and allows us to focus on improving existing formats in Chrome

---

If I were to put on my tinfoil hat, I would imagine the people involved here are desperate to put 'Removed unused code and removed maintenance burden by X%' in their performance reviews for this year


There's not much to tinfoil hat about here, Google is killing jxl in favor of a format they control: webp.


Your link doesn't mention anyone asking for it.

Also AVIF is more performant for most cases. Lossless is not what matters on the web.


It contains 8 links to the Chromium bugtracker in this paragraph:

> However, if the enthusiastic support in the Chromium bugtracker from Facebook, Adobe, Intel and VESA, Krita, The Guardian, libvips, Cloudinary, and Shopify is any indication, it seems baffling to conclude that there would be insufficient ecosystem interest.

That's why I assumed there was demand for JPEG XL.


JPEG XL implements lossless image compression, but that's definitely not the most interesting feature. It also implements lossless JPEG recompression. So your existing JPEGs can be served with ~20% less bandwidth, without quality loss.

Unlike AVIF, JPEG XL also has advanced progressive delivery features, which is useful for the web. And if you look at the testing described in the post, JPEG XL also achieved higher subjective quality per compressed bit, despite having a faster encoder.


JPEG XL supports lossless, lossy and lossless JPEG recompression.

You can see lossless benchmarks against other formats here:

https://docs.google.com/spreadsheets/d/1ju4q1WkaXT7WoxZINmQp...


Lossless JPEG recompression, if it’s so good, can be done at the HTTP layer.

If a new image format doesn’t have a hardware decoder it’s dead. The security surface of new formats is unacceptable if it’s going to be slow and power-hungry too.

Only problem with JPEG is the lack of HDR.


JPEG XL as a HTTP Content Encoding:

1) transfer JPEG XL,

2) decode the JPEG XL to DCT coefficients,

3) encode a new JPEG1 file

4) decode the new JPEG1 file

5) render pixels

JPEG XL as image format:

1) transfer JPEG XL

2) decode the JPEG XL to DCT coefficients

3) render pixels

Two additional coding steps (3 and 4) are needed in the HTTP Content Encoding approach. If we want to transfer lossless JPEG1s, it is less computation and a faster approach to add JPEG XL as an image codec.

If JPEG XL is too powerful and creates danger for AVIF, then one possibility is to remove features such as adaptive quantization, lossless encoding and larger (non-8x8) DCTs. This effectively turns JPEG XL into a JPEG1 recompressor used as an image codec.

Also, JPEG XL's reference implementation (libjxl) has a more accurate JPEG1 decoder than any other existing implementation. Asking someone else to paint the pixels leads to worse quality (about 8 % worse).


Nope. In my eyes AVIF is not more performant. It makes photos suffer and become blurry, especially so in highly saturated areas, skin, marble, vegetation. Once-beautiful things start to look like cheap plastic.


Check the file size - anything will look bad if you set the compression too high. If you see AVIF images with some artifacting, find a JPEG of the same image and compare file sizes. An AVIF the same size as a JPEG will be better than the JPEG - an AVIF only 50% of the size will probably be visually indistinguishable to most people. An AVIF only 20% of the size... you'll be able to tell. Same for JXL - if I set my JXL to compress to only 15% of the size of the JPEG, it's hardly a fair comparison.


For my eyes:

AVIF fails to deliver a consistent experience at 3+ BPP -- I'd hate to compress my family pictures at AVIF even at high BPP, some part is smudged in a weird way.

libjpeg-turbo and mozjpeg do deliver a consistent experience at 4 BPP

guetzli and jpegli deliver a consistent experience at 3 BPP

JPEG XL delivers a consistent experience at 1.7 BPP


>Also AVIF is more performant for most cases.

Source? JPEG XL has already been shown, over a 10,000-image sample size and a wide variety of BPP, to be better than AVIF, on the latest JPEG XL and AVIF versions.

AVIF, on the other hand, has yet to show anything similar.


JPEG XL was not used anywhere on the web because no browser shipped with JXL support. As soon as Google enabled it in Chrome, Mozilla in Firefox or Apple in Safari, we would've started to see adoption.


Previously yes. Two browsers have now added support.


No? Every single browser on caniuse has either no JXL support at all or has it disabled by default: https://caniuse.com/jpegxl


I don't know about another, but Pale Moon does support it, so your original statement is just false. And it can actually be a good way to test a proposed patch to Gecko---many issues were found and fixed after it first got JPEG XL.


It's not used anywhere because it is/was behind a feature flag so browsers didn't support it out of the box. IIUC, various companies like BBC, Facebook, and others were ready to deploy JPEG XL support.

If you take your approach, nothing new (including the webp standard that Google/Chrome pushed) would be released on the web because nothing is using it.


WebP should not have been released and no one should have used it. Google should stop letting individual employees invent new image formats as hobbies. (This happened _twice_. WebP lossless is a whole different codec from lossy.)

AVIF is the first one that might be acceptable.


Another take on this:

AVIF is a Netflix engineer quick-hacking a video format into an image format -- without actual experience in codec design, image compression or psychovisual research, and without consulting such people to get the design right. When size and memory limitations were noticed, a tiled approach was proposed, in which globally spanning artefacts emerge across the image. No one has been able to propose a fix for this and other kludges yet.

AVIF doesn't match WebP lossless in density, even given 10 more years of compression research. AVIF doesn't even beat PNG at lossless, which itself was hacked together in a few months ~25 years earlier (primitive filtering, mixing filter bytes with image data, botched 16-bit compression, layering a primitive 1980s byte-compressor and filtering instead of designing a codec).


I am glad I am not the only one seeing it. While we're in an extreme minority, I'm still happy that this view is not some insanity, as many claim it to be.


I wonder why nobody used a hidden feature that needed to be activated manually.


It is supported by every image manipulation program I use on a regular basis. Photoshop, Krita, EOG, IrfanView. That's enough for me.


This is irrelevant, what matters is the consumption of the format, not its production.


Isn't that circular reasoning? Chrome shouldn't support JPEG-XL because not enough consumers support it? Chrome is the number one consumer of images. If Chrome added support, suddenly 90% of people would be able to use the format.

(Also EOG and IrfanView are image viewers, not production tools.)


Yeah, to me this is another sign that Chrome's dominance is a menace to the open web. Whatever the best outcome is here, it's very concerning to me that a possibly important improvement can be blocked with so little meaningful explanation.


The explanation is pretty plain: google controls the webp format, jxl competes with webp.


In what way does Google control WebP? They have frozen the format and nothing will be added to it. WebP2 was abandoned. Google is also involved with JPEG XL - actually some developers that worked on WebP now work on JPEG XL.

If anyone controls something, it is the Chromium/AOM/AVIF/AV1 team that decides what multimedia formats go into Chrome/Chromium, thus controlling what is used on the web and ignoring what the web community has to say about it.


Consumption wasn't possible in standard Chrome. So that's hardly a benchmark.


Can't really consume it if it's locked behind a flag. 99% of chrome users never touch the flags at all.


In that case WebP, CSS grid, CSS shadow DOM, HTML5, or any other new feature on the web would be untenable (esp. if deployed behind a feature flag first), because when those were created and implemented in browsers there was no use/consumption of them.


It isn't going to be used anywhere when it is behind a flag.


WebM, VP8/9, WebP, and for that matter MP4, MP3, and the original JPEG weren't used anywhere until there was support for them.


Was it patent encumbered?


From the view of ISO and the JPEG committee, no, they are not patent encumbered.

From the view of AOM members or supporters, yes they are, or at least it's not clear.


Do you have any sources on that? And who would be the one making any claims relevant to JPEG XL or asking for royalties?

Because I have not seen anyone other than Cloudinary and Google declare to have patents relevant to JPEG XL, and those two parties have explicitly granted royalty-free licenses. So to me the situation is as clear as it gets.

Of course it can always happen with anything less than 20 years old that a patent troll appears and makes claims. This has not happened yet though in the case of JPEG XL.


>From the view of AOM members or supporters, yes they are or at least not clear.

Sounds complicated. Why not PNG? :-)


PNG is really bad for anything photographic. As in, it's pixel-perfect, and the images are also absolutely enormous, unusably so.


PNG, while being a very nice format, is dramatically larger than JPEG, JPEG-XL, AVIF, HEIF etc. For multi-megapixel images that size cost adds up.


> I think I will continue using it with WASM polyfills

This is the answer. You can also build your own application-specific codecs this way.

I've been exploring a variation of jpeg/mpeg that uses arithmetic coding and larger block sizes. Virtually all of the fun patented stuff we couldn't use in the early 00's is in the public domain now.
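As a rough sketch of that pattern (the module name and its alloc/decode exports here are hypothetical, not any real library): fetch the compressed bytes, hand them to your own WASM decoder, and paint the result.

    // Rough sketch only: 'my-codec.wasm' and its alloc/decode exports are hypothetical.
    async function drawCustomFormat(url, canvas) {
      const { instance } = await WebAssembly.instantiateStreaming(fetch('my-codec.wasm'));
      const bytes = new Uint8Array(await (await fetch(url)).arrayBuffer());
      // Copy the compressed input into the module's linear memory.
      const inPtr = instance.exports.alloc(bytes.length);
      new Uint8Array(instance.exports.memory.buffer, inPtr, bytes.length).set(bytes);
      // Assume decode() returns a pointer to width*height*4 RGBA bytes.
      const outPtr = instance.exports.decode(inPtr, bytes.length, canvas.width, canvas.height);
      const rgba = new Uint8ClampedArray(instance.exports.memory.buffer, outPtr, canvas.width * canvas.height * 4);
      canvas.getContext('2d').putImageData(new ImageData(rgba, canvas.width, canvas.height), 0, 0);
    }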


The only thing worse than an image that won't display without javascript is an image that doesn't even exist without webassembly. I get that google has removed support so you're looking for other options but maybe consider putting up an announcement on your site(s) that google chrome is not supported instead of making it worse for all other browsers.


> I get that google has removed support so you're looking for other options but maybe consider putting up an announcement on your site(s) that google chrome is not supported instead of making it worse for all other browsers.

I have no horse in the JPEG XL race. I am not even necessarily focused on images. I see value in using WASM (and/or JS) for application-specific codecs. That is all.

None of my decisions have any ability to make things "worse" for other browsers, especially when those browser vendors never intended to support my application-specific codec to begin with.


>making it worse for all other browsers.

What other browsers? Desktop Firefox users who changed a flag in about:config? That's practically nobody.


Also, chrome(/ium) has like 70% market share. Not supporting it is pretty much the equivalent of shooting yourself in the foot.


If you're only making websites for profit, yes. If you're a human person making websites for other humans then only targeting the standards breaking Chrome is bad for you and them. Gotta be the change you want to see in the world. Even if you know everyone else is doing the wrong thing for profit.


One of the main benefits of JPEG XL is its superior progressive decoding, and polyfills will never be able to replicate that feature.


Why do you think that polyfills can't implement progressive decoding?

It's simply a matter of using <canvas> for progressive rendering.


Canvas has to retain all pixels unconditionally, unlike native images that can be unloaded from memory as needed. It is technically possible to implement all other features (using service workers or CSS Houdini), but tons of limitations apply and can easily outstrip supposed benefits.


Can't you just read the stream and emit the image(s) as they get downloaded to be rendered in a canvas - exactly as a native implementation would do?


My comment was apparently too concise to give you a sense of the complications. I can think of these concrete problems:

- Rendering images from a different origin without CORS. This is a fundamental limitation of any JS or WebAssembly solution and can't be fixed. Thankfully this use case is relatively rare.

- Not all approaches can provide a seamless upgrade. For example, if you replace all `<img src="foo.jxl">` with a canvas, the DOM will be changed and anything expecting the element to be an HTMLImageElement will break. Likewise, the CSS Painting API [1] (a relevant part of CSS Houdini) requires you to explicitly write `paint(foo)` everywhere. The only seamless solution will therefore be a service worker, but it can't introduce any new image format; it can only convert to natively supported formats. And browsers currently don't have a "raw" image format for this purpose. JXL.js [2] for example had to use JPEG as a delivery format because other formats were too slow, as I've been told.

- It is very hard to check if a certain image is visible or not, and react accordingly. This is what I intended to imply by saying that canvas has to unconditionally retain all pixels, because if implementations can't decide if it's safe to unload images, they can't do so and memory will contain invisible images in the form of canvases. Browsers do have a ground truth and so can safely unload currently invisible images from memory when the memory pressure is high.

[1] https://developer.mozilla.org/en-US/docs/Web/API/CSS_Paintin...

[2] https://github.com/niutech/jxl.js
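For the service worker route specifically, a minimal sketch (decodeJxlToJpeg() stands in for a hypothetical WASM-backed converter; the worker can only hand back a format the browser already decodes natively, JPEG here, as JXL.js does):

    self.addEventListener('fetch', (event) => {
      if (!new URL(event.request.url).pathname.endsWith('.jxl')) return;
      event.respondWith((async () => {
        const res = await fetch(event.request);
        const jxlBytes = new Uint8Array(await res.arrayBuffer());
        const jpegBytes = await decodeJxlToJpeg(jxlBytes); // hypothetical converter
        return new Response(jpegBytes, { headers: { 'Content-Type': 'image/jpeg' } });
      })());
    });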


> Not all approaches can provide a seamless upgrade. For example, if you replace all `<img src="foo.jxl">` with a canvas, the DOM will be changed and anything expecting the element to be an HTMLImageElement will break. Likewise, the CSS Painting API [1] (a relevant part of CSS Houdini) requires you to explicitly write `paint(foo)` everywhere. The only seamless solution will therefore be a service worker, but it can't introduce any new image format; it can only convert to natively supported formats. And browsers currently don't have a "raw" image format for this purpose. JXL.js [2] for example had to use JPEG as a delivery format because other formats were too slow, as I've been told.

You can get around many of these compatibility issues by creating a custom element that inherits from HTMLImageElement; this provides API compatibility (rough sketch below). For CSS compatibility, the elements you would replace in a MutationObserver would keep the same tag name but use a different namespace.

For the CSS compatibility trick, see https://eligrey.com/demos/hotlink.js/ which replaces images with CSS-compatible (not HTMLImageElement-compatible) iframes.
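Rough sketch of the customized-built-in approach (decodeJxlToBlob() is a hypothetical WASM-backed decoder, and customized built-ins aren't supported in every engine):

    // The element stays an HTMLImageElement, so existing code keeps working.
    class JxlImage extends HTMLImageElement {
      connectedCallback() {
        const source = this.dataset.jxlSrc;
        if (!source) return;
        fetch(source)
          .then((res) => res.arrayBuffer())
          .then((buf) => decodeJxlToBlob(new Uint8Array(buf))) // hypothetical decoder
          .then((blob) => { this.src = URL.createObjectURL(blob); });
      }
    }
    customElements.define('jxl-img', JxlImage, { extends: 'img' });

Markup would then be something like <img is="jxl-img" src="fallback.jpg" data-jxl-src="photo.jxl">, so the plain src still shows where the decoder (or customized built-ins) isn't available.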

> - It is very hard to check if a certain image is visible or not, and react accordingly.

You can use Element.checkVisibility()¹ and the contentvisibilityautostatechanged event²,³ to do this. Browser support is currently limited to Chromium-based browsers.

1. https://drafts.csswg.org/cssom-view/#dom-element-checkvisibi...

2. https://github.com/vmpstr/web-proposals/blob/main/explainers...

3. https://caniuse.com/mdn-api_element_contentvisibilityautosta...


Thank you for pointing out contentvisibilityautostatechanged, I was aware of `content-visibility` but didn't know that it has an associated event. I'm less sure about CSS compatibility, hotlink.js for example used an iframe which opens a whole can of worms.


You don't need to replace the img with a canvas in the DOM - you can capture the canvas output as a data URL.


Can't reply to the reply to your reply, but... blobs can be used here and avoid the inefficiency of data URL conversions.
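Something like this minimal sketch (assumes an existing canvas and a target <img id="target">):

    canvas.toBlob((blob) => {
      const img = document.querySelector('#target');
      // Revoke the previous blob URL so it doesn't leak, then swap in the new one.
      if (img.src.startsWith('blob:')) URL.revokeObjectURL(img.src);
      img.src = URL.createObjectURL(blob);
    }, 'image/png');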


Thanks for the tip! (Not for JPEG XL specifically, but I'll definitely check that out and update some code where I use data URLs instead of object URLs accordingly.)


This is exactly what I use in my JXL.js, along with Web Workers and OffscreenCanvas.


You can, but that is hugely inefficient. Any additional draw to the canvas has to generate a data URL for the image to be progressively decoded.


I've just added a config parameter to JXL.js for choosing the target image type: JPG/PNG/WebP. Keep in mind that PNGs can take a lot of memory!


>It is very hard to check if a certain image is visible or not

Canvases are rectangles. The viewport is a rectangle. Checking if rectangles overlap is easy.


"The image is visible" != "the canvas and viewport overlap", and the latter is not even a good enough approximation (the image can be obscured by other layers, for example). Intersection Observer v2 takes us a bit closer, but visibility in its definition (not obscured at all) doesn't fully agree with what we want (has some pixels visible, some false positives allowed).
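For what it's worth, a sketch of the Intersection Observer v2 route (Chromium-only at the moment, and still only an approximation of "some pixels visible"):

    // trackVisibility also accounts for occlusion and opacity, at the cost of a
    // mandatory minimum delay between notifications.
    const observer = new IntersectionObserver((entries) => {
      for (const entry of entries) {
        entry.target.dataset.actuallyVisible = String(entry.isVisible);
      }
    }, { threshold: 0, trackVisibility: true, delay: 100 });
    document.querySelectorAll('canvas').forEach((c) => observer.observe(c));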


This is what I did with JXL.js Multithread (https://github.com/niutech/jxl.js#multithread-version) - but instead of <canvas> I am pushing the blob to the <img>.


Do not do this. Your website will become very slow. Wasm does not have the hardware acceleration necessary to efficiently do codecs.


> Wasm does not have the hardware acceleration necessary to efficiently do codecs.

See: https://jsmpeg.com

and: https://jsmpeg.com/perf.html


I don't speak for Google/Chrome but I really need to clarify something that keeps being misstated:

The test of JPEG XL support has been removed.

It was always behind a flag and never actually supported. That is not to say that Chrome will not ever support JPEG XL. The two concepts are fundamentally unrelated. So we should stop conflating them.

In fact, it could be that the test was successful and now Chrome is waiting on something else before it adds proper support. I would argue this is likely the case. And the thing they should be waiting on IMHO is widespread support.

Adding image formats to the web should be the last step because it is so difficult to remove them. I bet other browser devs in the image space wish they could remove ico support. Oh well.

"So why add it in a test just to later remove it?" I hear someone ask. Because you want to make sure that path is clear and you're in position long before it is needed. Once you know the path is clear, you don't want to keep dead code around. It is easy enough to dig it out of old commits when the time comes.

Heck, it could even be as simple as someone wanted to add a big item to their promo packet. Who knows.

But at the end of the day, Chrome did not remove JPEG XL support. And I haven't seen anything saying Chrome wouldn't later add it. If we all want JPEG XL (I do), we should continue to help its adoption grow. Using it via WASM is a strong indicator. Get it to the point where "of course Chrome will add support -- it is everywhere".


I think it's fine to assume the more likely interpretation: this company simply isn't interested in the format anymore, rather than whatever larger stretch you're describing here. "You don't want to keep dead code around." That's rich :) maybe if this was some one-person hobby project or something.


I strongly disagree with "more likely interpretation". For several reasons:

- Firefox & Safari also don't fully support it. For your claim to be true, Google would need to be in control of them, too.

- I still get nag emails that I'm long overdue to remove dead test code from my Chromium tests. I'm being a bad person to other Chromium devs by being lazy and not getting around to it.

I think it is far more likely that they ran a test and know they're ready to support JPEG XL when the time comes. And that time (IMHO) is when there is already widespread support outside of browsers. Browsers should be the last ones to add it. Not the first.


Chrome supports plenty of things that are unsupported by Firefox and Safari. E.g. you can set the audio sink id on individual media elements in Chrome. You can't do that in Safari and Firefox.


Cleaning up code is something I get great pleasure from. Not only does it make navigating the code base easier, because people new to it won't look at something that isn't ever actually run in production; it also prevents people from depending on something that we don't really want to be used anymore.


> In fact, it could be that the test was successful and now Chrome is waiting on something else before it adds proper support.

Waiting for libjxl 1.0 is reasonable. Though I can't help but to think it would be less work to just keep the experimental feature in there until it is stable.


>Adding image formats to the web should be the last step because it is so difficult to remove them.

Not from a company that added WebP to its browser and wanted a new video codec every 2 years.


> Using it via WASM is a strong indicator.

Is there a good way to do that, though? What I think would be needed to do it properly:

1. You can implement picture/audio/video/document file formats as extensions, without needing to intercept requests/responses (so that save, view-source, etc will still access the original file). (I think some versions of Mozilla have this capability (although some details of the design are not as good as they could have been), but as far as I know, WebExtensions does not.)

2. Extensions can be native code, WebAssembly code, or JavaScript code. (Native extensions will be excluded from the extensions catalog, and the end user or system administrator must install them manually.)

3. Add an "Interpreter" response header to indicate what to do if the browser does not understand the file format. For example, this might link to an implementation of that file format in WebAssembly.

4. Such WebAssembly programs can also be used outside of a web browser, in local standalone programs. For example, an external program might run the WebAssembly code to convert data from stdin to stdout.
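A minimal sketch of point 4, runnable under Node ('codec.wasm' and its exports are hypothetical): read compressed data on stdin, write decoded data to stdout.

    const fs = require('fs');
    const input = fs.readFileSync(0); // read all of stdin
    WebAssembly.instantiate(fs.readFileSync('codec.wasm')).then(({ instance }) => {
      const { alloc, decode, outputPtr, outputLen, memory } = instance.exports; // hypothetical exports
      const inPtr = alloc(input.length);
      new Uint8Array(memory.buffer, inPtr, input.length).set(input);
      decode(inPtr, input.length);
      process.stdout.write(Buffer.from(memory.buffer, outputPtr(), outputLen()));
    });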


> In fact, it could be that the test was successful and now Chrome is waiting on something else before it adds proper support. I would argue this is likely the case.

If Chrome deems that something is useful (to them, for some definition of useful), they just release it (and if this something was on a standards track, too bad, it's out now).


In that case they would have worded their rejection message differently.


I don't know about that. Some of the first responses I saw were along the lines of "It was a test. Tests have end dates. When that end date comes, the code gets removed unless it is being committed to full support."

Re-read this after-outrage-began-and-being-put-on-the-spot response with the new lens of "Non-browsers should add it first": https://groups.google.com/a/chromium.org/g/blink-dev/c/WjCKc...

Doesn't it now read like the bulk of the message is others' support? And the internal debate of "Should we be the ones to push this first, despite others' support and objections, and browsers ideally adding support last?"

EDIT: To help clarify what I mean, let me insert my thoughts here inside a quote: "When we evaluate new media formats [new as in not yet widely supported], the first question we have to ask is whether the format works best for the web [which is why it isn't yet widely supported -- IE it should actually come to browser first]."


Browser vendors have other priorities besides having the latest-greatest compressor: code size, attack surface, long-term maintenance and compatibility risks, and interoperability.

Having a format that is a few percent smaller than the formats they already have is just not a high priority.

This isn't a conspiracy against JPEG XL. Browser vendors (other than Google) have been resisting shipping WebP for 10 years, until Chrome-only websites forced them to. GIF still is a thing despite being by far the worst in every image benchmark.

Every few years browser vendors are asked to adopt a new format, which is then beaten by an even newer format, but the vendors are stuck supporting the old one forever. They've dodged the bullet on JPEG 2000 and JPEG XR, but are now stuck with WebP (and AVIF, if they were to replace it).

If in a few years it turns out everyone uses JXL and other vendors have it, Chrome can re-add it.


I think the way to go for browsers is to support new codecs (as long as they're plausibly useful for the web, royalty-free, etc), but to avoid getting stuck with them by having web devs assume they can use codecs without content negotiation / fallbacks. So e.g. they could randomly in 1% of the requests pretend only to support the legacy codec (say jpeg and png for images, or h264 for video), which basically forces web devs to put proper fallbacks in place (using multiple sources, or using http content negotiation) since otherwise random parts of their website would just be broken. Putting those fallbacks in place is needed anyway: it's just a bad practice to ignore the long tail of browsers that don't support the latest codecs yet. The 1% "fallback-only" requests would mainly be a way to force web devs to take that long tail seriously since they would notice the failures even when testing only on Chrome.

That way, we can get 99% of the compression gains of new codecs right now, without having to wait until a codec gets sufficient traction outside the web, and without getting stuck with the codec forever if it turns out not to be that great after all, or if a better alternative shows up. If it does turn out to be great and it gets ubiquitous support everywhere (like JPEG and PNG), then it can be considered to make it a permanent addition to the web platform, i.e. make it part of the "fallback formats" pool instead of the "99% of the requests" pool.

The web platform needs a long-term media codec deprecation strategy. It needs to be a better strategy than just resisting any change until a codec has become ubiquitous and unavoidable — the web is an extremely important use case for media compression, and it's at this point unlikely that new codecs can even become ubiquitous when they cannot be used for this key use case. I think the strategy of "enable by default but only for 99% of requests", as bizarre as it may seem at first glance, would be a good way to make progress without adding irreversible bloat to the web platform at every step.
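
To make the content-negotiation half concrete, here's a rough server-side sketch (Node/TypeScript; the file layout and names are made up) that serves whichever variant the Accept header advertises and falls back to JPEG otherwise:

    import { createServer } from "http";
    import { readFile } from "fs/promises";

    // Pick the best variant the client advertises, falling back to JPEG.
    function pickFormat(accept: string): { ext: string; mime: string } {
      if (accept.includes("image/jxl")) return { ext: "jxl", mime: "image/jxl" };
      if (accept.includes("image/avif")) return { ext: "avif", mime: "image/avif" };
      if (accept.includes("image/webp")) return { ext: "webp", mime: "image/webp" };
      return { ext: "jpg", mime: "image/jpeg" };
    }

    createServer(async (req, res) => {
      const { ext, mime } = pickFormat(req.headers.accept ?? "");
      // Assumes pre-encoded variants like hero.jxl / hero.avif / hero.webp / hero.jpg.
      const body = await readFile(`./images/hero.${ext}`);
      res.writeHead(200, { "Content-Type": mime, Vary: "Accept" });
      res.end(body);
    }).listen(8080);

The 1% "fallback-only" idea then just means the browser sometimes omits the newer types from Accept, and a correctly written server keeps working.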


> Browser vendors have other priorities besides having the latest-greatest compressor: code size

If this were the case, Google wouldn't output 400 new web APIs every year and shove every feature under the sun into Chrome.


For Web APIs there are various motivations. For example, the Chrome side of Google sees native apps as an existential threat, so parity with native trumps most other concerns (IMHO they're exaggerating, but that's what they do). In terms of images they aren't behind enough to worry: AVIF is close enough to HEIC (both are based on the HEIF spec).

Incremental upgrades to CSS properties and selectors may take less code than an advanced codec library, and it's 1st party code, not 3rd party code. There's likely an engineering bias here: everyone prefers to write their own code and trusts it more. Dependencies are automatically seen as suspicious.

Some new flashy APIs like VR/AR, local filesystem access, WebAuthn, or payment request enable functionality that was not possible before. That's user-visible and adds something to the platform. On a high level a new codec is just images, like the platform already had, only a little bit faster. I advocate for web performance, but I also realize it's a tough sell, not a "wow" feature that can be used in marketing. Most sites aren't even optimizing their JPEGs yet, WebP has very little use, almost nobody uses AVIF, so lack of a better codec isn't a limiting factor yet for web perf in general.

In terms of "fire and motion" play, they can churn 400 minor CSS or JS properties to get 400 points in browser-vs-browser comparisons. JPEG XL just gives them one checkbox. Even looking at this super cynically, if they add a WebTrombone API that no other vendor wants, they'll get to permanently look better in "who's got more APIs" comparisons, but JPEG XL can be easily added to other engines. However, if you convince other vendors to release JPEG XL support, and add it to high-profile benchmarks, Google may be more interested.


HEIC and AVIF are similar, similar like a Barbie toy and a monster truck toy placed in a similarly sized box, wrapped in the same Christmas wrapping, both with a red rosette glued on the box. Toy connoisseurs focus on what is inside.


You've nailed it on the head. Hats off to you, I'd never come up with such an apt description.


>Browser vendors have other priorities besides having the latest-greatest compressor: code size, attack surface, long-term maintenance and compatibility risks, and interoperability.

You’ve missed the two most important ones: “business reasons”, e.g. trying to harm some (potential) competitor, and someone trying to get promoted. Don’t assume there are valid technical reasons behind every corporate decision like those; usually there aren’t.


Maybe if JPEG XL had a chat functionality, Google would have a motivation to include it…

But seriously, it's an ISO standard with a free implementation. There's no Google competitor attached to it. Nobody gets promoted for failing to add features. Google is even missing out on a chance to make Safari look outdated again. There's no reason to interpret indifference as a conspiracy.

BTW: I've worked on the HTML5 spec and codecs for many years, so I've seen how the sausage is made on both sides.


Incorrect. Facebook have said they want to use JPEG-XL. Facebook/Instagram is definitely a big tech competitor to Google, especially with regard to ads.


I quite liked this article from about a month ago on "The Case for JPEG XL": https://news.ycombinator.com/item?id=33442281


The case is not that compelling to me…

> * Lossless JPEG recompression

Obviously, jpeg has this. I know it is supposed to be a 20% reduction in size, but that is a relatively small incremental improvement.

> * Progressive decoding

jpeg also has this, and it’s a niche feature these days.

> * Lossless compression performance

png covers this

> * Lossy compression performance

Nice, but a relatively small incremental improvement in the general case.

> * Deployable encoder

This isn’t a case for jpeg xl, just table stakes for any format.

> * Works across the workflow

Not really a reason to include jpeg xl in the browser unless it actually is being widely used across workflows. Actually, this is something of a negative considering the resulting complexity of the codec and image format and that it muddles the distinction between the authoring and published form of an image.

There are negatives too, so it’s not enough to be a little better for some cases. Codecs are attack vectors. An all-new complex codec is an all-new broad attack surface. And, of course, it’s yet another feature to test and maintain, which costs time and necessarily pulls focus away from other things.


> Obviously, jpeg has this. I know it is supposed to be a 20% reduction in size, but that is a relatively small incremental improvement.

Let's not forget that a 20% reduction across the entire internet is very significant.

> jpeg also has this, and it’s a niche feature these days.

Because we can't rely on it, I'd suspect?

> png covers this

Without covering the rest.

> Not really a reason to include jpeg xl in the browser unless it actually is being widely used across workflows.

This line of thinking will inevitably lead to a chicken-and-egg problem. Just like with EdDSA certificates and, for example, H/2 PUSH (support for which got deprecated at basically the same time some frameworks finally added it).

I think the web doesn't move as fast as Google thinks or hopes it does, and that has made and will keep making people reluctant to adopt new features.


I think progressive jpeg works just fine, I recall it being quite popular in the early days of the internet when loading a large image could take a long time on landlines. You'd get a blurry preview that would incrementally become more detailed.

I agree that it's niche these days because typically modern connection speeds mean that it's a waste of resources to iterate on the picture instead of loading it at once. Even on mobile networks the issues are connection reliability and latency, once the image starts pouring in it'll likely come in its entirety quite quickly.

If you load images large enough that it becomes a real issue you would generally (in my experience) just have a smaller independent thumbnail/preview image you'll display while the large version is loading. That usually gives you more control on how things will look and avoid having a blurry mess on the user's screen.


I don't think that's accurate. Progressive JPEG has been the biggest winner in the image formats during the last 10 years. It grew from pretty much nothing to about 25 % of all JPEGs today.

Much of this is powered by mozjpeg creating progressive jpegs by default. Chrome and others render them progressively, with recent (2019–21) improvements in quality. While the first round or two of updates can be noticeable, the last 40–55% of loading is usually impossible to notice as a change.


Interesting! That may be why I never noticed it. Thank you for the correction which unfortunately few people will see...


> Let's not forget that a 20% reduction across the entire internet is very significant.

In 2003. Now that we have streaming video, it's pretty minor.


I can assure you that anyone dealing with large volumes of storage and/or bandwidth of still images (i.e. pretty much any high-traffic website), where costs directly related to storage/bandwidth are measured in millions of dollars, will not consider a 20% saving to be a "pretty minor" thing.

I agree that in video, the savings can be even more substantial. But keep in mind that the median web page still has zero videos, while the median web page has about 1 MB in images. In practice most of the transferred bytes in web browsing are html (which is very small after compression) and images. Javascript, css and fonts also contribute to the page weight but those tend to often be locally cached since they're mostly static compared to the html and images.


Since you'll have to support clients that don't have jpeg XL support, however, storage savings are unlikely to materialize any time soon if ever.


And also in 2016: "Lepton image compression: saving 22% losslessly from images"

https://dropbox.tech/infrastructure/lepton-image-compression...

It's now deprecated though, and rather tragicomically they suggest switching to a different format and mention JPEG XL as an alternative.


PNG quality AND progressive decoding. AFAIK there is no other supported codec offering this.


PNGs and GIFs can be interlaced, which is similar to progressive decoding.


Not really similar. 10% of an interlaced image is effectively a bunch of horizontal lines. 10% of a progressive image is usually high enough fidelity to present the image in full colour - even less if you are happy with greyscale.


https://bugs.chromium.org/p/chromium/issues/detail?id=117805...

Star this bug to express support for JPEG XL.


So obviously not all of Google agrees with the decision to remove support then!


This is the original issue which led to the feature being added behind a flag. Then it was removed.


Please feel free to "star" the issue for jpegXL support. https://bugs.chromium.org/p/chromium/issues/detail?id=117805...

From reading through the justification for the removal it seems this decision was (in part) due to some erroneously compiled benchmarks:

> http://storage.googleapis.com/avif-comparison/index.html

"That test was done in Chrome without even using the latest version of chrome! There were many performance improvements since. Additionally, the color format (yuvj444p) used is a color format no sane person would use for JpegXL. If they were to test different color formats, they would see JpegXL wins in all of them, except the exact one they used!" (https://bugs.chromium.org/p/chromium/issues/detail?id=117805...)


Sad. Being able to immediately upgrade all JPEGs to jxl without any reencode loss is huge.

And no royalty issues either, plus no need for a much more complex decoder.


Yeah, jxl support is a huge win for anyone working with large quantities of jpeg files; aka basically everyone now that we all have smartphones with cameras and jpeg is the default format.

Jxl is a huge upgrade for simpler images as well, I think I save something like ~60% of the space used by my comics collection with no visual detail loss by converting to jxl.


Minor point: JPEG is not the default format for Apple iPhone cameras, they’ve used HEIC by default for quite a while now.


My main boggle with images today is that I am making red/cyan anaglyphs that perform well on older displays but have crosstalk on ‘wide gamut’ displays because wide gamut displays turn a (0,180,0) green in sRGB into (16,176,16) on the display in an effort to make the saturation closer to the sRGB green and less like the color you see when you get hit with a green laser pointer. The extra red shines through the right filter though and makes a ghost image.

I am thinking it ought to be possible to have an sRGB and a wide gamut image and have the front end pick the right image. What I really want to do is do the anaglyph processing with WebGL, but that involves telling the web browser to render a wide gamut color space directly.


You're correct it's possible to have the front end select the right image. That can be done in CSS via "@media(color-gamut: srgb) {}" (p3 and rec2020 are the other supported options) for the different pre-rendered images. In this way you don't need to script anything and the client still only loads the required image.

WebGL/Canvas methods will be possible one day but right now how that will all get implemented is still being worked on. https://github.com/WICG/canvas-color-space/blob/main/CanvasC... the same can be said for general CSS support or any other interface in the browser beyond images and video at the moment https://w3c.github.io/ColorWeb-CG/.
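
And if you do end up in script anyway (say, for the WebGL path), the same queries are exposed via matchMedia; a rough sketch, with hypothetical file names:

    // Pick a pre-rendered anaglyph matching the display's gamut.
    function pickAnaglyphSource(): string {
      if (window.matchMedia("(color-gamut: rec2020)").matches) return "anaglyph-rec2020.png";
      if (window.matchMedia("(color-gamut: p3)").matches) return "anaglyph-p3.png";
      return "anaglyph-srgb.png"; // safe default for sRGB-ish displays
    }

    const img = document.querySelector<HTMLImageElement>("#anaglyph");
    if (img) img.src = pickAnaglyphSource();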


You are trying to do colour management on the web, and also likely in the general consumer PC space.

This is impossible, and will remain so for at least a decade.

No vendor other than Apple has the slightest interest in getting colour even vaguely right, because they're all run either by colorblind lizards, or have corporate drones that are given identical monitors. The cheapest monitors possible that are basically greyscale.

Recently, Windows has taken several steps back from Vista's fairly good colour management. For example, the newer Win UI 3 SDK is 8-bit sRGB SDR only. Similarly, HDR support on Windows 11 is totally broken, and every semi-annual release does random things to it. When bugs are identified by consumers (not Microsoft!), they delay the fixes to the next semiannual release... at which they break other things. There hasn't been a release of Windows ever that can correctly drive a directly attached HDR display, such as the type used in a laptop.

None of these corporate drones understand the problem because they literally cannot see it.


Do we know why they chose to remove it?

I can't see any rationale posted on the linked page, or the comment threads it links to. Maybe I missed it though.


The reasons stated are:

- Experimental flags and code should not remain indefinitely

- There is not enough interest from the entire ecosystem to continue experimenting with JPEG XL

- The new image format does not bring sufficient incremental benefits over existing formats to warrant enabling it by default

- By removing the flag and the code in M110, it reduces the maintenance burden and allows us to focus on improving existing formats in Chrome

https://www.phoronix.com/news/Chrome-Dropping-JPEG-XL-Reason...


Thanks, that all sounds fairly reasonable then.


Not really. It was hidden behind a flag, so where should the "entire ecosystem" (who?) show interest?


The ecosystem are the other browser vendors.


By that logic WebP should have never got out of the door, because all non-Chromium browsers were explicitly reluctant to implement support for it and only finally started doing so after a delay of several years (in the case of Apple, several years more). But of course WebP was Google's own image format…


It took the other browsers up to a decade to support WebP, who wants to do that again?

JPEG-XL is based on the Google Pik proposal among other things, and they were active in speccing it - there is no conspiracy here.


Yes, fair enough, parts of Google contributed to JPEG-XL, too. But the basic gist remains – for WebP they decided it was the greatest thing since sliced bread and enabled it regardless of what the rest of the "ecosystem" thought, whereas now all of a sudden they're supposedly that much concerned, even though Chrome is by far the majority of that ecosystem, Apple prefers their own format choices anyway, and it's not as if it's not also a chicken-and-egg situation.


Yes, they recognize now that WebP was a mistake, which is why they won’t single-handedly force any more image formats upon the web.

(Safari learned this lesson earlier when they got stuck being the only browser to support JPEG2000, and Microsoft with JPEG-XR)


Have you heard about AVIF?


Things can change in 12 years. Google also killed off its WebP v2 development last month. Apple is all in on AVIF/AV1 but has shown little to no interest in JXL (might be burnt from back when they supported JP2).


> has shown little to no interest in JXL

I'm not aware that they've shown any interest in JXL, so "little" seems unnecessary.


> It took the other browsers up to a decade to support WebP, who wants to do that again?

Google, with AVIF


> Google, with AVIF

Google is a major contributor to JXL.

AVIF, as an outgrowth of AV1, has support from every major browser publisher, with the possible exception of Microsoft (though they are a governing member of AOM) as they're the only one which isn't shipping AVIF support right now (and as of October): https://caniuse.com/?search=AVIF


Huh? AVIF is an image format offspring of AV1 from the Alliance for Open Media, whose founding members include Apple, Mozilla and Google.

AV1 and AVIF is landing in all browsers. They get the AVIF support for free when they implement the AV1 decoder.


>They get the AVIF support for free when they implement the AV1 decoder.

No, it isn't free. You have to implement the support for the AVIF container. This is why libavif exists.


Also cameras. Which either aren’t interested (no one really cares about non-RAW formats on DSLRs), or are going AVIF once hardware encoders land (Android phones)


The DSLR market has pretty much stopped (rarely is any new DSLR camera released) as everyone shifted to mirrorless cameras. Camera manufacturers mostly added HEIF (Canon, Sony, Fuji) as the non-RAW image format, because they already have HEVC for video.

A camera with lossy / lossless 12-bit or 14-bit JPEG XL would definitely be interesting for many photographers. Not everyone wants to be forced to do complete post-processing from RAW. JPEG is too limited (8-bit only) and HEIF isn't much better, while not having much support (especially on the web) because of the patent situation.


Fully agree with this sentiment.

Also good to know that Jpegli (a traditional jpeg codec within libjxl) allows for 16-bit input and output for '8-bit jpegs' and can deliver about ~12 bits of dynamic range for the slowest gradients and ~10.5 bits for the usual photographs.

Jpegli improves existing jpeg images by about 8 % by doing decoding more precisely.

Jpegli allows for encoding jpeg images 30 % more efficiently by using JPEG XL adaptive quantization (by borrowing guetzli's variable dead zone trick), and the XYB colorspace from JPEG XL.

Together, you get about 35 % savings by using jpegli from the decoding + encoding improvements. Also, you get traditional JPEG that works with HDR.

Jpegli is predicted to be production ready in April or so, but can be benchmarked already.


jpeg xl can power raw formats and raw-like lossless formats, and is super interesting in that domain

another format that is limited to three channels or has bit depth limitations would not be similarly interesting

another format that is for example 50 % worse in lossless compression would not be interesting in this space


That's as useful as saying DNG can be powered by lossless JPEG. No standard lossless JPEG decoder can do anything useful with DNG files.


Most RAW formats are based on TIFF... no standard TIFF decoder can decode a RAW format either.


..who together make up what, like 10% of the browser market?


Up to 100%, depending on the platform. For years many have complained about Chrome moving too fast and shipping stuff without a consensus with the other vendors; this might signal a change from that.


Waiting for consensus from other browsers while ignoring consensus from the broader software industry is not what most people meant.


Firefox has it on Nightly. Not great either.


Couldn't any interested parties just enable the flag for testing? That doesn't seem an insurmountable challenge, given that the code is, or was, already in there.


Well, I am interested in saving server bandwidth by delivering smaller image files at the same quality. I can't enable a flag on my user's browsers though. :)


You know, industry / browser people talk to each other. But not necessarily on social media. Or publicly.


Yep, and they wanted support for JPEG XL


> Do we know why they chose to remove it?

Phoronix had an article on why Google decided to remove JPEG-XL.

https://www.phoronix.com/news/Chrome-Dropping-JPEG-XL-Reason...

The official justification seems to be a) experimental nature of JPEG-XL, b) lack of demand, c) maintenance burden.

Take it with a grain of salt. Personally I don't buy it.


>experimental nature of JPEG-XL

I wonder what exactly they mean by this, because JPEG-XL is an ISO standard. So if it's already standardized, how the heck is it still "experimental"?

https://www.iso.org/standard/77977.html


JPEG XL was implemented in Chrome behind a flag.

Other than that there is nothing experimental in JPEG XL.


> I wonder what exactly they mean by this because JPEG-XL is a ISO standard.

I see there's some confusion. A specification might be formalized in an international standard, but implementations of said standard can and often are experimental.

You don't magically turn code production-ready by having a committee sign off on a document.


> I wonder what exactly they mean by this because JPEG-XL is a ISO standard.

So were JPEG-XR and MJPEG2000.


and neither of your examples is experimental


Seems to have been partly based on this: https://storage.googleapis.com/avif-comparison/index.html


At the bottom there's a link to a lossless comparison, which shows JPEG XL outperforming WebP, AVIF, and PNG on size (no metrics on decode speed).


Yes, and lossless JPEG doesn't matter, especially on the web.


lossless matters for lots of images on the web (like logos and other images with text). lossless jpeg recompression is also really useful because it allows CDNs to save bandwidth by replacing jpeg with jpeg-xl.


> lossless matters for lots of images on the web (like logos and other images with text)

Couldn't some make the point that SVG can be a better format for things like that, especially logos? After all, raster graphics will always have drawbacks when compared to vector graphics for things like logos and other non-photographic content.


On lower-DPI displays, which remain very widespread outside of mobile, vector graphics usually look worse than pixel graphics.


Even with losses it's better than WebP.


and AVIF


Thanks for the link. That does look quite damning, seems that the already-supported AVIF format is overall the better choice based on these metrics.


I have a moderate-sized collection of mostly B&W comics. JXL is the first format that outperforms JPEG 2000 on them. It cleans AVIF's clock on them, in a rather unfair comparison of JXL's default settings vs hand-tuned settings for AVIF, with different settings for color and B&W comics.

It also far outperforms an optipng encoded PNG for lossless. I know either my corpus or my eyeball is different from the benchmarks used by AVIF but I do want to share this outlier.


Not sure about your comparison specifically, but in general the hard edges in comics should be an ideal case for AV1's spatial prediction. JPEG-XL eschewed that in favor of being inherently progressive; its splines promised to make up some of the difference but last I checked the encoder still doesn't use them.

Categorically, this is the main reason why even the main JPEG-XL dev agrees that AVIF is currently better with non-photo content [1]

[1] https://twitter.com/jonsneyers/status/1550161314961555457


I observe that it depends on which quality you aim for. At low quality AVIF does a good job with line drawings. I suspect much of it is because of the 8-color local palette mode. When you raise the quality/size a tiny bit (to need 9+ unique colors in an area), JPEG XL does an equal job with the edges, but starts getting all the subtleties, lonely faint dots, and noisy or weak textures right, whereas it can be impossible to convince an AVIF encoder to store them at any quality setting.

Cartoon/anime collectors seem to be passionate about image quality and they may have observed the same as I have. At least I have learned a lot about image quality from anime fans during my image compression career.

Web devs don't care that much about quality, they are more looking into creating monetary savings in bandwidth and sometimes trying to lower the latencies. E-commerce is yet another thing, where the image quality may turn into revenue and becomes important again.


It's the stippling and hatching that most compression formats struggle with


Keep in mind that it's from the AVIF team though. :)


Actually, the results of the AVIF team's comparison do show the value of JPEG XL, if you look a bit further than the main page and dig into the actual plots. E.g. according to this plot, JXL is about 15% better than AVIF for 'web quality' images (around 1 bpp): https://storage.googleapis.com/avif-comparison/images/subset...

Here is a detailed analysis: https://cloudinary.com/blog/contemplating-codec-comparisons


There are lots of questions about that data, for example why was the testing carried out in Chrome 92?

https://mobile.twitter.com/jonsneyers/status/159877483772342...


Try to replicate it.


It was no longer unstable beta


It competes with WebP?


This seems vaguely familiar, I wonder where I've seen another dominant browser vendor engage in anticompetitive behaviour before?


Google has added AVIF.

In both cases the reason was that they've already invested in the video codec powering them, and the 1-frame-video-as-image format was almost free to add.


This "two for one" reasoning does have some downsides though. Here I listed ten of them:

https://twitter.com/jonsneyers/status/1596965036131426304?s=...


Back in the good old days, things like these could be added through plugins, instead of requiring everything to go through the browser manufacturer.

Plugins were removed mostly due to security issues (and also maintainability and portability worries), but nowadays that could be avoided by using WASM (see also https://hacks.mozilla.org/2020/02/securing-firefox-with-weba...).
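
For what it's worth, such a WASM "codec plugin" today roughly boils down to a service worker transcoding on the fly; a minimal sketch, where decodeJxlToPng stands in for whatever WASM decoder you actually ship:

    // sw.ts - sketch of a WASM image "plugin" via a Service Worker.
    declare function decodeJxlToPng(data: ArrayBuffer): Promise<ArrayBuffer>; // hypothetical

    self.addEventListener("fetch", (event: any) => {
      const url = new URL(event.request.url);
      if (!url.pathname.endsWith(".jxl")) return; // only intercept JXL requests

      event.respondWith(
        (async () => {
          const original = await fetch(event.request);
          const bytes = await original.arrayBuffer();
          const png = await decodeJxlToPng(bytes); // transcode in WASM
          return new Response(png, { headers: { "Content-Type": "image/png" } });
        })()
      );
    });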


Have a look at WASM decoder JXL.js: https://github.com/niutech/jxl.js


Google's justification for removing this feature sounds like Tesla's for removing Radar.


JPEG XL needs some support from the camera manufacturers. They’ve mostly gone with HEIF so far and seem unmotivated to do anything different.


JPEG XL was finalized 6-7 years after HEIF so I am not sure it is a good comparison. Especially seeing that JPEG XL was only finalized over the course of this year and last.


I know, I’m just saying that HEIF is the next-gen incumbent to beat. A lot of comparisons are being done with WebP and AVIF and legacy JPEG, which aren’t the real competition.


Yeah, the comparisons made here are more for the web market, which HEIF has not entered at all, which I guess is why it is not included.

Adobe have added JPEG XL to Camera Raw, so maybe there is a chance for JPEG XL to catch up in that area, but it is hard to say. And I agree that it could be interesting to see comparisons.


HEIF is an abstract concept like 'binary'. Saying that image is stored in binary doesn't tell much, the same for HEIF. JPEG XL can be stored in a HEIF container.

It wouldn't help with technical compatibility, and a container format containing everything under the sun is going to bring a lot of long-term support burden and attack surface with it. Also, it is going to bring chaos since no one supports the whole format but just some fraction of it.


> HEIF is an abstract concept like 'binary'

Okay.

> Saying that image is stored in binary doesn't tell much, the same for HEIF.

No, it tells plenty. It's more like saying "beat saber binary". Yeah, maybe it's a picture of Grandma inside, but it's very probably a song.


Camera manufacturers are old spineless companies. In all those years they have done nothing for digital image formats, and it is the most important thing in a camera. They weren't even able to come up with, or even attempt, a standard RAW format. All that came from Adobe (DNG), which the camera manufacturers have happily ignored in favour of their own proprietary solutions to this day.


Can't really blame camera manufacturers when RED is suing everyone who's shipped a feature even remotely resembling their patent portfolio.


Well RED is mostly suing for their RAW video compression patents, which are just dumb and should never have been granted in the first place (and AFAIK Nikon is currently battling to invalidate them). But this is also their own problem - they have put almost no R&D into the software side of digital photography and videography, like formats and processing. They have a nice camera which outputs 12-bit and higher images - they should be the first ones requesting and defining a new image format for consumption which can handle that.


Reading the comments on the bug, supporting comments for JPEG XL started flowing in there around 24 August 2022 (with a couple early ones on Aug 16 and 17). Was there some publicity about JPEG XL around that time, and did someone urge others to request Google to enable it by default? I can imagine that that was when the decision process at Google started whether to keep or remove it. The reasons for removal given on October 31 are rather vague and seem to underestimate the interest in the format.


That was around the time the final parts of the standardization procedure were getting finished: https://en.wikipedia.org/wiki/JPEG_XL#Standardization_status


That's when Adobe formally expressed support for JPEG XL (comment 61), which is a big milestone on its own, and people started to push Google to finish the experiment in light of that event, as everyone expected that it would land at some point, most likely shortly after libjxl reaches 1.0.


Good riddance to bad rubbish. PNG 'optimizers' that change the image data rather than the compression and framing settings were bad enough; the last thing we need is to displace JPG/PNG entirely with a format that doesn't distinguish[0] between lossy and lossless compression in the first place.

0: No, embedded metadata doesn't count; the same format is being used for two incompatible kinds of compression.


Unfortunately, if the format supports at least lossless compression (i.e. it is able to represent any pixel data exactly, with a reasonable amortized compression rate) then there is no practical way to distinguish lossy compression from lossless compression. PNG -> JPEG -> PNG is a lossy compression but the resulting output is seemingly lossless while it isn't.


Sufficiently badly behaved software can definitely still screw you over, but with separate formats you at least have a chance to notice that there's a problem during the PNG -> JPEG part of the process.

Decoders like libpng will outright refuse to handle JPGs, image viewers will usually refuse to handle JPGs named "image.png" (invalid png file), and even some web browsers at least say "image.png (JPEG Image)".

With JPEG XL, images can get switched to lossy mode silently, without breaking anything, and then back, also silently, also without breaking anything. JPEGXL -> JPEGXL -> JPEGXL is a lossy compression that doesn't give you any intermediate state where touching the images will loudly scream "something is wrong here"; you just get compression artifacts, for no obvious reason, with no indication that they weren't already like that in the original image.


> image viewers will usually refuse to handle JPGs named "image.png" (invalid png file), and even some web browsers at least say "image.png (JPEG Image)".

As you have noticed, the latter behavior is slowly becoming a norm even for user-facing applications (paint.net for example can handle them) because users have no idea how .png and .jpg can differ. And good JPEG encoders have made a visual inspection a lot harder. It's kinda futile.

> you just get compression artifacts, for no obvious reason, with no indication that they weren't already like that in the original image.

I should note that, while this is hardly an ideal situation, generation loss in modern image formats is much harder to spot in the intended bits-per-pixel range. And JPEG XL is one of the best performing image formats in terms of generation loss, so repeated lossy compression is not a big issue for it.


> generation loss in modern image formats is much harder to spot

As noted, things that make lossy compression more silent make the situation worse, not better.

> As you have noticed, the latter behavior is slowly becoming a norm even for user-facing applications

Yes, and that's a problem that should be fixed. JPEG XL instead makes things even worse than they already are. Hence good riddance to bad rubbish.


I believe the problem is fundamentally not solvable and maintaining a false dichotomy is bad for users. I can argue the same for, say, audio formats. Users have been trained to think that mp3/ogg are lossy and flac is lossless. Now flac streaming is a thing, but streaming services often give you flac files transcoded from an mp3 original, because that's all they've got. If you have paid a premium for flac (oh, many do) you are screwed, but in reality users can't tell any difference anyway. So maybe the premium was pointless all along and we should be less dogmatic about the technical lossy vs. lossless distinction. If you can hear a compression artifact then it doesn't matter whether the format was nominally lossy or lossless.


Ultimately, 'real lossless' does not exist since all raster image data (or audio data) consists of quantized samples with a limited spatial (or temporal) resolution (and in case it was captured, as opposed to synthesized, there will be errors caused by the capturing process too). But as long as the precision is high enough to be indistinguishable by humans in the viewing conditions (or editing conditions) that matter to them, everything is fine. Whether that is achieved through lossless codecs (possibly used in near-lossless ways) or through lossy codecs is probably less important in practice.

What matters is getting the workflow right to minimize generation loss and precision loss during authoring. If your workflow critically depends on assumptions like "png is lossless, jpeg is lossy", you should probably re-think your workflow. A jpeg straight from a camera is 'more lossless' than a downscaled 8-bit png that was created from it. Just going by file type to determine where in the workflow you are, is not a good strategy.


> A jpeg straight from a camera is 'more lossless' than a downscaled 8-bit png that was created from it.

Well there's your problem right there: PNGs don't come from cameras. PNGs come from image generation software that produces specific, discrete pixel values. Changing #555555 to #565656 in lossless image data is no more acceptable than changing "UUU" to "VVV" in lossless text data. Unfortunately, gzip et al don't perform well on image data, so different but still lossless compression algorithms are required.


> As noted, things that make lossy compression more silent make the situation worse, not better.

Ridiculous.


Pure politics is clearly and officially the only reason why Google does things these days.

Best image codec on the planet has "not enough interest from ecosystem".

Being able to say this on a public mailing list with a straight face goes a very long way in showing how disconnected from reality folks at Google now are.

And it seems that comment comes from someone who claims to be an engineer ... that also goes to show the quality of engineers who still work there, where reality doesn't seem to factor in decision making anymore.

There was a time where one of Google's motto was "put the user first and everything else will follow".

LOL, how things have changed.


There’s also the politics of them not supporting MP4 files in <img> too

https://bugs.chromium.org/p/chromium/issues/detail?id=791658


You should see what they said about supporting DHCP on Android for IPv6.

They have simply opted not to support it because you "should be using something else"


And they were right; nobody should use DHCP with IPv6. It is like bringing horse carts into the space age.


Chromium is in the business of adding things websites should have no business concerning themselves with, like Bluetooth support, while removing support for useful things: now JPEG XL, previously RSS, MathML.


MathML is making a return next month. https://chromestatus.com/roadmap


> MathML is making a return next month

Yeah, and that's arguably thanks to Igalia and all of their sponsors. They've put a colossal amount of work into this project. If you have the means, it's worth donating for ongoing support, too: https://opencollective.com/mathml-core-support

Google just removed it and would've been content with it staying gone. They said "it wouldn’t have been a good return on investment for us to do it".


Well, MIDI support is pretty cool; I could just go to the Novation website and update my SL49's firmware (after appropriate confirmation). Also the Tasmota project has firmware upload over serial, which is pretty accessible if you have Chrome.

I think Bluetooth could similarly be useful for e.g. heart rate monitoring apps for sports.


> I think Bluetooth could similarly be useful for e.g. heart rate monitoring apps for sports.

As the OP commented, why not leave the burden of managing/pairing the heart rate monitoring device to the OS? In the same way audio and input devices work? The only real use case that comes to my mind with Bluetooth support is indeed monitoring, but it’s not about heart rate.


Because then you are limited by the use cases the browser vendors have thought of, instead of enabling new applications someone else thought of.

I mean, why not also use it for controlling a LEGO robot from a Scratch app? Configuring presets of your coffee maker without needing a mobile app? Updating the firmware of your soldering iron via vendor web page?

Just providing supervised access to Bluetooth LE would satisfy all of these without adding a custom interface for all current and all future devices we might want a web page to interact with.


chromium is a business?

I thought the business was Chrome, whereas Chromium was the open source (and GPLed) project that makes the code that Chrome uses and that would almost certainly not start out as GPLed if it was a new project now.


Chrome isn't GPL.


ah, so the open code would (will?) vanish the instant it's not cheaper (and-or better) to develop the core engine in this way.


If JPEG XL was actually utilised they would not be deprecating it


How would a web developer use a feature which is disabled by default? Should they put up a banner saying "in order to view this page, restart your browser with this command line parameter"?


You'd use it the same way any other web feature without widespread browser support is rolled out--progressive enhancement, i.e. load it at runtime if heuristics indicate the necessary browser APIs are available, otherwise fall back to a "degraded" version.
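
E.g. the usual trick is to try decoding a tiny sample image and only swap in the new format when that succeeds; a sketch (the data URI is a placeholder, not a real JXL payload, and the data-jxl attribute is just one possible convention):

    // Resolve true if the browser can decode the given sample image.
    function supportsImage(dataUri: string): Promise<boolean> {
      return new Promise((resolve) => {
        const img = new Image();
        img.onload = () => resolve(img.width > 0);
        img.onerror = () => resolve(false);
        img.src = dataUri;
      });
    }

    const TINY_JXL = "data:image/jxl;base64,..."; // placeholder payload
    supportsImage(TINY_JXL).then((ok) => {
      document.querySelectorAll<HTMLImageElement>("img[data-jxl]").forEach((img) => {
        if (ok) img.src = img.dataset.jxl!; // upgrade to the JXL variant
        // otherwise keep the JPEG/PNG fallback already in src
      });
    });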


Not the parent commenter, though one thing is to support a feature that is somehow spread across some browser families and/or versions, and another thing is to support a feature that is mostly unavailable on clients. It is probably not worth it from an economic point of view once you factor in the time spent developing/maintaining the feature plus (for this case) the additional required storage.


I use it with the picture element as the preferred codec. I often found AVIF was a bigger size than WebP and JXL, so I stopped bothering, as the encoding times are just awful.


I think the bug tracker around jpegXL support showed high interest and support for it already².

Having it opt-in also means most websites can't use it, given that the vast majority of their userbase are on default settings.

2: https://bugs.chromium.org/p/chromium/issues/detail?id=117805...


Does this mean that some Googler is going to get a promotion in six months by adding it back? It's my understanding that a lot of bizarre changes Google makes are best explained this way.


There are people at Google still fighting for JXL


This is disappointing to see, I had high hopes for JXL.


I guess that’s that then. AVIF wins the format war for the next JPEG.


No loss there, just slowdown and additional complexity for the web devs to deal with.

Jpeg xl covers several important domains that are not served by AVIF. Jpeg xl will anchor itself in those domains where AVIF cannot provide sufficient quality or speed. Eventually it cannot be ignored any more and will be made available in browsers, too.

Once jpeg xl is made available there is no reason to use jpeg1, WebP or AVIF anymore. Then we can simplify again.


> Jpeg xl covers several important domains that are not served by AVIF.

I'm not finding any. Mind listing them?

> Once jpeg xl is made available there is no reason to use jpeg1, WebP or AVIF anymore. Then we can simplify again.

Unfortunately, JPEG XL wasn't "better enough" to have a unique selling proposition. And AVIF is the format that actually simplifies things, because it uses the same compressed media format (AV1) as the next-gen video distribution format.


>> Jpeg xl covers several important domains that are not served by AVIF.

> I'm not finding any. Mind listing them?

I don't know much about JXL vs AV1 from a technical perspective, both seem significantly better than JPEG. But JXL has the practical advantage that it can losslessly recompress JPEG images for a size improvement without an image quality improvement. When converting JPEG to AV1, you're stuck decompressing the JPEG into raw pixel data, with JPEG artefacts and all, and then compress that pixel data with AV1, so you get the artefacts from JPEG and the artefacts from AV1. Decompressing the resulting AV1 image will result in different and almost certainly worse quality pixel data.

So the ability to do a risk-free lossless conversion from JPEG to a better format and get a size improvement is a pretty darn big domain that's not served by AV1.


Thanks!


But video is not the same as still imagery. Different tradeoffs must be made for motion.


Like BPG and HEIF, AVIF is based on keyframes (i-frames). There's no tradeoff because there is no motion, the point of keyframes is to reset the entire frame.

Keyframes are still images embedded in the video stream.


Keyframes are meant to be visible for less than 100 milliseconds in typical cases. Not to say that AV1 is bad at still images, but there are tradeoffs associated with this fact ("low fidelity imagery").


> Keyframes are meant to be visible for less than 100 milliseconds in typical cases.

Keyframes don't disappear with the next frame — they're the basis of all the delta (P- and B-) frames that follow until the next I-frame.

Also, AVIF does support features like progressive encoding/decoding¹, which may be the kind of "still image only" tradeoff you're thinking of.

¹ https://aomediacodec.github.io/av1-avif/v1.1.0.html#layered-...


> Keyframes don't disappear with the next frame — they're the basis of all the delta (P- and B-) frames that follow until the next I-frame.

Exactly this. P/B-frames are not encoded as a delta to the input I-frame; they are encoded as a delta to the compressed-then-decoded I-frame, otherwise errors in the I-frame would accumulate. So any imperfection in I-frames has a chance to be fixed by subsequent P/B-frames, but this is impossible in a still image.

You have a good point that AVIF layered image items can act like such P/B-frames. Do libavif (or other AVIF implementations if any) make use of them? JPEG XL has the same feature (zero-duration frame) but I think libjxl doesn't use it to encode a residue when the input image is not animated---in fairness such residues are very hard to compress though.


> You have a good point that AVIF layered image items can act like such P/B-frames. Do libavif (or other AVIF implementations if any) make use of them?

Seemingly! https://github.com/AOMediaCodec/libavif/blob/main/CHANGELOG.... includes the notes "Support for progressive AVIFs and operating point selection" and "Update the handling of 'lsel' and progressive decoding to AVIF spec v1.1.0" in 2021.


There’s also the challenge that AVIF isn’t cheap to transcode

Getting a cache miss with Cloudinary on large avifs can sometimes result in a multi second delay while they’re generated

Lack of JXL support is going to cost sites that serve huge numbers of images, e.g. Cloudinary, millions of dollars in extra compute power


I don't see why you're getting downvoted. With Chrome removing JXL and all the browser vendors' expressed intentions to get behind AVIF, it certainly seems like AVIF is the clear winner in this round of the image format wars.


I believe current AVIF decoders and encoders are not competitive against JPEG1 codecs from the libjxl project. Check out the 'jpegli' effort. My guesswork is that jpegli is likely 5-10 % better for photography use case than AVIF. Jpegli also allows for similar HDR dynamics, 10-12 bits, like AVIF.


Jpegli can be further compressed by lossless jpeg1 support in jpeg xl, and some more by using full lossy jpeg xl.


It’s a tough battle.

For about 10 years I have been thinking about new file image file formats, it wasn’t until last year that I started really using them. For the web there is the cost of data transfer and the cost of data storage. You can save on data transfer by adding more files in more formats but you pay more in storage cost. To fully realize the economy of better file formats you need to retire JPEG and not add more files.

I finally got convinced that WEBP was supported well enough that it was worth using 𝐚𝐧𝐝 that the image quality was really good with small files. I learned that the disadvantages of new file formats that claim to compress better are real but the better compression isn’t always real. I see people write a blog post where they compress three sample images but a real evaluation project is a lot more than that.

So the world may well be better off with a second-best file format that people really use than being stuck with JPEG because people are struggling to make up their mind about what new format to use, support, etc.


Pretty sure that JPEG won the format war for the next JPEG, decades ago.

No one wants this. The benefits just don't outweigh the headaches in compatibility and content duplication. JPEG is good enough, and always has been. We'll all be happier celebrating such an impactful technology than grousing and constantly trying to fix what ain't broken.


> Pretty sure that JPEG won the format war for the next JPEG, decades ago.

JPEG for lossy image formats, PNG for lossless.


and SVG for vector-based.


Yeah nah. The scripting support (both direct and via styling) makes shipping SVG a complete hell. Aggressive (restrictive) CSP policies actually degrade and break SVG styling because of that. And it's absolutely necessary if you allow direct SVG linking / opening.

If anything, SVG is the one image format most in need of replacement.


AVIF? The war was lost when Apple set the default iPhone photo format to HEIF. You can't compete with that.


Apple is on the AOM board, and introduced HEIF before AVIF even existed (HEIF shipped in 2017 in iOS 11 and macOS 10.13, the first AVIF images were published late 2018, and the format was finalized early 2019).

Apple has also shipped AVIF support in iOS 16 and Ventura, and AVIF-enabled Safari in the same.

HEIF support was never enabled in Safari. Still isn't.

There is no war. HEIF support was mostly a way to optimise storage on iPhones.


Not even Safari on iPhone supports HEIF. Compare HEIF support with AVIF support: https://caniuse.com/?search=heif, https://caniuse.com/?search=avif.

I don't see this changing either, most browser vendors are going to decide that AVIF is good enough and not pay to license the HEIF patents from the MPEG group.


AVIF lives inside an HEIF container, too.


To add a bit more detail, both AVIF and HEIF use the same file/container format: ISO Base Media File Format¹ — a.k.a. "ISOBMFF", ISO/IEC 14496-12.

ISOBMFF is also used for MPEG-4, AV1, etc. If nothing else, the world appears to have settled on a flexible, extensible container format that can support any future media formats we can come up with.

¹ https://en.wikipedia.org/wiki/ISO_base_media_file_format


Your HEIF are insignificant compared to JPEG.


HEIF can't compete while being bogged down by patent licensing costs.


An image file format with a maximum resolution of 8193x4320 (without hacking pieces together) can never win a format war, let alone one in 2022 for pete's sake.


Only JPEG can surpass JPEG.


What's up with this JPEG XL cult on Hacker News? I think the Chrome team pretty much handled this in the best way possible. They put everything in public and allowed anyone to comment on their main thread. They responded to a lot of the comments. They seem to have had the best intentions in having a JPEG XL implementation in Chrome, but the team decided it's not worth it, which is a valid decision for someone doing ROI analysis; after all, engineers specialising in this are not cheap. It's like everyone is hinting at some conspiracy by Google that is not there, given that they spent time on the spec and wanted to promote this.

Even I think the incremental 20% gain of JPEG XL is probably not worth the security risk and increased maintenance. JPEG decoders have been targeted so many times in the past even though they had been audited by the best security folks.


There is an interesting cult I only became aware of after this affair, namely a group of WebP haters. Apparently there are a lot of them, but they weren't visible until JPEG XL came to be seen as a "solution" to WebP, and many of them became vocal supporters of JPEG XL at any cost. So that's a big part of the cult.

Another part is probably a growing opposition to Google and Chrome. I had to explain again and again that JPEG XL was also jointly developed by Google (Research Zurich) and Cloudinary, but you can still reasonably argue that Chrome aligns better with Google's own interest than Google Research does. And if you apply the same stringent criteria to AVIF, AVIF doesn't stand well either---it would have made zero positive difference if AVIF had been deployed this year and not 2 years ago, and libavif is not that small. As AVIF and JPEG XL have their pros and cons when compared to each other, this alone can be seen as suspicious enough from outside. Keep in mind that Chrome is now the Web Browser, so there is a lot more to expect from them, like it or not.


I'm probably part of the group you'd call "WebP haters", especially if you look at my comment history[1]. I would rather describe myself as extremely disappointed in WebP.

It's a clear win over PNG in the lossless use case. I have encountered a few outliers where a PNG results in a smaller file size than lossless WebP, but they are sufficiently rare to not worry about it. It also excludes the image sizes too large for WebP to handle (my system has a mix of WebP-lossless and PNG files for that reason). For lossy use cases, however, WebP is not a win over JPEG if you are concerned about anything other than file size. MozJPEG closes the encoding size gap rather significantly, and WebP-lossy's origins in VP8 intraframes and its limitation to 4:2:0 subsampling are just... awful. I never use WebP-lossy. It gains nothing over JPEG and comes out as inferior in basically all scenarios.

I don't view JPEG XL as a "solution" to WebP, but rather as a solution to JPEG. WebP was never competitive in the first place.

[1] In particular: https://news.ycombinator.com/item?id=33448714


You have a right to be disappointed, I was the same and that alone doesn't make you a part of the cult. ;-)

I agree that WebP was initially oversold and better JPEG encoders have made lossy WebP increasingly less appealing; lossless WebP is still worthwhile today though.


I don't even believe it was oversold. There were lots of independent reviews and analyses, even by Mozilla themselves, showing that WebP wasn't even good. And yet people still use it because they somehow hate JPEG.

If there was ever a "cult", it is the "Anti JPEG cult" and the "WebP cult". Not JPEG XL.


I'm the author of lossless WebP. From my personal point of view I agree, the benefits of WebP lossy were too marginal to balance off its weaknesses (I have observed somewhat unpredictable quality, tendency to blurring, occasional reduction of saturation, and limitation to 420).

I like MozJPEG in the lower quality range, but I'm disappointed with it in the medium-to-high quality range (q85-100) -- there guetzli or jpegli does quite a bit better. Often MozJPEG appears worse than libjpeg-turbo in the highest quality whereas both guetzli and jpegli are substantially (~35 %) better.


HN loves, in the aggregate, to hate on Google. So Chrome removing support for an image format is a good opportunity come up with inane conspiracy theories. Repeatedly.

The good thing about a conspiracy theory is that facts don't matter. JPEG XL being a primarily Google-created format in the first place? The format being dead in the water on the web without support from Apple/Safari, which was never forthcoming? Nobody wanting a repeat of WebP? Irrelevant, you can just pretend that this is actually a format that every other browser supports.


> The format being dead in the water on the web without support from Apple/Safari, which was never forthcoming?

Never stopped Google from pushing its own codecs. Often without even a stable version of the spec for years. And often threatening third party ecosystem with "implement this or else".


Some people don't like Google or Chrome in general, for various reasons (valid or not), and they'll use any opportunity to criticize them.

JPEG XL was made in the JPEG committee and the two main proponents were Google and Cloudinary. So it's insane to say that Google in general would be opposed to JPEG XL. This was not a Google team that was part of the Chrome team though.

The way the Chrome team has handled this was not "the best way possible" in my opinion. They first made a decision, and then a month later they showed the data that lead to it (that was one week ago). This data was created by the AVIF team. They did invite anyone to provide feedback on this data, but they did not wait for this feedback to execute on the decision. I don't think this is the best way possible to handle this. A better way would be to first provide data, then wait for feedback, allow all sides to provide their input, have an open and honest discussion involving all stakeholders and experts, and then make a decision. That would be a better ordering of steps.

I agree though that this is not some conspiracy by Google. It is just a difficult technical decision to make, and they're currently making the wrong decision in my opinion, but this is something that has to be motivated technically, not emotionally or politically.


I am probably among the first, if not the first, to put JPEG XL on HN's radar. I am definitely not in a JPEG XL cult. Although I do dislike Google.

I am simply a supporter of better tech. (And I am already omitting the WebP hate here.) And I suspect a lot of us are. And IMO neither WebP nor AVIF is better than JPEG XL.

Had Google actually posted a thorough analysis stating that JPEG XL is no better than, or even inferior to, AVIF, and concluded that they decided not to go with it, it would have been fine. (I may disagree with their take, but at least you could see it from their POV.) Not only did they not do that, they provided the data with an out-of-date version of JPEG XL [1] and a set of tests that does not represent actual internet usage. Compare that to [2], where JPEG XL is surprisingly better even where they didn't expect it to be.

So if a) everything JPEG XL released and tested is true, and it is truly better than AVIF, then b) why isn't Google supporting the better image codec?

I am also perfectly fine with Google saying: I don't like it. I hate it because it is not AOM, even if JPEG XL is better. Fine. Chrome is Google's browser. Do whatever you want with it.

Just don't ever come out again and say everything you do is better for the Web.

[1] https://storage.googleapis.com/avif-comparison/index.html

[2] https://twitter.com/jonsneyers/status/1563442356493230080


I know essentially zero about this space, but I would almost argue categorically that 20% is not "incremental".


Then why hasn't everyone switched to WebP from PNG/JPEG, which gave a >25% gain over JPEG and has been supported by Chrome for 12 years? For better or worse, no one wants to sacrifice compatibility for a 20% gain.


Because WebP does not offer a >25% improvement. As far as I know, it is only better than JPEG in some cases, and then seldom by that much. [1] And of course, this is only when you compress an image lossily. If you try to recompress an already-lossy JPEG as a WebP, it will suffer even more quality loss in the process. (And if you try to recompress it losslessly, the resulting WebP will be bigger.)

JPEG XL lets you recompress an existing lossy JPEG losslessly, saving you 20% in the process. This is something that other formats like WebP or AVIF do not and can not offer.

[1]: https://siipo.la/blog/is-webp-really-better-than-jpeg
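
A minimal sketch of the lossless JPEG-to-JXL transcode described above, assuming the reference libjxl tool cjxl is installed (file names are placeholders):

  import os
  import subprocess

  # For JPEG input, cjxl does a lossless transcode by default:
  # the existing DCT coefficients are repacked, not re-encoded.
  subprocess.run(["cjxl", "photo.jpg", "photo.jxl"], check=True)

  before, after = os.path.getsize("photo.jpg"), os.path.getsize("photo.jxl")
  print(f"{before} -> {after} bytes ({100 * (before - after) / before:.1f}% saved)")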


Even the site you pointed to shows an average 18% decrease in size compared to the best-but-slow JPEG encoder.

"JPEG XL lets you recompress an existing lossy JPEG losslessly” This is such a niche usecase. Who cares about bit more lossy compression for an already lossy compressed file.


Are we looking at the same graphs? In the linked comparison, WebP has at best maybe a 10% improvement over JPEG for a 500px image, and actually performs worse as the images get bigger. (The 18% you’ve likely read is in comparison to cjpeg, not MozJPEG, which compresses better.)

There’s also no reason one should tolerate further loss just because the image is already lossy. Whether the original is lossy or not, you either want to keep it exactly the same, or you are okay with losing quality to save some space. Archiving and preservation come to mind as the first reasons someone would want the former.


> Who cares about bit more lossy compression for an already lossy compressed file.

Perhaps for those unaware: "without quality loss". I guess the people who care are those who store a lot of images in JPEG format (at least relative to the storage space they have). In my case, I have >1TiB of photos in JPG format (yes, I know), and it would be nice to get ~200GiB back for free without losing quality. For a company offering an image storage product, this would be a great cost reduction.


I don't think it's a niche use case. The amount of existing JPEG files is huge. Having a way to compress them better, without having to worry about introducing additional loss, is a "no-brainer" type of improvement.


I guess you have a misunderstanding here: it's a lossless compression of an already lossily compressed file (which had used suboptimal coding methods). You will get a bit-perfect original JPEG file when you decompress a recompressed JPEG XL file. No additional compression artifacts there.
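
For anyone who wants to verify the bit-perfect round trip themselves, here is a small sketch, assuming the libjxl command-line tools cjxl/djxl are installed (file names are placeholders):

  import hashlib
  import subprocess

  subprocess.run(["cjxl", "photo.jpg", "photo.jxl"], check=True)
  # Asking djxl for a .jpg output reconstructs the original JPEG bytes.
  subprocess.run(["djxl", "photo.jxl", "roundtrip.jpg"], check=True)

  sha256 = lambda p: hashlib.sha256(open(p, "rb").read()).hexdigest()
  print("bit-identical:", sha256("photo.jpg") == sha256("roundtrip.jpg"))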


JPEG and JPEG XL are compatible.


> What's up with this JPEG XL cult in hacker news?

I'm not sure it's on HN specifically. There seems to be a JXL cult, and like butters they flock to any mention. And in the same vein, they seem to inject large amounts of copium when reality doesn't go their way.


I don't like them either, but you do sound like the naysayers to Rust back when it was less mature, don't you? Having cults doesn't mean anything; just stick to verifiable facts and logical reasoning regardless of them.


What’s the issue with being a naysayer to Rust? As long as someone is not actively harming or spamming the Rust community, it is fine, and Rust is free to do what it wants. On the other hand, look at the JPEG XL Google group and see all kinds of demanding and spammy messages asking the same questions literally dozens of times.

> Having cults doesn't mean anything

It does mean that they are the worst people to talk to.


> What’s the issue with being a naysayer to Rust?

There is no issue per se, but I'm referring to a certain group of people who didn't like Rust because they thought many if not most Rust users were in a cult. This observation was partly true and partly false; I've previously described what possibly went wrong [1].

Even at that time some people, whether they liked Rust or not, tried to continue a constructive discussion, and I'm asking for the same here. Your claim should not depend on the presence or absence of the quote-unquote cult.

[1] https://news.ycombinator.com/item?id=32802291


Which google group are you referring to? The bugtracker? Some of the comments there are useful imo, e.g. when image experts from various companies bring their views. Others are basically just "+1", which is better done by just starring the issue. But it does look like most people who did not have anything useful to add did indeed do exactly that, comparing the number of comments to the number of stars.


Yes, the bugtracker [0]. It’s subjective and I could point to multiple examples, but those would be (rightly) dismissed as cherry-picking. To me the entire thread really felt like a mob attack, and it is filled with comments like:

> "We do not bloody care!"

> "We are the kings of the universe, we dictate which image formats you can use".

[0]: https://bugs.chromium.org/p/chromium/issues/detail?id=117805...


I agree that such comments are not useful. I can understand the frustration people may feel (very well actually), but it is not productive to express it in such ways. I don't encourage such reactions, but I do think they are understandable when the first argument for removal is "There is not enough interest". That can be perceived as someone telling you they're removing the thing you want to have because _you_ are not interested in it, and I can see how that can be infuriating and provoke emotional responses.


Proposal: some of the criticism may be motivated primarily by the meta-level issue rather than the object-level issue, viz. what this kerfuffle says about HTML5 and web technology in general.

The web has many positive aspects but modularity isn't one of them. HTML5 has turned into an unfocused attempt to specify a whole computer from bottom to top - everything from a [virtual] CPU, to programming languages, graphics stacks, text rendering, UI toolkits, MIDI, sockets, storage, Bluetooth, USB and more. Unlike other operating systems, the resulting Chrome pseudo-OS has nearly no extensibility. Attempts to use WebAssembly to hack around this yield mixed results at best, given the problems of needing to constantly re-download the modules (cache segmentation), needing to re-JIT-compile the modules, and the inability to share code (process isolation).

Images are a core part of the web's functionality, they were literally one of the first features ever added to HTML. The fact that new image formats can't be added for code maintenance reasons, even when having clearly unique features and interest from major companies, worryingly suggests that the web platform may be heading towards a form of technical bankruptcy. HTML has support for obscure features virtually no web sites will ever use like MIDI or USB support, whilst an upgrade to a core feature that every website uses (except HN of course!) apparently cannot be done any longer. It means implementing a complete yet non-extensible OS stack from top to bottom is a task too large for even the world's largest and richest corporations.

In the beginning, the web's architects realized and understood that it wasn't feasible for browsers to implement everything you might want to embed in a web page. The plugins API was an early addition to the platform as a consequence. When it was deprecated, the stated reasoning was as follows:

https://blog.chromium.org/2014/11/the-final-countdown-for-np...

"Last September we announced our plan to remove NPAPI support from Chrome, a change that will improve Chrome’s security, speed, and stability as well as reduce complexity in the code base"

At the time this sounded wise. But since then tens of millions of lines of code have been added to Chrome (almost all C++) so "reducing complexity in the codebase" seems like a hard argument to make with hindsight. Browsers were capable of running plugins inside their process sandboxes and exploits in Blink are found regularly, so it's not really clear that it made much security difference either.

So maybe that's one reason it's getting such a reaction. This JXL incident isn't just about JXL. It's about what it implies for the future. Who will even bother researching new image codecs now, if even Google itself can't get a better codec into the web?

It may be time to start thinking about what alternative designs to the web might look like, with a strong focus on modularity and extensibility whilst still taking into account everything that was learned over the past 30 years. I have some ideas already but haven't written them up yet.


> The fact that new image formats can't be added for code maintenance reasons

Your entire premise is wrong here. A new image format is being added across all major browsers (AVIF).

So what's the distinction between JPEG XL and AVIF? On a technical level, it's hard to find a way in which AVIF is preferable. It's just gross. But the reality is that Apple will support AVIF and will never support JPEG XL, and an image format that doesn't work on iOS-based browsers is useless. Basically nobody will deploy it; even if it works in Chrome, they'll just continue using other formats. We know this from nobody deploying WebP in a similar situation. The blood is on Apple's hands here.

Why is adding e.g. USB support useful, then? Because that's entirely new functionality, unlocking new capabilities, with no obvious fallback to a slightly less efficient older version, unlike with images.


Where did you get the information that Apple will never support JPEG XL?

I haven't heard or seen anything from Apple, neither in favor nor against JPEG XL.

I would assume though that when Adobe and Serif (and their open source alternatives like Gimp, Krita, and darktable) are adding JPEG XL support in their products, there will be some incentive for Apple to add it in their products as well. After all, they do have nice HDR-capable screens now, so it would not really make sense for them to not support the best codecs currently available for HDR, which are AV1 for video and JPEG XL for still images. Apple does have a long-standing reputation for being excellent in high-fidelity image workflows, and I don't see how they would be able to keep that reputation without adding support for JPEG XL.

So I don't see why Apple can't just support both AVIF and JPEG XL. AVIF is 2-3 years older so of course they'll land support for that one first, but I don't think that should be seen as an indication that they'll not support JPEG XL.


I believe Apple will be the second major OS to add JPEG XL after Linux distros. They understand user value in the creative space. Creative work occasionally needs more than 12 bits, no surprises approach to quality, and performant lossless coding.


Perhaps that should have been worded more precisely: "The fact that some new image formats can't be added"? Code maintenance costs are the stated justification for not adding JXL. We can assume any other new image format that isn't AVIF would hit the same barrier. What if tomorrow someone comes out with e.g. a new binary vector format? Same thing. Apple may well be the real issue, but that just moves the question around (why doesn't Apple add JXL, well, probably code maintenance costs too).

It's good that they're adding AVIF, but it will presumably be the last image codec added for a very long time, especially as we face the question of who will bother researching new such codecs now. And IIUC the primary reason they like AVIF is that it's basically a video codec in image form, so again - the choice is being dominated by code maintenance costs.

So I think the wider point stands. There is already a big issue with Chrome adding features that Safari and Firefox never match. Now this. It suggests the architecture is not scalable. Why shouldn't we have as many image formats as people want to create? It's due to the desire by browser makers for everything to be in HTML5 rather than bringing back a plugin architecture.


> The blood is on Apple's hands here.

Come on. This is bullshit, and you know it. The only reason the industry at large is even supporting Google's codecs is because Google literally threatens hardware manufacturers into supporting its codecs in hardware, or else: https://www.protocol.com/bulletins/av1-android-14-requiremen... and https://www.protocol.com/youtube-tv-roku-issues

Google can't really strongarm Apple into supporting anything. And Apple either sets the trends or follows the industry. Stop pretending Apple has anything to do with JPEG XL's imagined or real failures.

In this particular case blood is squarely on Google's hands. And it's all for the same reasons that you accuse Apple of: https://news.ycombinator.com/item?id=33937745


> So what's the distinction between JPEG XL and AVIF? On a tecnical level, it's hard to find a way that AVIF is preferrable.

AVIF support comes nearly for free with AV1 already supported for video.


The life span of image formats is 5x the life span of video formats. With AVIF we will be stuck with loads of unnecessary complexity.

Look at WebP. VP8 is already being phased out, and WebP is only emerging now.


Seems like they backtracked on the issue just an hour ago:

https://chromium-review.googlesource.com/c/chromium/src/+/40...


No. It's some random person sending a pull request, which has not been accepted. Looking at their history [0], the account was created two weeks ago and the only contributions were attempts to revert earlier JPEG XL removal commits. Those requests were not accepted.

[0] https://chromium-review.googlesource.com/q/owner:ayz.out%254...


oh, okay. thanks for clarifying!


FFS :/


Related:

Chrome team released the data for the decision to remove JPEG XL support - https://news.ycombinator.com/item?id=33866388 - Dec 2022 (1 comment)

Revert “flag_descriptions: Add note about JPEG XL removal” - https://news.ycombinator.com/item?id=33803941 - Nov 2022 (17 comments)

What Is JPEG XL? - https://news.ycombinator.com/item?id=33646153 - Nov 2022 (7 comments)

Chrome Responds "No" to JPEG XL - https://news.ycombinator.com/item?id=33563378 - Nov 2022 (55 comments)

The case for JPEG XL - https://news.ycombinator.com/item?id=33442281 - Nov 2022 (209 comments)

Removing the JPEG XL code and flag from Chromium - https://news.ycombinator.com/item?id=33412340 - Oct 2022 (42 comments)

Chrome drops JPEG XL, “not enough interest” - https://news.ycombinator.com/item?id=33404840 - Oct 2022 (4 comments)

Google set to deprecate JPEG XL support in Chrome 110 - https://news.ycombinator.com/item?id=33399940 - Oct 2022 (93 comments)

Google Chrome Is Already Preparing to Deprecate JPEG-XL - https://news.ycombinator.com/item?id=33383880 - Oct 2022 (20 comments)

What’s the best lossless image format? - https://news.ycombinator.com/item?id=31657006 - June 2022 (164 comments)

FFmpeg now supports JPEG XL - https://news.ycombinator.com/item?id=31177098 - April 2022 (91 comments)

The JPEG XL standard has now been formally approved - https://news.ycombinator.com/item?id=29598090 - Dec 2021 (17 comments)

Using Saliency in progressive JPEG XL images - https://news.ycombinator.com/item?id=28468284 - Sept 2021 (44 comments)

JPEG XL - https://news.ycombinator.com/item?id=27577328 - June 2021 (234 comments)

JPEG XL would be Turing-complete without the 1024×1024 pixel limitation - https://news.ycombinator.com/item?id=27559748 - June 2021 (36 comments)

JPEG XL - https://news.ycombinator.com/item?id=26186707 - Feb 2021 (1 comment)

Overview of JPEG XL - https://news.ycombinator.com/item?id=25560683 - Dec 2020 (1 comment)

Brunsli: Practical JPEG repacker (now part of JPEG XL) - https://news.ycombinator.com/item?id=22456764 - March 2020 (72 comments)

JPEG XL: Next-Generation of Image Format for the Internet [video] - https://news.ycombinator.com/item?id=21612708 - Nov 2019 (3 comments)

JPEG XL Reaches Committee Draft - https://news.ycombinator.com/item?id=20603501 - Aug 2019 (2 comments)

JPEG XL is coming to store our photos at 1/3 size - https://news.ycombinator.com/item?id=19816485 - May 2019 (2 comments)

JPEG XL could let you pack twice as many photos into your phone - https://news.ycombinator.com/item?id=17356913 - June 2018 (57 comments)

JPEG XL could let you pack twice as many photos into your phone - https://news.ycombinator.com/item?id=17345470 - June 2018 (6 comments)


Competition with webp. Elephant. Room.


De most advance image compression technology that humanity managed to master out, and yet they removing it entirely... What kinda conspiracy hafta be behind of all of that?


did you tell chatgpt to act like jar jar binks or what?


Daily reminder that Chrome (and Chromium) is not unlike other Google projects: users are its cattle, not its clients. It’s naive to assume good intentions here.


A very good format killed by its license


Are you thinking of JPEG 2000? Different thing.


JPEG XL is royalty-free


I wanted to post a JPEG XL pic of some Sony Betamax tapes, but I couldn't find anything to support encoding it.


Some options: ffmpeg, GIMP, Krita, Photoshop (if you enable it in the settings), and the reference implementation cjxl, which you probably already have installed if you use Linux (because Qt and GTK depend on it).


The mention of Betamax probably meant that you replied to a joke: either about the fact that Betamax is just dead, or that technically superior options don't always win the market.


Being a joke doesn't make a comment immune to having wrong parts. Those obvious jokes are not improved by saying you can't find an encoder unless it's actually true.

Can you explain how that is clearly part of the joke? Otherwise the correction makes plenty of sense. Despite the comment being a joke.


If you don't understand it, you could always try just being quiet until you figure it out.


Yeah that's how conversations work. Good second joke.


You'd think more people would have got that, wouldn't you?



