When the Google JXL controversy first went down, I found that Google's commit rejecting JXL was authored by someone with AOMedia contributions, and that the manager who signed off and commented on it had some interview about the benefits of AV1.
The links are buried somewhere on Phoronix, I am looking... But what I am saying is Google's rejection of JXL seems to be as bad as it looks.
Given Google's and Chrome's involvement with AOMedia, I think it's pretty natural that anyone focused on image/video codecs in Chrome would have some sort of (distant or direct) connection to AOMedia.
If AOMedia were profit-driven in any way (like MPEG LA sort of is via patent pools) it'd look worse, but in this case I think the pool of people working on codec support in a particular browser just isn't that large, so the overlap is to be expected.
The commit to add the flag saying JXL was being removed soon was reviewed and approved by James Zern, who also created and authored the commit that actually ripped out the JXL code from Chromium. Zern is one of the co-authors of WebP and is the primary contributor to libwebp.
I see this comment here[0] from one of the developers of AV1/AVIF[1], but it is important to note that nowhere is it mentioned that he was the one who made the decision to reject JXL.
From the browser makers' point of view there's quite a bit of risk in introducing a new image format. libjxl is written in C++, so it will undoubtedly be full of undiscovered security issues. I'm sure that someone will write a decoder in a safer language, but that work still needs to be done and/or finished, and then integrated with the browser. At the same time, to 5 significant figures, probably 0% of websites host .jxl files. So at the start it's all downside and almost no upside.
(Chicken-and-egg problem here, of course: no one will create the websites until there is wide browser support.)
> If having a high quality Rust decoder implementation would arise as the only gating factor for choosing JPEG XL into interop 2024, the JPEG XL developer team in Google Research can deliver such by the end of Q2 2024
> We have tested conformance of the jxl-oxide decoder (which is implemented in Rust) and it is a fully conforming alternative implementation of JPEG XL. It correctly decodes all conformance test bitstreams and passes the conformance thresholds described in ISO/IEC 18181-3.
Unfortunately a Rust implementation doesn't solve everything that could go wrong in a browser. You need to think about, among other things: the total memory an image could allocate, the safety of network references (if the format allows them, like SVG or XML), any kind of unbounded processing or memory usage caused by the image (such as a "zip bomb"), and what could possibly go wrong for every corner case in the standard. The Wikipedia page says that JPEG XL supports up to 1-terapixel images, which is unlikely to be a good idea for a browser even if it's handled in a memory-safe way.
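To make the memory concern concrete, here's a minimal sketch of the kind of pre-decode guard a browser effectively needs: read the declared dimensions from the header and refuse anything whose pixel count implies an unreasonable buffer, before a single pixel is decoded. (Python, with Pillow as a stand-in decoder since it doesn't handle JXL without a plugin; the limits are arbitrary illustrative values.)

```python
# Minimal sketch of a pre-decode size guard. Pillow is only a stand-in decoder
# here (it needs a plugin for JXL); the limits are arbitrary illustrative values.
import io
from PIL import Image

MAX_PIXELS = 64_000_000   # ~8k x 8k, far below JPEG XL's 1-terapixel ceiling

def safe_decode(data: bytes) -> Image.Image:
    img = Image.open(io.BytesIO(data))   # lazy: reads the header, not the pixels
    width, height = img.size
    if width * height > MAX_PIXELS:
        raise ValueError(f"refusing to decode a {width}x{height} image")
    img.load()                           # only now decode the full pixel data
    return img
```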
A while back I fuzz tested qemu's handling of various different disk image formats (I know, a different type of "image", but bear with me!) I found many cases where qemu could consume huge amounts of memory or CPU time on some inputs. Often the inputs were quite small too, allowing nasty amplification attacks. As a result of this, standard advice for clouds that allow you to upload untrusted images is to decode in a separate process. That process is protected with ulimits, so it will die rather than trying to allocate all the memory in the machine or consume huge amounts of CPU.
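For illustration, a rough sketch of that separate-process pattern: the child installs its own address-space and CPU rlimits before exec'ing the decoder, so a hostile input kills the child rather than the host. (Python, POSIX-only; the limit values are placeholders, and djxl is just one example of a decoder CLI.)

```python
# Rough sketch of decoding untrusted input in a resource-limited child process.
# POSIX-only; the limit values are illustrative, and djxl is just an example CLI.
import resource
import subprocess

def _apply_limits():
    # Runs in the child between fork and exec: cap memory and CPU time so a
    # hostile input kills the child instead of exhausting the host.
    resource.setrlimit(resource.RLIMIT_AS, (512 * 1024 * 1024, 512 * 1024 * 1024))
    resource.setrlimit(resource.RLIMIT_CPU, (10, 10))

def decode_untrusted(in_path: str, out_path: str) -> None:
    subprocess.run(
        ["djxl", in_path, out_path],   # libjxl's decoder; any decoder CLI works here
        check=True,                    # raises if the child is killed by the limits
        preexec_fn=_apply_limits,
        timeout=30,                    # wall-clock backstop on top of the CPU limit
    )
```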
How are they not irrelevant? This is a cyclical problem browsers and OSes have dealt with many times before, and JPEGXL will hardly be the last time. It's a fundamentally challenging situation that applies to the newest image codec as much as it does to old ZIP files or hostile PDFs.
There will always be some new format with some advantage or another, but safely parsing complex user-generated content just isn't trivial, so every one of these is both a cost-benefit analysis on its own merits and a chance to reflect on historical implementations, vulnerabilities, and lessons learned.
If the argument is between two new formats, how are old formats at all relevant? The issues you outlined are faced by both (or any) new format, so they're essentially moot in the context of this conversation.
Did I miss something? The title and article are mostly about JPEG XL. What's the "both" in this? JPEG XL is the newest and has poor support. AVIF is mentioned offhandedly in that article, but it's a little older and still doesn't have great support. WebP is even older and also has occasional issues.
The image formats past WebP offer very minor improvements but have big potential for new zero-days. I don't think it's wrong to play it safe and/or just not implement them.
Hopefully it isn't singled out, and any prospective support for a new image format gets the same scrutiny.
The question is different for any image format that is already supported, because removing it breaks the web to the extent the format is being used. That's really an argument to be particularly careful about adding support for a new format: once it is widely available for a while it is almost impossible to remove. This is a one-way decision (unless it's barely used, in which case there wasn't a good reason to add it in the first place).
It's definitely better than no memory safety, but not sufficient to deal with all the cases that a browser (or anything parsing untrusted data from the internet) needs to think about.
This only points to one thing: developers simply don't understand how politics works.
They keep harping on JXL's technical superiority (who disagrees, btw?) when at this point it is utterly clear that the choice to boot it from browsers has precisely nothing to do with technical concerns.
Google has been acting even stupider than usual lately, but snubbing JXL goes beyond stupidity - it's clearly malicious. It must be, because otherwise I can't even fathom what the rationale behind such a moronic decision could possibly be.
If you take into account that JXL was and is in large part an effort supported internally by some Google teams and fought against by another, a fairly Occam's-razor-like explanation is that someone at Google with influence is deeply butthurt because another team built a better toaster.
Edit: Nevermind, Mac & Safari support both formats now. Good to hear.
Original comment: Ironically, .JXL opens natively on the Mac, but can't open in any browser. It's the exact opposite of .WEBP which can't open on Mac but too many websites seem to use it. https://jpegxl.info/test-page/
> It's the exact opposite of .WEBP which can't open on Mac
I suppose it depends on what version of macOS you're running; on my Mac running Sonoma, WebP files open just like JPEGs, PNGs, etc. and have for the last 2 or 3 macOS versions.
Oh you're right, Safari does show it. If I go to the URL directly it downloads the file, which is why I assumed it couldn't view it. But it does show on the page https://jpegxl.info/test-page/
Safari was an early adopter of JPEG XL. In the past couple years, actually, the team at Apple responsible for Safari has been making inroads on features and spec work. Jen Simmons especially has been astounding, particularly with her engagement with the community.
It is difficult to understand the benefits of the gainmap approach over HDR-first with high-quality local tone mapping, especially when there is a modern local tone mapping algorithm with an OSS implementation that runs in real time.
Some industry leads believe that tone mapping is part of artistic creativity and belongs to the photographer. I don't share this view; I'm looking at it from a purely technical and philosophical standpoint. I think we should have an HDR-first world, with SDR as a temporary fallback.
> "But instead this was just another development thread Google single-handedly stopped out of nothing but ego?"
There's a reasonable cost/benefit argument against standardizing JPEG XL in browsers. You don't have to agree with it, but JPEG XL proponents shouldn't just ignore it.
The argument is: (1) the cost is large -- implementation and maintenance of a complex image codec takes time, and image codecs are high-risk from a security perspective. (2) the benefit is relatively small -- it needs to provide a clear advantage over existing alternatives like jpg, png, webp, avif in some significant general use cases.
Now, you don't have to agree with that argument -- e.g. you can argue the cost isn't that high, or that there are valuable advantages to jxl for significant use cases that aren't covered by existing alternatives.
But you do need to engage that argument.
Otherwise what else do you have? Popular demand isn't going to work, because you're in a chicken-and-egg situation. I suppose you can try to bribe and/or bully key decision makers for all the major browsers, though I hope that wouldn't work.
I'm not sure how the benefit could be considered small; being able to losslessly recompress existing JPEGs alone is a massive benefit. In the testing I do every now and then, which consists of ripping thousands of images from various websites (image boards, scraping from sites I visit), I regularly get 20-40% file size savings when transcoding JPEG to JXL. These aren't small 256x256 icons or cherry-picked sets of 25 images that people love to test with; it's about as real-world a web test as one could possibly get.
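For anyone who wants to reproduce that kind of measurement, here is a quick sketch that transcodes a directory of JPEGs and reports the overall saving. It assumes cjxl from libjxl is on PATH and is recent enough to recompress JPEG input losslessly by default; the directory name is hypothetical.

```python
# Quick sketch: losslessly transcode every JPEG in a directory to JXL and
# report the total size saving. Assumes cjxl (from libjxl) is on PATH and
# that it is recent enough to recompress JPEG input losslessly by default.
import subprocess
from pathlib import Path

def measure_savings(src_dir: str) -> None:
    before = after = 0
    for jpg in sorted(Path(src_dir).glob("*.jpg")):
        jxl = jpg.with_suffix(".jxl")
        subprocess.run(["cjxl", str(jpg), str(jxl)],
                       check=True, capture_output=True)
        before += jpg.stat().st_size
        after += jxl.stat().st_size
    if before == 0:
        print("no JPEGs found")
        return
    print(f"{before} -> {after} bytes ({100 * (1 - after / before):.1f}% smaller)")

measure_savings("scraped_images")   # hypothetical directory of scraped JPEGs
```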
Completely ignoring the potential of replacing PNG or WEBP, and completely ignoring actually competing against AVIF: the benefit of JXL when it comes to losslessly saving space on pre-existing images is so massively significant that it's hard to believe this single feature alone doesn't meet the bar of worth.
> the benefit of JXL when it comes to losslessly saving space on pre-existing images is so massively significant that it's hard to believe this single feature alone doesn't meet the bar of worth
The old Google that cared deeply about the web would have been all over this. This current regime--not so much.
It's probably no coincidence that many of the folks who were huge advocates for the web are no longer there.
> The argument is: (1) the cost is large -- implementation and maintenance of a complex image codec takes time, and image codecs are high-risk from a security perspective.
If you look at the history of Google employees creating the basis for JPEG XL [1], it being included in beta builds of Chrome, and then being removed "for reasons", it's pretty obvious it wasn't pulled for technical or security reasons.
Obviously Apple didn't think there were significant security and implementation issues preventing them from enabling JPEG XL on over 2 billion devices.
The Chrome team has proposed a number of web features and APIs that Apple, Mozilla and sometimes Microsoft don't want to implement due to security and privacy reasons. Usually that doesn't stop Chrome from going ahead and shipping them anyway.
> (2) the benefit is relatively small -- it needs to provide a clear advantage over existing alternatives like jpg, png, webp, avif in some significant general use cases.
JPEG XL does provide advantages over existing alternatives—The Case for JPEG XL [2]:
In the past, new image formats have been introduced that brought
improvements in some areas while also introducing regressions in
others. For example, PNG was a great improvement over GIF,
except that it did not support animation. WebP brought
compression improvements over JPEG in the low to medium fidelity
range but at the cost of losing progressive decoding and
high-fidelity 4:4:4 encoding. AVIF improved compression further,
but at the cost of both progressive decoding and deployable
encoders.
We looked at six aspects of JPEG XL where it brings significant
benefits over existing image formats:
* Lossless JPEG recompression (20% on average)
* Progressive decoding
* Lossless compression performance
* Lossy compression performance
* Deployable encoder
* Works across the workflow
I don't have a strong opinion either way, but I'll play devil's advocate here...
> Lossless JPEG recompression (20% on average)
Lossless JPEG recompression isn't that valuable because it's a "tweener" solution. If you mainly care about image size, you can live with some loss and recompress JPEGs using existing formats. If you care about both size and quality, you can recompress from high-quality sources using existing formats. Lossless JPEG recompression sits somewhere in the middle: you care enough about size to go to the trouble of recompressing, and enough about quality not to want any further loss, but not enough about either to keep and re-encode from high-quality sources. So it's not nothing, but it's not great either.
> Progressive decoding
A solution to a vanishing edge case.
> Lossless compression performance
Explains why you might want to use jxl in your workflow but that's not a browser concern.
> Lossy compression performance
This sounds good, but is it enough better over existing formats to justify a new one in the browser? I don't think it's clear cut.
> Deployable encoder
Obviously, existing formats have deployed encoders.
> Works across the workflow
Not a browser concern. Note that even if you use jxl across your entire workflow, you're still typically going to have a publishing step for images where you find and use a level of compression/quality appropriate for your project. It doesn't really make any particular difference whether the image format changes at this step or not.
The big benefit of progressive decoding is that one high-resolution file supports responsive apps, which can fetch smaller images just by downloading part of the full file.
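Roughly, the fetch side of that looks like this: a sketch using a plain HTTP Range request for the first part of the file. It assumes the server honours byte ranges and that the client has a progressive-capable decoder that can render a preview from the truncated prefix, which is exactly the property being discussed. The URL is a placeholder.

```python
# Sketch: fetch only a prefix of a progressively encoded image via an HTTP
# Range request. Assumes the server honours byte ranges and that the client's
# decoder can render a lower-detail preview from the truncated bitstream.
import urllib.request

def fetch_prefix(url: str, max_bytes: int = 64 * 1024) -> bytes:
    req = urllib.request.Request(url, headers={"Range": f"bytes=0-{max_bytes - 1}"})
    with urllib.request.urlopen(req) as resp:
        return resp.read()

preview_bytes = fetch_prefix("https://example.com/photo.jxl")  # hypothetical URL
```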
"responsive app can fetch smaller images" is not really exclusive to jxl though.
For any format you can store an image at multiple resolutions/quality-levels and a responsive app can download the one with the size/quality it wants.
jxl probably saves an incremental amount of storage, which is nice, but storage is not usually the dominant cost of anything. So this is still an edge case.
> Lossless JPEG recompression sits somewhere in the middle: you care enough about size to go to the trouble of recompressing, and enough about quality not to want any further loss, but not enough about either to keep and re-encode from high-quality sources.
You’re assuming that the high-quality original is still available. If the JPEG is all you have, then losslessly recompressing it gives you the smallest file at the highest quality you can get.
If you don't keep track of the high-quality originals, how much do you really care about having the highest quality?
That's what I mean by "tweener" solution. You care a little about quality because you don't want to lose any more that you already have lost in your jpg, but not so much that you're keeping track of the high-quality originals. It's not nothing, but it's also not a big deal.
You have to be careful to separate your own interpretation from what the content actually claims when you refer to it directly like that. Nowhere does the page say "antiquated"; that's just one particularly strong reading of "becoming less common". It'll probably be antiquated eventually, but it certainly isn't yet - it's still very popular and widely accepted.
This isn’t some conspiracy, it’s about money. JPEG XL is likely patent-encumbered, and including it may require paying licensing fees. The companies involved can’t admit that, because if they did, they’d be willfully infringing if they do end up including it at some point…
What makes it seem probable that it is patent-encumbered? Is there something specific I can read about or is it just the track record of previous standards (starting with arithmetic encoding in the first JPEG)?
HEIC is barely used and seems very cherry-picked to find something that IS still royalty encumbered.
>Pretty much all modern video/audio/image codecs are
The exact opposite is true. The most popular modern codecs are almost all royalty-free. WebP, AVIF, JXL are all royalty-free. VP9/AV1 are royalty-free. Opus is royalty-free.
I'm not sure why you didn't bother looking this up before commenting but JPEG XL is royalty-free and open source. There were some concerns raised well over a year ago about some specific subset of JXL's compression and they were completely settled and it's a non-issue. Google's decisions have nothing to do with paying royalties or licensing fees.