Still no love for JPEG XL: Browser maker love-in snubs next-gen image format (theregister.com)
99 points by DaveFlater 9 months ago | 76 comments



When the Google JXL controversy first went down, I found that Google's commit rejecting JXL was authored by someone with AOMedia contributions, and that the manager who signed off and commented on it had given an interview about the benefits of AV1.

The links are buried somewhere on Phoronix; I am still looking. But what I am saying is that Google's rejection of JXL seems to be as bad as it looks.


Given Google's and Chrome's involvement with AOMedia, I think it's pretty natural that anyone focused on image/video codecs in Chrome would have some sort of (distant or direct) connection to AOMedia.

If AOMedia were profit-driven in any way (like MPEG LA sort of is via patent pools) it would look worse, but in this case I just think the pool of people working on codec support in a particular browser isn't that large, so the overlap is to be expected.


The commit adding the flag that said JXL would be removed soon was reviewed and approved by James Zern, who also authored the commit that actually ripped the JXL code out of Chromium. Zern is one of the co-authors of WebP and is the primary contributor to libwebp.


Hmm, that wasn't the name I remembered.

I hate to claim something without a citation, but for the life of me I can't find my Phoronix comment... Only my other comments referencing it.


I see this comment here [0] from one of the developers of AV1/AVIF [1], but it is important to note that nowhere is it mentioned that he was the one who made the decision to reject JXL.

[0] https://groups.google.com/a/chromium.org/g/blink-dev/c/WjCKc...

[1] https://research.google/people/james-bankoski/


From the browser makers' point of view there's quite a bit of risk in introducing a new image format. libjxl is written in C++ so undoubtedly will be full of undiscovered security issues. I'm sure that someone will write a decoder in a safer language, but that work still needs to be done and/or finished, and then integrated with the browser. At the same time, to 5 significant figures, probably 0% of websites host .jxl files. So at the start it's all downside and almost no upside.

(There's a chicken-and-egg problem here, of course: no one will create the websites until there is wide browser support.)


> If having a high quality Rust decoder implementation would arise as the only gating factor for choosing JPEG XL into interop 2024, the JPEG XL developer team in Google Research can deliver such by the end of Q2 2024

> We have tested conformance of the jxl-oxide decoder (which is implemented in Rust) and it is a fully conforming alternative implementation of JPEG XL. It correctly decodes all conformance test bitstreams and passes the conformance thresholds described in ISO/IEC 18181-3.

https://github.com/web-platform-tests/interop/issues/430


https://github.com/niutech/jxl.js is a JavaScript polyfill, linked from the main page https://jpegxl.info/

There are other decoders written in a "safe language" (Rust) listed as well [0]. So no, there are multiple "safe" implementations.

[0] https://github.com/tirr-c/jxl-oxide


It takes some effort to implement, but Google has WUFFS for almost exactly this purpose:

https://github.com/google/wuffs


Yes this is pretty great actually.



Unfortunately a Rust implementation doesn't solve everything that could go wrong in a browser. You need to think about, amongst other things: total memory that an image could allocate, safety of network references (if the format allows them, like SVG or XML), any kind of unbounded processing or memory usage caused by the image (such as a "zip bomb"), and what could possibly go wrong for every corner case in the standard. The Wikipedia page says that JPEG XL supports up to 1 terapixel images (at 4 bytes per pixel that's roughly 4 TB of raw pixel data), which is unlikely to be a good idea for a browser even if it's handled in a memory-safe way.

A while back I fuzz tested qemu's handling of various disk image formats (I know, a different type of "image", but bear with me!). I found many cases where qemu could consume huge amounts of memory or CPU time on some inputs. Often the inputs were quite small too, allowing nasty amplification attacks. As a result of this, the standard advice for clouds that allow you to upload untrusted images is to decode in a separate process. That process is protected with ulimits, so it will die rather than trying to allocate all the memory in the machine or consume huge amounts of CPU.
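
A minimal sketch of that advice in Python, applied to image decoding rather than disk images (the decoder command, djxl here, and the specific limits are placeholder assumptions):

    import resource
    import subprocess

    def limit_resources():
        # Runs in the child just before exec: cap address space and CPU seconds.
        resource.setrlimit(resource.RLIMIT_AS, (512 * 1024 * 1024, 512 * 1024 * 1024))
        resource.setrlimit(resource.RLIMIT_CPU, (5, 5))

    def decode_untrusted(src, dst):
        # Decode in a separate process so a hostile input kills the child, not the host.
        proc = subprocess.run(
            ["djxl", src, dst],          # placeholder decoder command
            preexec_fn=limit_resources,  # POSIX only
            timeout=10,                  # wall-clock backstop
            capture_output=True,
        )
        return proc.returncode == 0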


Isn't that true for the other new formats they support too?

Why single out JPEG XL?


It is true about other formats, but those have been in browsers for a long time so by now we have patched most of the exploits.

WebP is one of the most recently added image formats, and it had zero day exploits as recently as 6 months ago.


> It is true about other formats, but those have been in browsers for a long time so by now we have patched most of the exploits.

The current debate is about JPEG XL vs AVIF. Advantages of old image formats are not relevant here.


How are they not relevant? This is a cyclical problem browsers and OSes have dealt with many times before, and JPEG XL will hardly be the last time. It's a fundamentally challenging situation that applies to the newest image codec as much as it does to old ZIP files or hostile PDFs.

There will always be some new format with some advantage or another, but safely parsing complex user-generated content just isn't trivial, so every one of these is both a cost-benefit analysis on its own merits and a chance to reflect on historical implementations, vulnerabilities, and lessons learned.


If the argument is between two new formats, how are old formats at all relevant? The issues you outlined are faced by both (or any) new formats, so they are essentially moot in the context of this conversation.


Did I miss something? The title and article are mostly about JPEG XL. What's the "both" in this? JPEG XL is the newest and has poor support. AVIF is mentioned offhandedly in that article, but it's a little older and still doesn't have great support. WebP is even older and also has occasional issues.

The image formats past WebP offer very minor improvements but have big potential for new zero-days. I don't think it's wrong to play it safe and/or just not implement them.


When deciding whether A is better than B, it is irrelevant to point out problems that apply to A and B equally.


> it had zero day exploits as recently as 6 months ago

That's not a measure of security.

How much malware exists for macOS compared to Windows? Does that mean macOS is safer?

You could just as easily argue the other way around: that WebP has more undiscovered exploits.


> Why single out JPEG XL?

Hopefully it isn't singled out, and any prospective support for a new image format gets the same scrutiny.

The question is different for any image format that is already supported, because removing it breaks the web to the extent the format is being used. That's really an argument to be particularly careful about adding support for a new format: once it is widely available for a while it is almost impossible to remove. This is a one-way decision (unless it's barely used, in which case there wasn't a good reason to add it in the first place).


Chrome already handles excessive CPU or memory use by a tab, and I very much doubt the format supports URLs (and it would be trivial to check).

Nothing you've said should be an issue. I expect really they just don't want "their" formats to be obsoleted.


A safe Rust implementation solves the biggest problems.


It's definitely better than no memory safety, but not sufficient to deal with all the cases that a browser (or anything parsing untrusted data from the internet) needs to think about.


> libjxl is written in C++ so undoubtedly will be full of undiscovered security issues.

There's the WebAssembly sandboxing trick (https://hacks.mozilla.org/2021/12/webassembly-and-back-again...) which might mitigate that, but an image decoder might fall into the "too performance-sensitive to accept the modest overhead incurred" case.


Aren't there C++ static code analysis tools at this point that might mitigate such risks?


To see how deep this madness goes:

https://github.com/web-platform-tests/interop/issues/430#iss...

And Microsoft seems to be interested and wants to integrate it into Windows: https://news.ycombinator.com/item?id=39163181


After Adobe and Apple, Samsung has also recently started supporting it (with the S24): https://news.ycombinator.com/item?id=39064820


Unfortunately that's just the DNG (raw image) format with JXL compression.


This only points to one thing: developers simply don't understand how politics work.

They keep harping on JXL's technical superiority (who disagrees, btw?) when at this point it is utterly clear that the choice to boot it from browsers has precisely nothing to do with technical concerns.


> This only points to one thing: developers simply don't understand how politics work.

No lies detected and so painfully obvious.


Google has been acting even stupider than usual lately, but snubbing JXL goes beyond stupidity - it's clearly malicious. It must be, because otherwise I really can't even fathom the rationale behind such a moronic decision.


> I really can't even fathom the rationale

If you take into account that JXL was and is in large part an effort supported internally by some Google teams and fought against by another, a fairly Occam's-razor-like explanation is that someone at Google with influence is deeply butthurt because another team built a better toaster.


Related discussion:

JPEG XL support has officially been removed from Chromium https://news.ycombinator.com/item?id=33933208 (292 points, 378 comments)


Edit: Nevermind, Mac & Safari support both formats now. Good to hear.

Original comment: Ironically, .JXL opens natively on the Mac, but can't open in any browser. It's the exact opposite of .WEBP which can't open on Mac but too many websites seem to use it. https://jpegxl.info/test-page/


> It's the exact opposite of .WEBP which can't open on Mac

I suppose it depends on what version of macOS you're running; on my Mac running Sonoma, WebP files open just like JPEGs, PNGs, etc. and have for the last 2 or 3 macOS versions.


Oh, well look at that, it works now. Good to hear.


> can't open in any browser.

Are you sure? When I click that link Firefox downloads the image file, which then opens correctly in Safari.


Oh you're right, Safari does show it. If I go to the URL directly it downloads the file, which is why I assumed it couldn't view it. But it does show on the page https://jpegxl.info/test-page/


Safari was an early adopter of JPEG XL. In the past couple years, actually, the team at Apple responsible for Safari has been making inroads on features and spec work. Jen Simmons especially has been astounding, particularly with her engagement with the community.

She was recently on the Syntax Podcast, and I thoroughly enjoyed the talk. Can catch a bit here: https://www.threads.net/@syntax_fm/post/C22hyslOABy/


That link works perfectly on any iOS browser


Safari (finally) supports WebP.


> Safari (finally) supports WebP.

Sure… if by "finally" you mean 1,175 days ago (November 16, 2020). That's how long WebP has been supported in Safari [1].

[1]: https://webkit.org/blog/11340/new-webkit-features-in-safari-...


JPEG XL would have been a much better choice for HDR photos on Android than the abomination that is UltraHDR.


It is difficult to understand the benefits of the gain-map approach over HDR-first plus high-quality local tone mapping, especially when there is a modern local tone mapping algorithm with an OSS implementation that runs in real time.

Some industry leads believe that tone mapping is part of artistic creativity and belongs to the photographer. I don't share this viewpoint, but I'm looking at this from a purely technical and philosophical standpoint. I think we should have an HDR-first world, with SDR as a temporary fallback.


> tone mapping is part of artistic creativity

And it's done incorrectly way too often.

(I am sitting on no high horse here, I'm guilty of it as well).


> "But instead this was just another development thread Google single-handedly stopped out of nothing but ego?"

There's a reasonable cost/benefit argument against standardizing JPEG XL in browsers. You don't have to agree with it, but JPEG XL proponents shouldn't just ignore it.

The argument is: (1) the cost is large -- implementation and maintenance of a complex image codec takes time, and image codecs are high-risk from a security perspective. (2) the benefit is relatively small -- it needs to provide a clear advantage over existing alternatives like jpg, png, webp, avif in some significant general use cases.

Now, you don't have to agree with that argument -- e.g. you can argue the cost isn't that high, or that there are valuable advantages to JXL for significant use cases that aren't covered by existing alternatives.

But you do need to engage that argument.

Otherwise what else do you have? Popular demand isn't going to work, because you're in a chicken-and-egg situation. I suppose you can try to bribe and/or bully key decision makers for all the major browsers, though I hope that wouldn't work.


I'm not sure how the benefit could be considered small; being able to perform lossless recompression of JPEGs alone is a massive benefit. In the testing I do every now and then, which consists of ripping thousands of images from various websites (image boards, scraping from sites I visit), I regularly get 20-40% file size savings when transcoding JPEG to JXL. These aren't some small 256x256 icons or cherry-picked image sets of 25 pictures people love to test. This is about as real-world a test as one could possibly get for the web.

Completely ignoring the potential to replace PNG or WebP, and completely ignoring actually competing against AVIF: the benefits of JXL when it comes to losslessly saving space for pre-existing images are so significant that it's hard to believe this single feature alone doesn't meet the bar of worth.
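
For what it's worth, the kind of measurement described above is easy to reproduce with libjxl's cjxl tool. A rough sketch (the tool is assumed to be on PATH, the directory name is made up, and by default cjxl stores JPEG input losslessly so the original file can be reconstructed bit-exactly):

    import pathlib
    import subprocess

    src_dir = pathlib.Path("jpegs")          # hypothetical folder of scraped JPEGs
    total_before = total_after = 0

    for jpg in src_dir.glob("*.jpg"):
        jxl = jpg.with_suffix(".jxl")
        # Default mode for JPEG input is lossless recompression of the JPEG data.
        subprocess.run(["cjxl", str(jpg), str(jxl)], check=True, capture_output=True)
        total_before += jpg.stat().st_size
        total_after += jxl.stat().st_size

    print(f"saved {100 * (1 - total_after / total_before):.1f}% across {total_before} bytes")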


> The benefits of JXL when it comes to losslessly saving space for pre-existing images are so significant that it's hard to believe this single feature alone doesn't meet the bar of worth.

The old Google that cared deeply about the web would have been all over this. This current regime--not so much.

It's probably no coincidence that many of the folks who were huge advocates for the web are no longer there.


> The argument is: (1) the cost is large -- implementation and maintenance of a complex image codec takes time, and image codecs are high-risk from a security perspective.

If you look at the history of Google employees creating the basis for JPEG XL [1], having it included in beta builds of Chrome, and then removing it "for reasons", it's pretty obvious it wasn't pulled for technical or security reasons.

Obviously Apple didn't think there were significant security and implementation issues preventing them from enabling JPEG XL on over 2 billion devices.

The Chrome team has proposed a number of web features and APIs that Apple, Mozilla and sometimes Microsoft don't want to implement due to security and privacy reasons. Usually that doesn't stop Chrome from going ahead and shipping them anyway.

> (2) the benefit is relatively small -- it needs to provide a clear advantage over existing alternatives like jpg, png, webp, avif in some significant general use cases.

JPEG XL does provide advantages over existing alternatives—The Case for JPEG XL [2]:

    In the past, new image formats have been introduced that brought
    improvements in some areas while also introducing regressions in
    others. For example, PNG was a great improvement over GIF,
    except that it did not support animation. WebP brought
    compression improvements over JPEG in the low to medium fidelity
    range but at the cost of losing progressive decoding and
    high-fidelity 4:4:4 encoding. AVIF improved compression further,
    but at the cost of both progressive decoding and deployable
    encoders.

    We looked at six aspects of JPEG XL where it brings significant
    benefits over existing image formats:

    * Lossless JPEG recompression (20% on average)

    * Progressive decoding

    * Lossless compression performance

    * Lossy compression performance

    * Deployable encoder

    * Works across the workflow

[1]: https://github.com/google/pik

[2]: https://cloudinary.com/blog/the-case-for-jpeg-xl


I don't have a strong opinion either way, but I'll play devil's advocate here...

> Lossless JPEG recompression (20% on average)

Lossless JPEG recompression isn't that valuable because it's a "tweener" solution. If you mainly just care about image size, you can live with some loss and can recompress jpegs using existing formats. Or if you care about size and quality, you can recompress from high-quality sources using existing formats. Lossless JPEG recompression kind of fits in the middle somewhere... you care enough about size to go through the trouble of recompressing, but you don't care so much that you will use high-quality sources; and you care about quality enough that you don't want to lose quality when recompressing, but again not so much that you will use high-quality sources. So it's not nothing, but not great either.

> Progressive decoding

A solution to a vanishing edge case.

> Lossless compression performance

Explains why you might want to use jxl in your workflow, but that's not a browser concern.

> Lossy compression performance

This sounds good, but is it enough better over existing formats to justify a new one in the browser? I don't think it's clear cut.

> Deployable encoder

Obviously, existing formats have deployed encoders.

> Works across the workflow

Not a browser concern. Note that even if you use jxl across your entire workflow, you're still typically going to have a publishing step for images where you find and use a level of compression/quality appropriate for your project. There's not really any particular difference whether the general image format changes at this step or not.


The big benefit of progressive decoding is that one high-resolution file supports responsive apps that can fetch smaller images just by downloading part of the full file.
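
For illustration, "downloading part of the full file" is just an HTTP Range request. A sketch (the URL and the 64 KiB cutoff are made-up values; a real client would pick the prefix size based on the preview quality it needs):

    import urllib.request

    req = urllib.request.Request(
        "https://example.com/photo.jxl",      # hypothetical URL
        headers={"Range": "bytes=0-65535"},   # ask for the first 64 KiB only
    )
    with urllib.request.urlopen(req) as resp:
        prefix = resp.read()                  # 206 Partial Content if the server honours Range
        print(resp.status, len(prefix))
    # A progressive decoder can render a lower-resolution preview from this prefix;
    # fetching the rest of the file later refines it to full quality.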


"responsive app can fetch smaller images" is not really exclusive to jxl though.

For any format you can store an image at multiple resolutions/quality-levels and a responsive app can download the one with the size/quality it wants.

jxl probably saves an incremental amount of storage, which is nice, but storage is not usually the dominant cost of anything. So this is still an edge case.
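
As a sketch of that conventional multi-resolution approach (using Pillow; the widths, filenames and JPEG quality are arbitrary assumptions):

    from PIL import Image

    widths = [320, 640, 1280, 2560]
    with Image.open("photo.jpg") as im:
        for w in widths:
            h = round(im.height * w / im.width)  # keep the aspect ratio
            resized = im.resize((w, h), Image.LANCZOS)
            resized.save(f"photo-{w}w.jpg", quality=85)
    # A responsive client then requests whichever size fits its layout.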


That's only true because you took my quote completely out of context.

The point is not needing to store multiple copies of an image. You can simplify the back-end if you serve the same file to all clients.


> Lossless JPEG recompression kind of fits in the middle somewhere... you care enough about size to go through the trouble of recompressing, but you don't care so much that you will use high-quality sources; and you care about quality enough that you don't want to lose quality when recompressing, but again not so much that you will use high-quality sources.

You’re assuming that the high-quality original is still available. If the JPEG is all you have, then losslessly recompressing it is the smallest file of the highest quality you can get.


If you don't keep track of the high-quality originals, how much do you really care about having the highest quality?

That's what I mean by a "tweener" solution. You care a little about quality because you don't want to lose any more than you already have lost in your jpg, but not so much that you're keeping track of the high-quality originals. It's not nothing, but it's also not a big deal.


"A solution to a vanishing edge case" is a wild-ass statement to make about progressive decoding.


There is popular demand (including from Adobe https://github.com/web-platform-tests/interop/issues/430#iss..., https://crbug.com/40168998#comment62), which is arguably evidence against (2).

(Disclaimer: I’m a JPEG XL contributor.)


I wish we'd see more passive-aggressive activism; for example, HN switching its logo (y18.svg) for y18.jxl.


I wonder how hard it would be for smartphones to output JXL instead of JPEG.

Their JPEG encoders often barely compress anything.


The JPEG XL team at Google has open-sourced a new JPEG encoder, too. It is called jpegli. It compresses very well.
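
If you want to try it, libjxl ships a command-line encoder for it. A sketch assuming the cjpegli tool is built and on PATH (the file names are placeholders):

    import pathlib
    import subprocess

    src, out = "photo.png", "photo-jpegli.jpg"
    # Re-encode with default settings; the output is an ordinary JPEG any viewer can open.
    subprocess.run(["cjpegli", src, out], check=True)
    print(pathlib.Path(out).stat().st_size, "bytes")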


> The Firefox maker said it's neutral with regard to the technology, citing cost

Since they pay their CEO $7MM per year, this is a profoundly infuriating argument.


With the CEO getting that much, they do have to be careful about other costs.


Technically Google is the one paying the Mozilla CEO's $7M salary.

Clearly a good investment: Firefox's market share is lower and lower, and has been steadily declining since Google became Mozilla's main income source.


Why 2 Ms in $7MM?

2 million million dollars? That's 10^12 dollars.


It's the accepted financial notation; perhaps you're confusing it with 7M$, which would be an SI-like encoding.

https://corporatefinanceinstitute.com/resources/fixed-income...


That page says MM is antiquated, falling out of favor, and M is the more modern way to represent a million.


You have to be careful to separate your interpretation of what something is saying from the claims of the content when referring to it directly like that. Nowhere does the page say "antiquated"; that's just one particularly strong interpretation of "becoming less common". It'll probably eventually be antiquated, but it certainly isn't yet - it's still very popular and accepted.


Because finance has weird conventions about numbers, and $7M could be misread as $7,000 in the wrong context.


Roman numerals: 1000 × 1000.


Then it would be 2000


This isn't some conspiracy, it's about money. JPEG XL is likely patent-encumbered, and thus including it may require paying licensing fees. The companies involved can't admit that, because if they did, they'd be willfully infringing if they do end up including it at some point…


What makes it seem probable that it is patent-encumbered? Is there something specific I can read about or is it just the track record of previous standards (starting with arithmetic encoding in the first JPEG)?


Pretty much all modern video/audio/image codecs are; look at HEIC, for example.


HEIC is barely used and seems like a very cherry-picked example of something that IS still royalty-encumbered.

>Pretty much all modern video/audio/image codecs are

The exact opposite is true. The most popular modern codecs are almost all royalty-free. WebP, AVIF, JXL are all royalty-free. VP9/AV1 are royalty-free. Opus is royalty-free.


I'm not sure why you didn't bother looking this up before commenting, but JPEG XL is royalty-free and open source. There were some concerns raised well over a year ago about a specific subset of JXL's compression, and they were completely settled; it's a non-issue. Google's decisions have nothing to do with paying royalties or licensing fees.



