
If only Google could be convinced to adopt this marvelous codec... Not looking super positive at the moment:

https://issues.chromium.org/issues/40270698

https://bugs.chromium.org/p/chromium/issues/detail?id=145180...




It's so frustrating how the Chromium team is ending up as a gatekeeper of the Internet, picking and choosing what gets developed or not.

I recently came across another issue where the Chromium team wouldn't budge on their decision, despite pressure from the community and an RFC backing it up - in my case, custom headers in WebSocket handshakes, which are supported by other JavaScript runtimes like Node and Bun, but the Chromium maintainer just disagrees with it - https://github.com/whatwg/websockets/issues/16#issuecomment-...


> It's so frustrating how the chromium team is ending up as a gatekeeper of the Internet by pick and choosing what gets developed or not.

https://github.com/niutech/jxl.js is based on Chromium tech (Squoosh from GoogleChromeLabs) and provides an opportunity to use JXL with no practical way for Chromium folks to intervene.

Even if that's a suboptimal solution, JXL's benefits supposedly should outweigh the cost of integrating it, and yet I haven't seen actual JXL users flocking to it in droves.

So JXL might not be good support for your theory: even where people could, they still don't. Maybe the format isn't actually that important, and it's just a popular meme to rehash.


Why do you assume that the benefits would outweigh said costs? That's a weird burden to place on the format. Using JavaScript in the browser to decode it is a huge hurdle; I don't know of any format that ever got popular or got its initial usage from a similar approach. AVIF was added too, even though no one was using a JS library to decode it beforehand.

Fwiw, I agree that there's a weird narrative around JPEG XL. At the end of the day it's just a format, and I think it's not very good for lower-quality images, as shown by the linked article in the OP. AVIF looks better in that regard.

I think it would've made more sense than WebP though (which also doesn't look good at all when lossy), but that was like a decade ago and that ship has sailed. So AVIF fills a niche that WebP sucks at, while JPEG XL doesn't really do that. That alone is reason enough not to bother including it.


People don't use blurry low-quality images on the web. These low qualities don't matter outside of compression research.

The average/median quality of web images is between 85 and 90, depending on how you calculate it.

At those qualities, users' waiting time over an image format's lifetime is worth about 3 trillion USD. If we can reduce 20 % of it, we create wealth of 600 billion USD, distributed to the users. More savings come from data transfer costs.
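The 20 % claim above is just multiplication; a back-of-the-envelope sketch, where both input figures are the commenter's assumptions rather than measured data:

```python
# All figures are assumptions from the comment above, not measured data.
total_waiting_value_usd = 3_000_000_000_000  # assumed value of users' waiting time
reduction = 0.20                             # assumed savings from a better codec

savings_usd = total_waiting_value_usd * reduction
print(f"{savings_usd / 1e9:.0f} billion USD")  # 600 billion USD
```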


I use blurry lo-fi images sometimes, e.g. to reduce the server pain during a Mastodon preview stampede, and for hero images when Save-Data is set!
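Serving a lo-fi variant when the `Save-Data` client hint is present only takes a one-line header check; a minimal sketch, where the variant filenames are made up for illustration:

```python
def pick_image_variant(headers: dict) -> str:
    """Pick a hero image variant from lowercased request headers.

    Browsers that opt in send "Save-Data: on"; the filenames here
    are hypothetical.
    """
    if headers.get("save-data", "").strip().lower() == "on":
        return "hero-lowres.jpg"  # blurry lo-fi placeholder
    return "hero-full.jpg"

print(pick_image_variant({"save-data": "on"}))  # hero-lowres.jpg
print(pick_image_variant({}))                   # hero-full.jpg
```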


> Why do you assume that the benefits would outweigh said costs? That's a weird burden to set on the format.

I'm not assuming that there are those benefits, but that there are people who see them. Those who are _very_ vocal about browsers (and Chrome in particular) not supporting it seem to think so, or they wouldn't bother.

If I propose integrating good old Targa file support into Chrome, I'd also be asked about a cost/benefit analysis. And by building and using a polyfill to add that support, I show that I'm serious about Targa files, which gives credence to my cost/benefit analysis and also lets people play around with the Targa format, hopefully making it self-evident that the format is good, and from there that these benefits based on native support would be even better.

For JXL I see people talking the talk but, by and large, not walking the walk.


I see what you mean. Yeah, I think JPEG XL is the format that I've heard about the most but never really seen in the wild. It's a chicken-and-egg problem, but still, it's basically not used at all compared to the mindshare it seems to have in these discussions.


Question is for how long. Time to slam the hammer on them.


What hammer? You want the US president or the Supreme Court to compel Chrome developers to implement every image format in existence and every JS API proposed by anyone anywhere?

Unless it is some kind of anti-competitive behavior, like intentionally stifling the adoption of a standard that competes with their own proprietary, patent-encumbered implementation that they expect to collect royalties for (which doesn't seem to be the case), I don't see the problem.


Why not make a better product rather than slam some metaphorical hammer?


That's not how this works. Firefox is the closest we have, and realistically the closest we will get to a "better product" than Chromium for the foreseeable future, and it's clearly not enough.


The only hammer at all left is Safari, basically on iPhones only.

That hammer is very close to going away; if the EU does force Apple to really open the browsers on the iPhone, everything will be Chrome as far as the eye can see in short order. And then we fully enter the chromE6 phase.


And Firefox does not support the format. Mozilla is the same political company as everyone else.


Because "better" products don't magically win.


Where's Firefox's and Webkit's position on the proposal?


Safari/Webkit has added JPEG XL support already.

Firefox is "neutral", which I understand as meaning they'll do whatever Chrome does.

All the code has been written, patches to add JPEG XL support to Firefox and Chromium are available and some of the forks (Waterfox, Pale Moon, Thorium, Cromite) do have JPEG XL support.


I believe they were referring to that WebSocket issue, not JXL.


They didn't "lose interest"; their lawyers pulled the emergency brake. Blame patent holders, not Google. Like Microsoft: https://www.theregister.com/2022/02/17/microsoft_ans_patent/. Microsoft could probably be convinced to be reasonable, but there may be a few others. Google actually also holds some patents over this, but they've done the right thing and license those patents along with their implementation.

To fix this, you'd need to convince Google, and other large companies that would be exposed to lawsuits related to these patents (Apple, Adobe, etc.), that these patent holders are not going to insist on being compensated.

Other formats are less risky, especially the older ones. JPEG is fine because it's been out there for so long that any patents applicable to it have long expired. Same with GIF, which was once held up by patents. PNG is at this point also fine: if any patents applied at all, they will soon have expired, as the PNG standard dates back to 1997 and the work on it depended on research from the seventies and eighties.


There are no royalties to be paid on JPEG XL. Nobody but Cloudinary and Google is claiming to hold relevant patents, and Cloudinary and Google have provided a royalty free license. Of course the way the patent system works, anything less than 20 years old is theoretically risky. But so far, there is nobody claiming royalties need to be paid on JPEG XL, so it is similar to WebP in that regard.


"Patent issues" has become a (sometimes truthful) excuse for not doing something.

When the big boys want to do something, they find a way to get it done, patents or no, especially if there's only "fear of patents" - see Apple and the whole watch fiasco.


Patents were not the latest excuse I heard from Google. Their explanation was security concerns.


Do you have a link? Or was it a private communication?


> [...] other large companies that would be exposed to law suits related to these patents (Apple, Adobe, etc.) [...]

Adobe added JPEG XL support to their products, and it's also in the DNG specification. So that argument is pretty much dead, no?


Adobe also has an order of magnitude smaller install base than Chrome or Firefox, which makes patent fees much cheaper. And their software is actually paid for by users.


DNG Converter (which includes JPEG XL compression) is free. You can get it here: https://helpx.adobe.com/camera-raw/using/adobe-dng-converter...


Not that simple. Maybe they struck a deal with a few of the companies or they made a different risk calculation. And of course they have a pretty fierce patent portfolio themselves so there's the notion of them being able to retaliate in kind to some of these companies.


I don't think that's true (see my other comment for what the patent is really about), but even when it is, Adobe's adoption means that JPEG XL is worth the supposed "risk". And Google does ship a lot of technologies that are clearly patent-encumbered. If the patent is the main concern, they could have answered so because there are enough people wondering about the patent status, but the Chrome team's main reason against JPEG XL was quite different.


Adobe sells paid products and can carve out a license fee for that, like they do with all the other codecs and libraries they bundle. That's part of the price you are paying.

Harder to do for users of Chrome.


The same thing can be said with many patent-encumbered video codecs which Chrome does support nevertheless. That alone can't be a major deciding factor, especially given that the rate of JPEG XL adoption has been remarkably faster than any recent media format.


Is this not simply a risk vs reward calculation? Newer video codecs present a very notable bandwidth saving over old ones. JPEG XL presents minor benefits over WebP, AVIF, etc. So while the dangers are the same for both the calculation is different.


Video = billions in lower costs for YouTube.


You can get Adobe DNG Converter for free and use it to convert your raw files to DNG compressed with JPEG XL.

https://helpx.adobe.com/content/dam/help/en/camera-raw/digit...


The Microsoft patent doesn't apply to JXL, and in any case, Microsoft has literally already affirmed that they will not use it to go after any open codec.


How exactly is that done? I assume even an offhand comment by an official (like CEO, etc) that is not immediately walked back would at least protect people from damages associated with willful infringement.


That ANS patent supposedly relates to refining the coding tables based on the symbols being decoded.

That is slower for decoding, and JPEG XL doesn't do it, for decoding-speed reasons.

The specification doesn't allow it: all coding tables need to be in final form.


> their lawyers pulled the emergency brakes

Do you have source for that claim?


Probably this: https://www.theregister.com/2022/02/17/microsoft_ans_patent/

I think it would be much better for everyone involved, and for humanity, if Mr. Duda himself had gotten the patent in the first place instead of praying no one else would.


Duda published his ideas, that’s supposed to be it.


Prior art makes patents invalid anyway.


Absolutely.

And nothing advances your career quite like getting your employer into a multi-year legal battle and spending a few million on legal fees, to make some images 20% smaller and 100% less compatible.


Well, lots of things other than JXL use ANS. If someone starts trying to claim ANS, you'll have Apple, Disney, Facebook, and more, on your side :)


But that doesn't matter. If a patent is granted, choosing to infringe on it is risky, even if you believe you could make a solid argument that it's invalid given enough lawyer hours.


The Microsoft patent is for an "improvement" that I don't believe anyone is using, but Internet commentators seem to think it applies to ANS in general for some reason.

A few years earlier, Google was granted a patent for ANS in general, which made people very angry. Fortunately they never did anything with it.


I believe Google's patent application dealt with interleaving non-compressed and ANS data in a manner that made streaming coding easy and fast in software, not ANS in general. I didn't read it, but I discussed it briefly with a capable engineer who had.


If the patent doesn't apply to JXL then that's a different story, then it doesn't matter whether it's valid or not.

...

The fact that Google does have a patent which covers JXL is worrying though. So JXL is patent encumbered after all.


I misrecalled. While the Google patent is a lot more general than the Microsoft one, it doesn't apply to most uses of ANS.


I'm just inferring from the fact that MS got a patent and then this whole thing ground to a halt.


Not only do you have no source backing your claim, but there is a glaring counterexample: Chromium's experimental JPEG XL support carried an expiry milestone, which was delayed multiple times and last bumped in June 2022 [1], before the final removal in October, months after the patent was granted!

[1] https://issues.chromium.org/issues/40168998#comment52


In other words, there's no source.


>To fix this, you'd need to convince Google, and other large companies that would be exposed to law suits related to these patents (Apple, Adobe, etc.), that these patent holders are not going to insist on being compensated.

Apple has implemented JPEG XL support in macOS and iOS. Adobe has also implemented support for JPEG XL in their products.

Also, if patents were the reason Google removed JXL from Chrome, why would they make up technical reasons for doing so?

Please don't present unsourced conspiracy theories as if they were confirmed facts.


[flagged]


Mate, you're literally pulling something from your ass. Chrome engineers claim that they don't want JXL because it isn't good enough. Literally no one involved has said that it has anything to do with patents.


>There must be a more rational reason than that. I've not heard anything better than legal reasons. But do correct me if I'm wrong. I've worked in big companies, and patents can be a show stopper. Seems like a plausible theory (i.e. not a conspiracy theory)

In your first comment, you stated as a fact that "lawyers pulled the emergency brakes". Despite literally no one from Google ever saying this, and Google giving very different reasons for the removal.

And now you act as if something you made up in your mind is the default theory and the burden of proof is on the people disagreeing with you.


The people who look after Chrome's media decoding are an awkward bunch; they point-blank refuse to support <img src=*.mp4>, for example.


Seems entirely reasonable, considering that <img> is for images, whereas mp4 is for videos, no?


Doesn't make sense when they support GIF or animated WebP as images. Animated WebP in particular is just a purposely gimped WebM that should not exist at all and would not need to exist if we could use video files directly.


If you want a simple conspiracy theory, how about this:

The person responsible for AVIF works on Chrome, and is responsible for choosing which codecs Chrome ships with. He obviously prefers his AVIF to a different team's JPEG-XL.

It's a case of simple selfish bias.


Why not take Chrome's word for it:

---cut---

Helping the web to evolve is challenging, and it requires us to make difficult choices. We've also heard from our browser and device partners that every additional format adds costs (monetary or hardware), and we’re very much aware that these costs are borne by those outside of Google. When we evaluate new media formats, the first question we have to ask is whether the format works best for the web. With respect to new image formats such as JPEG XL, that means we have to look comprehensively at many factors: compression performance across a broad range of images; is the decoder fast, allowing for speedy rendering of smaller images; are there fast encoders, ideally with hardware support, that keep encoding costs reasonable for large users; can we optimize existing formats to meet any new use-cases, rather than adding support for an additional format; do other browsers and OSes support it?

After weighing the data, we’ve decided to stop Chrome’s JPEG XL experiment and remove the code associated with the experiment. [...]

From: https://groups.google.com/a/chromium.org/g/blink-dev/c/WjCKc...


I tried to make a bullet-point list of the individual concerns; the original statement is written in a style that is a bit confusing for a non-native speaker such as me.

* Chrome's browser partners say JPEG XL adds monetary or hardware costs.

* Chrome's device partners say JPEG XL adds monetary or hardware costs.

* Does JPEG XL work best for the web?

* What is JPEG XL compression performance across a broad range of images?

* Is the decoder fast?

* Does it render small images fast?

* Is encoding fast?

* Hardware support keeping encoding costs reasonable for large users.

* Do we need it at all or just optimize existing formats to meet new use-cases?

* Do other browsers and OSes support JPEG XL?

* Can it be done sufficiently well with WASM?


* [...] monetary or hardware costs.

We could perhaps create a GoFundMe page for making it cost neutral for Chrome's partners. Perhaps some industry partners would chime in.

* Does JPEG XL work best for the web?

Yes.

* What is JPEG XL compression performance across a broad range of images?

All of them. The more difficult an image is to compress, the better JPEG XL does. It is at its best on natural images with noisy textures.

* Is the decoder fast?

Yes. See blog post.

* Does it render small images fast?

Yes. I don't have a link, but I tried it.

* Is encoding fast?

Yes. See blog post.

* Hardware support keeping encoding costs reasonable for large users.

https://www.shikino.co.jp/eng/ is building it based on libjxl-tiny.

* Do we need it at all or just optimize existing formats to meet new use-cases?

Jpegli is great. JPEG XL allows for 35 % more on top of it. Compared to jpegli, it creates wealth of a few hundred billion USD in users' waiting time. So, it's a yes.

* Do other browsers and OSes support JPEG XL?

Possibly. iOS and Safari support it. DNG supports it. Windows and some Androids don't.

* Can it be done sufficiently well with WASM?

WASM creates additional complexity, adds to load times, and possibly to computation times too.

Some more work is needed before all of Chrome's questions can be answered.


Safari has supported JXL since version 17.


Mozilla effectively gave up on it before Google did.

https://bugzilla.mozilla.org/show_bug.cgi?id=1539075

It's a real shame, because this is one of those few areas where Firefox could have led the charge instead of following in Chrome's footsteps. I remember when they first added APNG support and it took Chrome years to catch up, but I guess those days are gone.

Oddly enough, Safari is the only major browser that currently supports it despite regularly falling behind on tons of other cutting-edge web standards.

https://caniuse.com/jpegxl


I followed the Mozilla/Firefox integration closely. I was able to observe enthusiasm from their junior to staff-level engineers (LinkedIn-assisted analysis of the related bugs ;-). However, an engineering director stepped in and locked the discussions because they were in the "no new information" stage; their position has remained neutral on JPEG XL, and the integration never progressed beyond the nightly builds.

Ten years ago Mozilla had the most prominent image and video compression effort, called Daala. They posted inspiring blog posts about their experiments. Some of their work was integrated with Cisco's Thor and On2's/Chrome's VP8/9/10, leading to AV1 and AVIF. Today, I believe, Mozilla has moved away from this research and the ex-Daala researchers have found new roles.


Daala's and Thor's features were supposed to be integrated into AV1, but in the end, they wanted to finish AV1 as fast as possible, so very little that wasn't in VP10 made it into AV1. I guess it will be in AV2, though.


> ... very little that wasn't in VP10 made it into AV1.

I am not sure I would say that is true.

The entire entropy coder, used by every tool, came from Daala (with changes in collaboration with others to reduce hardware complexity), as did some major tools like Chroma from Luma and the Constrained Directional Enhancement Filter (a merger of Daala's deringing and Thor's CLPF). There were also plenty of other improvements from the Daala team, such as structural things like pulling the entropy coder and other inter-frame state from reference frames instead of abstract "slots" like VP9 (important in real-time contexts where you can lose frames and not know what slots they would have updated) or better spatial prediction and coding for segment indices (important for block-level quantizer adjustments for better visual tuning). And that does not even touch on all of the contributions from other AOM members (scalable coding, the entire high-level syntax...).

Were there other things I wish we could have gotten in? Absolutely. But "done" is a feature.


Some "didn't make it in" things that looked promising were the perceptual vector quantization[1], and a butterfly transform that Monty was working on, IIRC as an occasional spectator to the process.

[1] https://jmvalin.ca/daala/pvq_demo/


Dropping PVQ was a hard choice. We did an initial integration into libaom, but due to substantial differences from the way that Daala was designed, the results were not outstanding [1]. Subsequent changes to the codebase made PVQ regress significantly from there, for reasons that were not entirely clear. When we sat down and detailed all of the work necessary for it to have a chance of being adopted, we concluded we would need to put the whole team on it for the entire remainder of the project. These were not straightforward engineering tasks, but open problems with no known solutions. Additional changes by other experiments getting adopted could have complicated the picture further. So we would have had to drop everything else, and the risk that something would not work out and PVQ would still not have gotten in was very high.

The primary benefit of PVQ is the side-information-free activity masking. That is the sort of thing that cannot be judged via PSNR and requires careful subjective testing with human viewers. Not something you want to be rushing at the last minute. After gauging the rest of AOM's enthusiasm for the work, we decided instead to improve the existing segmentation coding to make it easier for encoders to do visual tuning after standardization. That was a much simpler task with much less risk, and it was adopted relatively easily. I still think it was the right call.

[1] https://datatracker.ietf.org/doc/html/draft-cho-netvc-applyp...


I like to think that there might be an easy way to improve AV2 today — drop the whole keyframe coding and replace it with JPEG XL images as keyframes.


It feels like nowadays Mozilla is extremely shorthanded.

They probably gave up because they simply don’t have the money/resources to pursue this.


All those requests to revert the removal are funny: you want Chrome to re-add jxl behind a feature flag? Doesn't seem very useful.

Also, all those Chrome offshoots (Edge, Brave, Opera, etc) could easily add and enable it to distinguish themselves from Chrome ("faster page load", "less network use") and don't. Makes me wonder what's going on...


> you want Chrome to re-add jxl behind a feature flag? Doesn't seem very useful.

Chrome has a neat feature where some flags can be enabled by websites, so that websites can choose to cooperate in testing. They never did this for JXL, but if they re-added JXL behind a flag, they could do so but with such testing enabled. Then they could get real data from websites actually using it, without committing to supporting it if it isn't useful.

> Also, all those Chrome offshoots (Edge, Brave, Opera, etc) could easily add and enable it to distinguish themselves from Chrome ("faster page load", "less network use") and don't. Makes me wonder what's going on...

Edge doesn't use Chrome's own codec support. It uses Windows's media framework. JXL is being added to it next year.


> Edge doesn't use Chrome's own codec support. It uses Windows's media framework. JXL is being added to it next year.

Interesting!


Simply put these offshoots don't really seem to do browser code, and realize how expensive it would be for them to diverge at the core.


No, obviously to re-add jxl without a flag


"jxl without a flag" can't be re-added because that was never a thing.


It can, and that's why you didn't say "re-add jxl" but had to mention the flag. 'Re-add' has no flag implication; that pedantic constraint is something you've made up, and it's not what people want. Just read those linked issues.


It has a flag implication because jpeg-xl never came without being hidden behind a flag. Nothing was taken away from ordinary users at any point in time.

And I suppose the Chrome folks have the telemetry to know how many people set that damn flag.


> And I suppose the Chrome folks have the telemetry to know how many people set that damn flag.

How is that relevant? Flags are to allow testing, not to gauge interest from regular users.


>"But the plans were on display…”

> “On display? I eventually had to go down to the cellar to find them.”

> “That’s the display department.”

> “With a flashlight.”

> “Ah, well, the lights had probably gone.”

> “So had the stairs.”

> “But look, you found the notice, didn’t you?”

> “Yes,” said Arthur, “yes I did. It was on display in the bottom of a locked filing cabinet stuck in a disused lavatory with a sign on the door saying ‘Beware of the Leopard.’”


I guess you're referring to the idea that the flag made the previous implementation practically non-existent for users. And I agree!

But "implement something new!" is a very different demand from "you took that away from us, undo that!"


> No, obviously to re-add jxl without a flag

Is asking for the old thing to be re-added, but without the flag that sabotaged it. It is the same as "you took that away from us, undo that!" Removing a flag does not turn it into a magical, mystical new thing that has to be built from scratch. This is silly. The entire point of having flags is to provide a testing platform for code that may one day have the flag removed.


I suppose I'll trust the reality of what actual users are expressly asking for vs. your imagination that something different is implied


Actual users, perhaps. Or maybe concern trolls paid by a patent holder who's trying to prepare the ground for a patent-based extortion scheme. Or maybe Jon Sneyers with an army of sock puppets. These "actual users" are just as real to me as Chrome's telemetry.

That said: these actual users didn't demonstrate any hacker spirit or interest in using JXL in situations where they could. Where's the widespread use of jxl.js (https://github.com/niutech/jxl.js) to demonstrate that there are actual users desperate for native codec support? (Aside: jxl.js is based on Squoosh, which is a product of GoogleChromeLabs.) If JXL is sooo important, surely people would use whatever workaround they can employ, no matter if that convinces the Chrome team or not, simply because they benefit from using it, no?
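For what it's worth, deciding whether a fetched resource should even be handed to a polyfill like jxl.js only takes a signature sniff. A minimal sketch of that check; the two signatures (bare codestream and ISOBMFF container) come from the JPEG XL spec (ISO/IEC 18181):

```python
# JPEG XL files start with one of two signatures:
# a bare codestream (0xFF 0x0A) or an ISOBMFF container ("JXL " box).
JXL_CODESTREAM = b"\xff\x0a"
JXL_CONTAINER = b"\x00\x00\x00\x0cJXL \x0d\x0a\x87\x0a"

def looks_like_jxl(head: bytes) -> bool:
    """Return True if the leading bytes match a JPEG XL signature."""
    return head.startswith(JXL_CODESTREAM) or head.startswith(JXL_CONTAINER)

print(looks_like_jxl(b"\xff\x0a" + b"\x00" * 16))  # True
print(looks_like_jxl(b"\x89PNG\r\n\x1a\n"))        # False
```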

Instead all I see is people _not_ exercising their freedom and initiative to support that best-thing-since-sliced-bread-apparently format, but whining that Chrome is oh-so-dominant and forces its choice of codecs upon everybody else.

Okay then...


We have been active on WASM implementations of JPEG XL, but it doesn't really work with progressive rendering, HDR canvas was still not supported, thread pools and SIMD had hiccups, etc. Browsers weren't and still aren't ready for high-quality codecs as modules. We are continually giving gentle guidance on these, but at heart our small team is an algorithm and data format research group, not a technology lobbyist organization — so we haven't yet been successful there.

In the current scenario, JPEG XL users are most likely to emerge outside of the web, in professional and prosumer photography, and then we will have — unnecessarily — two different format worlds: JPEG XL for photography processing, and a variety of web formats, each with their own problems.


I tried jxl.js; it was very finicky on iPad: out-of-memory errors [0] and blurry images [1]. In the end I switched to a proxy server that re-encoded JXL images into PNG.

[0]: https://github.com/niutech/jxl.js/issues/6

[1]: https://github.com/niutech/jxl.js/issues/7


Both issues seem to have known workarounds that could have been integrated to support JXL on iOS properly earlier than by waiting on Apple (who integrated JXL in Safari 17 apparently), so if anything that's a success story for "provide polyfills to support features without relying on the browser vendor."


The blur issue is an easy fix, yes, but the memory one doesn't help that much.


Or (re-add jxl) (without a flag).



