
Lossless WebP is very good indeed. The main problem is that it is not very future-proof, since it only supports 8 bits per channel. For SDR images that's fine, but for HDR this is a fundamental limitation, about as bad as GIF's limitation to 256 colors.



Lossless WebP is also stuck with a low dimension limit of 16383 pixels per axis.

It is a good format when you can use it, but JPEG XL almost always compresses better anyway, and has no such color space or dimension limits.


Neither this limit nor the 4 GB limit existed in my first version. They were artificially introduced to "match" the properties of lossy WebP. We could have done better there.


Were you involved in creating WebP? If so, that's super cool! Why would they want to match lossy WebP's properties, though? To make it more uniform? And do you know why lossy WebP had such a limitation in the first place? Thank you!


I designed the WebP lossless format, wrote the spec, and implemented the first encoder.

The constraint existed in lossy WebP to facilitate exact compatibility with the VP8 specification, in the hope that it would allow hardware decoding and encoding of WebP images using VP8 hardware.

Hardware encoding and decoding were never used, but the limitation stuck.

There was no serious plan for hardware lossless, but the constraint was copied to "reduce confusion".

I didn't and don't like it much, since as a result more PNG images cannot be represented as lossless WebP.


Wow, that really sucks. I appreciate the explanation as well as your frustration with it.

My desktop had a mixture of PNG and WebP files solely because of this limitation. I use the past tense because they've now all been converted to JPEG XL lossless.


Another area of incompatibility with PNG was 16-bit coding. My plan was to add it by simply sending two 8-bit images, where the second image (containing the 8 least significant bits) would be predicted to be the same as the first. That would not have been perfect, but it would have been 100x better than how PNG deals with it. WebP had another plan for layers and tiles that never materialized, and as a consequence WebP is stuck at 8 bits.
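
A minimal sketch of that scheme, as I read it (the plane split and the "predict the low plane from the high plane" idea come from the description above; the NumPy code and names are mine):

    import numpy as np

    # Split a 16-bit image into two 8-bit planes, predicting the low plane to
    # equal the high plane and storing only the residual. The round trip is
    # lossless; details are illustrative, not the actual proposal.

    def split_16bit(img16: np.ndarray):
        high = (img16 >> 8).astype(np.uint8)                 # 8 most significant bits
        low = (img16 & 0xFF).astype(np.uint8)                # 8 least significant bits
        residual = (low.astype(np.int16) - high.astype(np.int16)) % 256
        return high, residual.astype(np.uint8)               # two 8-bit images to code

    def join_16bit(high: np.ndarray, residual: np.ndarray) -> np.ndarray:
        low = (high.astype(np.int16) + residual.astype(np.int16)) % 256
        return (high.astype(np.uint16) << 8) | low.astype(np.uint16)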


Ah, I didn't know this, and I agree this is a fairly big issue, and increasingly so over time. I think smartphones in particular hastened the demand for HDR quite a bit; it was once a premium/enthusiast feature you had to explicitly buy into.


HDR is also important for medical imaging applications (which have been moving to web)


I haven't run across websites that serve up HDR images, and I am not sure I would notice the difference. WebP seems appropriately named and optimized for image delivery on the web.

Maybe you are thinking of high bit depth for archival use? I can see some use cases there where 8-bit is not sufficient, though personally I store high bit depth images in whatever raw format was produced by my camera (which is usually some variant of TIFF).


8-bit can have banding even without "HDR"; it's definitely not enough. 10-bit HDR video is becoming more common, and images will follow. Adoption is hampered by the fact that Windows has bad HDR support, but it all works plenty well on macOS and mobile platforms.


Unfortunately, HDR on Linux is pretty much completely absent. That said, Wayland slowly looks like it's getting there.



X11 supports 10-bit SDR… which Wayland doesn’t


It would be nice if software always produced non-tone-mapped HDR, and the windowing system or the monitor did the (local) tone mapping.


No, you don't want that. You want your windowing system & UI/graphics layers working in display native range (see, e.g., Apple's EDR). The consistency of appearance of the SDR range is too important to leave it up to unknown tone mapping.

Also it's too power expensive to have your display in HDR if you're only showing SDR UIs, which not only is the common reality but will continue to be so for the foreseeable future.


I think both approaches have advantages.

Requiring every game, photo viewing app, drawing program ... every application to decide how to do its HDR->SDR mapping seems like unnecessary complication due to poor abstractions.

The locality of local tone mapping (the ideal approach to HDR->SDR mapping) would expose the window boundaries. Two photos, or two halves of the same photo, shown in different windows (as opposed to the same window) would pick up an artificial discontinuity, because the correction fields would be artificially contained within each window instead of spanning the user's visual field as well as possible.

Every local tone mapping needs to make an assumption about the surrounding colors: whether the window is surrounded by black, gray, color, or bright light should influence how the tone mapping is done at the borders. This information is not available to an app: it can only be done at the windowing-system level or in the monitor.

The higher the quality of the HDR->SDR mapping in a system, the more opportunity there is to limit the maximum brightness, and thus also the opportunity for energy savings.
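
A minimal sketch of the locality point, using a generic Reinhard-style local operator (not any particular system's algorithm; the blur radius, the test image, and the use of SciPy are my assumptions):

    import numpy as np
    from scipy.ndimage import gaussian_filter   # assumes SciPy is available

    # Each output pixel depends on a blurred neighbourhood, so tone mapping a
    # cropped "window" on its own differs near the crop from tone mapping the
    # whole scene -- the artificial discontinuity described above.

    def local_tonemap(hdr, sigma=32.0):
        neighbourhood = gaussian_filter(hdr, sigma)    # surrounding brightness
        return hdr / (1.0 + neighbourhood)             # compress relative to surroundings

    hdr = np.random.default_rng(1).uniform(0.0, 8.0, (256, 256))
    full = local_tonemap(hdr)                          # whole scene at once
    windowed = local_tonemap(hdr[:, :128])             # left half as its own "window"

    print("max difference near the window edge:",
          np.abs(full[:, 120:128] - windowed[:, 120:128]).max())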


> Requiring every game [..] to decide how to do its HDR->SDR mapping seems like unnecessary complication due to poor abstractions.

Games already do this regardless and it's part of their post processing pipelines that also govern aspects of their environmental look.

What's missing is HDR->display mapping, which is why HDR games make you go through a clunky calibration process to attempt to reverse out what the display is going to do to the PQ signal, when what the game actually wants is what macOS/iOS and now Android just give them: the exact amount of HDR headroom, which the display doesn't manipulate.

As for your other examples, being able to target the display's native range doesn't mean you have to. Every decent OS has a compositor API that lets you offload this to the system.


Yes, if local tone mapping is done by the operating system (the windowing system or the monitor itself), then there is a chance that the borders of windows are handled appropriately within their spatial context.


X11 does but many X11 applications and window managers don't.


For things where you actually care about lossless you probably also don't care about HDR.

HDR is (or can be) good for video & photography, but it's absolutely ass for UI.

Besides, you can just throw a gainmap approach at it if you really care. It works great with JPEG, and gainmaps are being added to HEIF & AVIF as well; there's no reason JPEG XL couldn't get the same treatment. The lack of "true 10-bit" is significantly less impactful at that point.


Gainmaps don't solve 8-bit Mach banding. If anything you get more banding: two sources of banding, one from each of the two 8-bit fields being multiplied together.

Gainmaps "solve" the problem of computing a local tone mapping by declaring that it needs to be done at server side or at image creation time rather than at viewing time.

My prediction: Gainmaps are going to be too complex of a solution for us as a community and we are going to find something else that is easier. Perhaps we could end up standardizing a small set of local tone mapping algorithms applied at viewing time.


> Gainmaps "solve" the problem of computing a local tone mapping by declaring that it needs to be done at server side or at image creation time rather than at viewing time.

Which was already the case. A huge amount of phone camera quality comes from advanced processing, not sensor improvements. Trying to get that same level of investment in all downstream clients is both unrealistic and significantly harder. A big aspect of why Dolby Vision looks better is just that Dolby forces everyone to use their tone mapping, and client consistency is critical.

Gainmaps also avoid the proprietary metadata disaster that plagues HLG/PQ video content.

> If anything you get more banding: two bandings, one banding from each of the two 8-bit fields multiplied together.

The math works out such that you get the equivalent of something like 9 bits of depth, and you're also not wasting bits on colors and luminance ranges you aren't using, like you are with BT.2020 HLG or PQ.


I haven't tried it out, but I don't see where the 9 bits come from. I feel it gives about 7.5 bits.

Mixing two independent quantization sources will lead to more error.

Some decoding systems, such as traditional JPEG, do not specify results exactly, so bit-perfect quantization-aware compensation is not going to be possible.


The context of this thread is lossless webp, there aren't any compression artifacts to deal with.

Go try the math out: the theoretical maximum is higher than 9 bits, and an 8.5-9 bit equivalent is very achievable in practice. With two lossless 8-bit sources this is absolutely adequate for current displays, especially since, again, you're not wasting bits on things like PQ's absurd range.
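
A rough numerical sketch of that math, under assumptions not stated in the thread (a gamma-2.2-coded 8-bit base, a full-resolution gainmap storing a log2 gain quantized to 8 bits over a 2-stop range, no compression loss). It estimates a theoretical bound only, since in practice the base must stay a sensible SDR rendition and the gainmap is usually subsampled:

    import numpy as np

    # How finely can base*gain tile a mid-tone luminance window, compared
    # with the 8-bit base alone? Coarser worst-case steps mean more banding.
    GAMMA = 2.2
    GAIN_STOPS = 2.0                                     # assumed gainmap range

    base = (np.arange(256) / 255.0) ** GAMMA             # linear base values
    gain = 2.0 ** (np.arange(256) / 255.0 * GAIN_STOPS)  # linear gain values
    combined = np.unique(np.outer(base, gain).ravel())   # all representable products

    def worst_relative_step(values, lo, hi):
        v = values[(values >= lo) & (values <= hi)]
        return np.max(np.diff(v) / v[:-1])               # coarsest step in the window

    step_base = worst_relative_step(base, 0.2, 0.4)
    step_combined = worst_relative_step(combined, 0.2, 0.4)
    print("extra effective bits over the 8-bit base in this window:",
          round(float(np.log2(step_base / step_combined)), 2))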

Will this be added to webp? Probably not, it seems like a dead format regardless. But 8 bit isn't the death knell for HDR support as is otherwise believed.


This depends on the quality you are aiming for.

You just cannot reach the best quality with 8 bits, not in SDR, not in HDR, not with gainmap HDR. Sometimes you don't care for a particular use case, and then 8 bits becomes acceptable. Many use cases remain where degradation by a compression system is unacceptable or creates too many complications.


> but it's absolutely ass for UI.

uhh, no it isn't?

And gainmaps suck, take lots of space, don't reduce banding. Even SDR needs 10-bit in a lot of situations to not have banding.


> > but it's absolutely ass for UI.

> uhh, no it isn't?

Find me a single example of a UI in HDR for the UI components, not for photos/videos.

> Even SDR needs 10-bit in a lot of situations to not have banding.

You've probably been looking at 10-bit HDR content on an 8-bit display panel anyway (true 10-bit displays don't exist in mobile yet, for example). 8 bits works fine with a bit of dithering.


> 8 bits works fine with a bit of dithering

Yes, but the place where you want dithering is in the display, not in the image. Dithering doesn't work with compression, because it's exactly the type of high-frequency detail that compression removes. It's much better to have a 10-bit image, which keeps the information low-frequency (and lets the compression do a better job, since a 10-bit image is naturally more continuous than an 8-bit one, with less rounding error in the pixels), and let the display do the dithering at the end.
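
A toy sketch of that split of responsibilities (my own illustration, not from the thread): quantize a smooth ramp to a low bit depth with and without display-side dithering. Hard quantization produces flat bands; dither trades them for noise whose local average still tracks the ramp.

    import numpy as np

    rng = np.random.default_rng(0)
    ramp = np.linspace(0.0, 1.0, 4096)                  # smooth gradient
    levels = 63                                         # pretend the panel is 6-bit

    hard = np.round(ramp * levels) / levels             # plain quantization -> flat bands
    dithered = np.floor(ramp * levels + rng.random(ramp.size))
    dithered = np.clip(dithered, 0, levels) / levels    # randomized rounding -> noise, no bands

    # Local averages over 64-sample windows: dithered output keeps tracking the
    # ramp, while hard-quantized output steps in visible bands.
    local = lambda x: x.reshape(-1, 64).mean(axis=1)
    print("max local-average error, hard:    ", np.abs(local(hard) - local(ramp)).max())
    print("max local-average error, dithered:", np.abs(local(dithered) - local(ramp)).max())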


Gainmaps only take up a lot of space if implemented poorly by the software creating the gainmap. On Pixel phones they take up 2% of the image size, by being stored at quarter resolution and scaled back up during decoding, which works fine on all gainmap-supported devices. They're more prone to banding, but it's definitely not a problem with all images.
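
A minimal sketch of that decode step (generic, not Pixel's actual pipeline; the 4x-per-axis factor, the log2 gain encoding, and the nearest-neighbour upscale are assumptions for illustration):

    import numpy as np

    def apply_gainmap(sdr_linear: np.ndarray, small_log2_gain: np.ndarray) -> np.ndarray:
        # Upscale the low-resolution gain map back to image size (real decoders
        # typically interpolate more smoothly than nearest-neighbour).
        log2_gain = small_log2_gain.repeat(4, axis=0).repeat(4, axis=1)
        # Apply the gain multiplicatively in linear light to reconstruct HDR.
        return sdr_linear * (2.0 ** log2_gain)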


I guess I should say, take up lots of space unless you're OK with lower quality. In any case, gainmaps are entirely unrelated to the 8 bit vs 10 bit question. The more range you have (gamut or brightness) the worse 8 bit is, regardless of whether you're using a gainmap or not. And you can use gainmaps with 10 bit images.



