I'm not an expert, but I've dealt with my share of this working in film. Color problems in image processing are everywhere. Pre-multiplication and alpha channels are another huge one. A lot of professionals don't even understand it and will brute-force compensate for problems instead of understanding the underlying cause. Here's a posting from this week [1] from professionals who care about getting this stuff right, trying to convert a TIFF to JPEG without surprise color shifts.
A summary is that you need to do almost all image modifications in a linear color space. So you first need to back out any color tweaks made (for aesthetic or technical reasons), modify the image, then re-apply any tweaks.
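To make that concrete, here's a minimal sketch of the decode-modify-re-encode round trip for a single blend, assuming the standard sRGB transfer curves (the function names are mine, not from any particular library):

    #include <math.h>

    /* The standard sRGB transfer curves (piecewise linear/power). */
    static double srgb_to_linear(double v) {
        return (v <= 0.04045) ? v / 12.92 : pow((v + 0.055) / 1.055, 2.4);
    }

    static double linear_to_srgb(double v) {
        return (v <= 0.0031308) ? v * 12.92 : 1.055 * pow(v, 1.0 / 2.4) - 0.055;
    }

    /* Blend two sRGB-encoded values (normalized to [0,1]) the right
       way: back out the encoding, mix in linear light, re-encode. */
    double blend_srgb(double a, double b) {
        double mixed = 0.5 * (srgb_to_linear(a) + srgb_to_linear(b));
        return linear_to_srgb(mixed);
    }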
Unfortunately, image files often aren't tagged with their color space and might use a specific color space only by convention--but they could use something else. Also, because it's much harder to do the right thing, a lot of software just does the modifications ignoring color space. Just think about how much slower a web page full of thumbnails would be to draw, and how much more memory it would take, if you added in color space transformations. For a while, a lot of browsers added support but left it off by default (imagine if you designed your web page to look right in a broken color space and it was suddenly much slower and now looked wrong). But a lot of software is slowly catching up. Unfortunately, legacy software (like Photoshop) and legacy file formats might never change.
FYI: you can get gamma-correct resizing (and other operations) in recent versions of Photoshop by first switching the image to 32-bit colour depth mode.
I had a really long reply typed up with some specific workflow problems. But you can see people complain elsewhere in this thread about how Adobe handles stuff like this.
The basic problem is that the color space for blending is an application setting (can't be specified in the file) and higher bit depths will double or triple your file size.
You could do a rough approximation of gamma correction with squares and square roots, which wouldn't add to the processing time like a full sRGB conversion would.
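A sketch of that shortcut, pretending sRGB were a plain gamma-2.0 encoding (it isn't, so this is only an approximation):

    #include <math.h>

    /* Approximate gamma-correct blend of two encoded values in [0,1]:
       square to "linearize", mix, square-root to "re-encode". Cheaper
       than the true sRGB curve, close enough for many uses. */
    double blend_approx(double a, double b) {
        double mixed = 0.5 * (a * a + b * b);
        return sqrt(mixed);
    }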
Going from 8-bit sRGB channels to 32-bit IEEE float channels is a simple lookup in a table with 256 values.
Going the other way is only slightly harder. Scale to the range 0 ... 4095, properly round the float to an integer, verify the range or mask with 0xfff to protect against NaN and Inf, and then do a lookup in a table with 4096 values.
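A sketch of both directions with those table sizes (clamping here stands in for the range check; masking with 0xfff is the branch-free alternative):

    #include <stdint.h>
    #include <math.h>

    static float   decode_lut[256];   /* 8-bit sRGB    -> linear float */
    static uint8_t encode_lut[4096];  /* 12-bit linear -> 8-bit sRGB   */

    void init_luts(void) {
        for (int i = 0; i < 256; i++) {
            double s = i / 255.0;
            decode_lut[i] = (float)((s <= 0.04045)
                ? s / 12.92 : pow((s + 0.055) / 1.055, 2.4));
        }
        for (int i = 0; i < 4096; i++) {
            double v = i / 4095.0;
            double s = (v <= 0.0031308)
                ? v * 12.92 : 1.055 * pow(v, 1.0 / 2.4) - 0.055;
            encode_lut[i] = (uint8_t)(s * 255.0 + 0.5);
        }
    }

    /* sRGB byte -> linear: one lookup. */
    float srgb_decode(uint8_t s) { return decode_lut[s]; }

    /* linear -> sRGB byte: scale, round, guard the range, look up.
       Clamping also catches NaN/Inf on most targets. */
    uint8_t srgb_encode(float lin) {
        int32_t i = (int32_t)(lin * 4095.0f + 0.5f);
        if (i < 0) i = 0;
        if (i > 4095) i = 4095;
        return encode_lut[i];
    }

The two tables together are the 5120 bytes mentioned below.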
Tux Paint contains GPL code for it. It's been there for more than a decade. All you GIMP and Photoshop users having trouble with this should have been using Tux Paint. :-)
It's 5120 bytes for both of them, the access is not at all random (due to color popularity), and the real cache killer is the image itself. You're dealing with a few megabytes for the image.
You're not running the LUT over a multi-MB image but over a small block thereof. Not that a 5-6 KiB LUT matters, but the extra conditional multiply ops or power/logarithm function evaluations do.
The latter need a few MACs for a good approximation (at least 12-bit precision, preferably 16-bit -- e.g. a 5th-order polynomial, as sketched below); the former need masks, which are worse for SIMD.
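For reference, the MAC pattern being described is plain Horner evaluation; a sketch (the coefficients would come from a minimax fit of the transfer curve over [0,1] and are deliberately left unspecified here):

    /* Horner evaluation of a 5th-order polynomial: five multiply-adds,
       which map directly onto FMA/SIMD lanes with no masking. */
    static inline float poly5(float x, const float c[6]) {
        float r = c[5];
        r = r * x + c[4];
        r = r * x + c[3];
        r = r * x + c[2];
        r = r * x + c[1];
        return r * x + c[0];
    }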
I had this idea a long time ago but never actually tested it. Now I have: http://marksblog.com/gamma-resize/. The square/sqrt method gets at least 80% of the quality but should be quick to compute on a modern processor.
Those LUT or power or logarithm functions are not free... linear light scaling is about 2x slower in practice. Not that it matters until you get to 8K.
I first saw this post in 2009 or so, and it was eye-opening. I found it while googling for why colours shifted when I resized images, as back then there wasn't much awareness of this issue. But it's much less prevalent now. Related -- scaling algorithms: https://en.wikipedia.org/wiki/Image_scaling#Algorithms
If this isn't esoteric enough: I know of at least one major broadcast media company (Bell's crave.ca) that doesn't properly flag their HDTV 720p MP4 videos as Rec 709 colour space to the browser, so the compression artifacts, designed to be invisible in Rec 709, are extremely visible on a wide-gamut P3 display.
But colour space is increasingly something you have to pay attention to, as displays improve beyond sRGB -- yet ironically, it's also getting easier to ignore as the latest stepping stone towards Rec 2020, the DCI-P3 colour space, is shared between PC and movie industries. https://en.wikipedia.org/wiki/DCI-P3#2015-2016 (Personally I refuse to buy any display or device that plays back video if it doesn't have a P3 display, ideally OLED)
> back then [in 2009] there wasn't much awareness of this issue
I haven’t noticed too much change in the general awareness of this problem between the 1990s and today.
Experts have always known the right thing to do, and people have been talking about these problems for as long as there has been image processing (e.g. Poynton's Gamma FAQ dates from sometime before 1998: http://poynton.ca/notes/colour_and_gamma/GammaFAQ.html), but it's new to everyone at some point, and newcomers perennially screw up.
The annoying thing is that there are folks who continue to screw up (like Adobe Photoshop) who should really know better. Backwards compatibility shouldn’t be an excuse to continue with broken implementations for decades.
Oh, the broken implementations don't stop at backwards compatibility -- Lightroom CC, their new Cloud product, last I checked, prefers to send sRGB to Photoshop (!) unlike the traditional Lightroom which fully supports preserving embedded colour profiles when exporting. It's been a couple weeks since I looked into this but the instructions in this post https://lightroomkillertips.com/keeping-your-color-consisten... apply to Classic Lightroom if I recall correctly, and the modern "cloud" Lightroom CC only exports wide gamut correctly when set to export as uncompressed TIFF 16-bit. Everything else tries to export as sRGB automatically, which sucks for print or DCI-P3 workflows.
Also, I'm trying to find details on how Apple's P3 profiles differ from the standard projection profiles for the movie industry in order to maintain compatibility with sRGB gamma, but there's not as much online as I expected about it.
The same phenomenon also applies to color blending. In a modern context this phenomenon is especially visible in UI elements that blur the background: Up until recently many UIs did not get the blurring right, resulting in greyish dark spots between different colors. I think CSS blurring in most browsers still gets it wrong.
Well, color blending is a whole other can of worms; there are no easy answers there. Mixing RGB values even in linear space does not always yield "correct" results. More problematically, I'm not sure there is even a well-defined correct result for generalized color mixing/blending.
Even something as simple as a proper gradient between two colors is surprisingly difficult. I think I finally cracked that one though: https://stackoverflow.com/a/49321304/5987
Color mixing is easy if you're mixing light: just add together the linear intensities. Color mixing of pigments is a different order of complexity, because pigments are both transmissive and reflective, and there are other effects like dot gain and spectral lines to consider.
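For the light-mixing case, a gradient is then just a lerp in linear space. A sketch per channel, reusing the hypothetical transfer functions from the snippet further up the thread:

    #include <math.h>

    /* Transfer curves as in the earlier sketch. */
    double srgb_to_linear(double v);
    double linear_to_srgb(double v);

    /* Per-channel gradient between two sRGB values a, b in [0,1]:
       decode, lerp in linear light at position t in [0,1], re-encode. */
    double gradient_srgb(double a, double b, double t) {
        double la = srgb_to_linear(a);
        double lb = srgb_to_linear(b);
        return linear_to_srgb(la + t * (lb - la));
    }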
EDIT: If everything is done correctly, the text in the upper div will remain unreadable.
If browsers were doing gamma-correct scaling, the perceived brightness of the checker pattern would not change.
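The arithmetic behind that claim, as a tiny self-contained check: a 50/50 mix of black and white is linear 0.5, which re-encodes to roughly sRGB 188, while naively averaging the encoded bytes gives 128:

    #include <stdio.h>
    #include <math.h>

    int main(void) {
        double naive   = 0.5;  /* averaging the encoded bytes */
        double correct = 1.055 * pow(0.5, 1.0 / 2.4) - 0.055; /* encode 0.5 */
        printf("naive:   %.0f\n", naive   * 255);  /* 128  */
        printf("correct: %.0f\n", correct * 255);  /* ~188 */
        return 0;
    }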
Even more problematic, when zooming, the interpolation and blending also aren't gamma correct. If mere nearest-neighbor interpolation were applied, this would remain hidden.
This becomes really annoying on HiDPI screens. For some reason the "px" CSS unit has been "redefined" as an "average-ish" angular distance when viewed on a 96DPI screen at an arm's length or so. And for higher resolutions px is scaled accordingly. Whoever came up with that nonsense created a lot of issues down the road. sigh
EDIT: Also Opera (before it switched to WebKit) did some really funky business at the edges when upscaling images. Interpolation would always clamp-to-edge instead of respecting the tiling mode.
Regardless of whether they had included the device-relative px in the spec, CSS doesn't define rendering; it defines layout. I.e., it's intentional that CSS doesn't let you define things in terms of physical device pixels. Technically you could hack it for real-world devices with a ton of media queries, but that's not their intended usage (hence needing a ton).
If you're intending to render something in the browser (calculating DOM elements, canvas, or anything else), they expect you to handle it in your rendering code (JS/WASM).
The whole intention of my little test hack in the first place was to show that, without being able to define layout at device-native-unit precision, the end result becomes unpredictable.
The idea was to apply a checkerboard background image to the body and to a div, but translate the checkerboard by 1px in the div. By their very nature, pixel-based images are defined in pixels. And the most naive way to display images is a 1:1 mapping of image pixels to CSS px units. This is how things started out; only later did features like page zooming, responsive scaling, HiDPI and so on come to be. Of course that means images must be scaled. But this scaling must be well defined, so that the output differs only in resolution, not in layout or visual outcome after scaling.
And right now browsers fail to do this. Two years ago, when I came up with that test, Opera and Android WebView even failed to properly translate the position of the div background; it looked as if somewhere in the scaling the translation was coerced onto the device-native pixel grid.
A checkerboard pattern can be understood as a Haar wavelet, or in spatial-frequency terms as Σₖ Σₗ sin(x-k)/(x-k) · sin(y-l+π)/(y-l+π).
When sampled on the pixel grid, a 2×2 checkerboard pattern sits right at the Nyquist limit. Upscaling it with an ideal filtering kernel (sin(x)/x = sinc(x), which the Lanczos kernel approximates) yields a single frequency (a sinusoid).
Adding two such signals at exactly π phase difference gives perfect destructive interference; move the phase difference away from π and it becomes (partially) constructive. This little detour into signal theory should make it clear that you have to take great care when scaling and positioning stuff in a layout. Scale corresponds to frequency, position corresponds to phase. And it should be noted that in a visual signal, phase carries the bulk of the information, hence image transformations should preserve phase where possible.
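A one-dimensional sanity check of the interference claim (nothing assumed beyond two equal sinusoids and a variable phase offset):

    #include <stdio.h>
    #include <math.h>

    int main(void) {
        const double PI = 3.14159265358979323846;
        double offsets[] = { 0.0, PI / 2, PI };
        for (int p = 0; p < 3; p++) {
            double peak = 0.0;
            for (int i = 0; i < 1000; i++) {
                double x = i * 0.01;
                double s = sin(x) + sin(x + offsets[p]);
                if (fabs(s) > peak) peak = fabs(s);
            }
            /* offset 0 -> ~2.0 (constructive), pi -> 0.0 (destructive) */
            printf("offset %.2f: peak %.2f\n", offsets[p], peak);
        }
        return 0;
    }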
That it also shows that interpolation and blending are broken, too (i.e. don't respect gamma) is a secondary outcome.
If by "only later" you mean the original CSS1 spec in '96 then yeah. The web has been relative long enough that anyone who has ever hit "print" didn't have to worry about images being 1/6th the size of the rest of the page.
I don't see how broken interpolation/blending is a secondary outcome. That it works only at "native" resolution is a result of interpolation being broken, not the other way around. If it were fixed, you wouldn't be looking to use device pixels (again, except for something like canvas rendering in a web image editor).
Sorry, I don't know of one. You could start with the Wikipedia description at https://en.wikipedia.org/wiki/Lanczos_resampling but that is nowhere near enough detail to do an actual implementation. You might try searching for a Lanczos-2 or Lanczos-3 implementation.
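If it helps, the kernel itself is only a few lines. A sketch of the Lanczos-a kernel (sinc windowed by a wider sinc); resampling then means weighting the input samples around each output position with it and normalizing:

    #include <math.h>

    /* Lanczos-a kernel: sinc(x) * sinc(x/a) for |x| < a, else 0,
       with sinc(x) = sin(pi*x)/(pi*x). a = 2 or 3 as mentioned above. */
    double lanczos(double x, int a) {
        const double PI = 3.14159265358979323846;
        if (x == 0.0) return 1.0;
        if (fabs(x) >= (double)a) return 0.0;
        double px = PI * x;
        return a * sin(px) * sin(px / a) / (px * px);
    }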
I wrote my first image resizer in 1986, so I've been immersed in this for a long time. I'm considering writing a blog post to go into the finer points, but my blog is kind of empty right now and has been for a long time - don't get your hopes up.
[1] http://lists.openimageio.org/pipermail/oiio-dev-openimageio....