New study overturns 100-year-old understanding of color perception (lanl.gov)
226 points by beefman on Aug 10, 2022 | 76 comments



Another day, another hyperbolic title. 100 years ago, knowledge of perception was in its infancy. We still don't know much, but we do know that all attempts at a simple, mathematical description of perceptual processes {*} have failed. They're interesting, can serve as an introduction to a problem, may help you to build a better web app, etc., but come nowhere near the complexity of an actual brain. It takes a pretty large neural net to decipher phonemes even in clear speech. Viewing color perception as an independent function mapping one space onto another is already flawed, as the checker shadow illusion shows (https://en.wikipedia.org/wiki/Checker_shadow_illusion).

{*} This is not limited to perception. Attempts to describe even small parts of reasoning, language and memory with simple models have failed too, but that seems less surprising, since perception is closer to physical reality.


While the title does sound a bit sensationalist, the actual paper refers to a standard of the International Commission for Weights and Measures which is indeed (according to the paper) based on a 100-year-old paradigm introduced by Riemann and furthered by Helmholtz and Schrödinger.

From the abstract:

The scientific community generally agrees on the theory, introduced by Riemann and furthered by Helmholtz and Schrödinger, that perceived color space is not Euclidean but rather, a three-dimensional Riemannian space. We show that the principle of diminishing returns applies to human color perception. This means that large color differences cannot be derived by adding a series of small steps, and therefore, perceptual color space cannot be described by a Riemannian geometry.

https://www.pnas.org/doi/10.1073/pnas.2119753119
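Spelled out as an inequality (my paraphrase of the abstract, not a formula taken from the paper): even when B lies perceptually between A and C, the directly judged large difference falls short of the sum of the two small steps, whereas a Riemannian (path-length) metric would make the steps add up exactly along a geodesic.

```latex
\[
  \Delta(A, C) \;<\; \Delta(A, B) + \Delta(B, C)
  \qquad \text{(diminishing returns, even for } B \text{ between } A \text{ and } C\text{)}
\]
```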


This makes sense to me. As I've said elsewhere below, the eye adapts dynamically to each situation. One's perception, including the dynamic range of both color and luminance, is situational and adaptive.

I know from personal experience, having been fooled on hundreds of occasions when trying to judge colors, color temperature, etc., despite my experience and the fact that my color vision is normal (I've never failed any Ishihara or related tests).

The law of diminishing returns is essentially what I experience as an observer: the finer I grind my observation, the more fooled and less objective I become. I now only believe my instruments.

Thus it makes sense that large color differences cannot be derived by adding a series of small steps, simply because the small steps are in effect a form of noise in relation to the large ones.

"therefore, perceptual color space cannot be described by a Riemannian geometry."

Perhaps Riemannian geometry should become just a starting point if for no other reason than it serves to illustrate the complexity of human color perception and the difficulty it poses in trying to analyze it.


>Another day, another hyperbolical title. 100 years ago, knowledge of perception was in its infancy.

That may well be, but the title doesn't point to the overturning of some infantile color theory people believed 100 years ago.

It points to the overturning of a still dominant theory, that was first developed 100 years ago, but continues to be considered as the standard.

The article might be wrong that that's the case, but that's how the title reads.


Certain models (e.g. iCAM) do try to account for spatial localization and relativity too, although they are also simple, as those effects remain largely unexplored. Honestly, ML seems better suited to this task than complex analytical solutions, as long as you have the data to train on.


I thought it was already known that perceived large differences in colors were not geometrically consistent with small differences. For example CIECAM02-UCS is based on a compromise between fitting the small-difference data and large-difference data.


Yes, from the abstract of the article it is impossible to understand what is new in this study.

The "non-Riemannian nature of perceptual color space" has been known for decades.

For example, the CIE color difference formulas from 1994 and from 2000 (CIEDE2000) are older attempts to model the color space that take this into account, i.e. that the color space is not even a metric space (the triangle inequality does not hold).

There are various newer attempts to better model the color space. Without being able to read the paper, I assume that it might refer to one such better model, but as I have said, neither the title nor the abstract give any clue about what is novel in it.

They say "Rethinking them outside of a Riemannian setting could provide a path to extending them to large differences". From the "could provide a path" I understand that they have not found yet any formula valid for large color differences, which would have been something novel.


Regarding what I have said about the "non-Riemannian nature of perceptual color space" being known for decades, I have found an earlier paper by the same authors as the paper discussed here:

https://datascience.dsscale.org/wp-content/uploads/2019/01/C...

There, the authors themselves have written:

"Furthermore, human color perception is also non-Riemannian, due to the principle called diminishing returns [9]"

where the paper referenced as [9] is from 1968:

"D. B. Judd. Ideal color space: Curvature of color space and its implications for industrial color tolerances. Palette, 29(21-28):4–25, 1968."

Therefore the "non-Riemannian nature of perceptual color space" has been known at least from 1968, so whatever novelty is in the paper discussed here, this is not it, and the popular reporting about the paper is misleading.


Can you give an example of three colours where the triangle inequality is false? Would be interesting to look at.


One paper where some such experiments are discussed and which includes some color diagrams is:

https://psyarxiv.com/vtzrq/download?format=pdf

The violations of the triangle inequality are caused, as others have also mentioned, both by the nonlinear dependence of the perceived intensity of a color on its corresponding radiant flux and by the interactions between the 3 color channels.

Therefore, while the 3 RGB or XYZ values are enough to recognize whether any pair of colors is the same or different, to recognize whether, in a set of 3 colors, the 3rd is closer to the 1st or to the 2nd, one needs to apply some non-linear function of 3 arguments, providing 3 output numbers, in order to map the RGB/XYZ space to a space where a simple distance formula, e.g. the Euclidean distance or the Manhattan distance, can be used.

Alternatively, the non-linear 3-to-3 transform can be combined with the distance formula in the transformed space into a complicated non-linear distance formula that can be used directly, in some applications, on RGB/XYZ colors. Keeping the space-mapping transform separate can nevertheless be useful in other applications, e.g. in color interpolation (though color interpolation can also be done based on only a distance formula, by solving an equation for each intermediate point, which might be no slower than mapping the extreme points, interpolating in the uniform space and then reverse-mapping the interpolated points).
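As a concrete sketch of the decomposition described above (illustration only, using the standard XYZ → CIELAB transform as the non-linear 3-to-3 map f and the plain Euclidean distance in Lab, i.e. the old 1976 ΔE*ab, as d; a D65 white point is assumed):

```python
import math

# D65 reference white (2-degree observer), assumed for this sketch
XN, YN, ZN = 95.047, 100.0, 108.883

def _f(t):
    # the piecewise cube-root non-linearity used by CIELAB
    d = 6 / 29
    return t ** (1 / 3) if t > d ** 3 else t / (3 * d ** 2) + 4 / 29

def xyz_to_lab(xyz):
    # the non-linear 3-to-3 mapping: XYZ -> (L*, a*, b*)
    x, y, z = xyz
    fx, fy, fz = _f(x / XN), _f(y / YN), _f(z / ZN)
    return 116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)

def delta_e_1976(xyz1, xyz2):
    # simple Euclidean distance d in the mapped space
    return math.dist(xyz_to_lab(xyz1), xyz_to_lab(xyz2))

print(delta_e_1976((41.24, 21.26, 1.93), (35.76, 71.52, 11.92)))  # sRGB red vs. green
```

This is the classic approach; the CIEDE2000-style formulas mentioned elsewhere in the thread instead complicate d itself, which is one reason they can violate the triangle inequality.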


> one needs to apply some non-linear function of 3 arguments, providing 3 output numbers, in order to map the RGB/XYZ space to a space where a simple distance formula, e.g. the Euclidean distance or the Manhattan distance, can be used.

If there were such a function then the triangle inequality would hold on the original space.


When any of the 3 output variables depends non-linearly on all 3 input variables, that is not true.

Even on something as simple as a sphere, you can have a spherical triangle where one side is longer than the sum of the 2 other sides (e.g. when one side is on the equator and the 2 other sides go to one pole and the arc on the equator is greater than a half circle).

And you can map a (partial) sphere to a (partial) plane (mapping e.g. an equator segment and some meridian segments to straight lines, in order to map some plane triangles to some spherical triangles) and then use the distance in the plane as a difference function for pairs of points on the sphere.


Suppose we have a 'difference function' D on the space of colours, a simple metric d on the target space, and some nonlinear function f from the space of colours to the target space such that D(x,y) = d(f(x),f(y)) for all x and y.

Then the triangle inequality for D would be D(x,y) + D(y,z) ≥ D(x,z). By the equation above this is equivalent to d(f(x),f(y)) + d(f(y),f(z)) ≥ d(f(x),f(z)), which is true by the triangle inequality in the target space at the points f(x), f(y) and f(z). If the target space uses a simple metric like the Euclidean or Manhattan distance then the triangle inequality is always going to hold in the target space and hence in colour space.

Note that the above argument does not need linearity of f.
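A quick numeric sanity check of this, with an arbitrary made-up non-linear f (a sketch, not any actual color model): the composed difference function never violates the triangle inequality.

```python
import math, random

def f(c):
    # an arbitrary, strongly non-linear 3 -> 3 map, purely for illustration
    r, g, b = c
    return (math.sin(3 * r) + g * b, math.exp(g) - r ** 2, (r + g + b) ** 3)

def D(x, y):
    # difference function of the form D(x, y) = d(f(x), f(y)), with d = Euclidean
    return math.dist(f(x), f(y))

random.seed(0)
for _ in range(100_000):
    x, y, z = [tuple(random.random() for _ in range(3)) for _ in range(3)]
    assert D(x, y) + D(y, z) >= D(x, z) - 1e-12  # never fires
print("no violations found")
```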

The sphere example doesn't hold up because if two points are further apart than half way around a great circle then the distance between them is actually the length of the path going the other way.


I think that you are right, and my answer had been too hasty.

What can be obtained experimentally is only the difference function D(x,y).

If it cannot be decomposed into a distance function and a space-transformation function, i.e. as D(x,y) = d(f(x),f(y)), then the quest for a uniform color space, which has seen a very large number of attempts over a century (the quoted Schrödinger paper is from 1920), will never produce a completely satisfactory result, and the best results for color interpolation or for color matching within tolerances can be obtained only by using a linear RGB or XYZ space together with a complicated non-linear color difference function.


> ...could yield more vibrant computer displays, TVs, printed materials, textiles and more

I'm sorry, how exactly? From the paper's abstract:

> Consequences of this apply to color metrics that are currently used in image and video processing, color mapping, and the paint and textile industries.

At best, I could imagine how computer-selected colors might be chosen differently, i.e. adding to the many ways in which gradients between two colors are computed.

But I can't for the life of me imagine how computer displays might become more vibrant -- this isn't going to affect the spectrum of backlights or LCD color filters or OLED phosphors or anything as far as I can tell? E.g. in the way that Macs shifted from sRGB screens to P3 screens for their color gamuts?

Similarly with paints and textiles -- this isn't going to lead to new paints or dyes.

At the end of the day, colors -- whether in a movie scene or a paint chip -- are ultimately chosen by eye, not by math. So seems like the headline is making claims that can't be substantiated?

Would love to know if I'm missing something here.


>But I can't for the life of me imagine how computer displays might become more vibrant

The perceived vibrancy of a display is not only dependent on the spectrum of light emitted but also on the way the colors of an image are mapped to the colors of the display. Better understanding of color would help with the latter.


Back in the early 90s I worked for a Mac company that made graphics cards for the publishing industry. Color correction was a big deal (print magazines have/had 'white rooms': rooms with known color temperature where they looked at photos).

Marketing really wanted a game-winning solution. One thing I learned: no two sets of eyes are the same. At an extreme, 10% of the male population is red-green color blind, but in reality we're just not all the same; there is just no perfect color matching solution.


No two sets of eyes (which includes retinas plus all the processing that converts the input into internal scene representation) are the same.

This is especially frustrating with photography. No two people see a single scene the same way; the best you can do is take the raw data and interpret it into the tiny, tiny display space in the way you think best describes the scene. Yet many think that the blandest possible JPEG render is the reference truth of what reality is actually like.


I wonder how important, relatively speaking, the individual differences (outside the extremes of color-blindness) are, compared to the fact that the customers are simply not sitting in such 'white rooms'.


Well, the whole point of trying to produce color-corrected monitors was to avoid the whole process of printing things out and looking at them in white rooms as desktop publishing started to become a thing. High-end publishers were (are?) very sensitive to their color fidelity; compelling color is very much part of what draws in their readers.


"But I can't for the life of me imagine how computer displays might become more vibrant..."

You're most likely correct, at least for current display technologies. For years I've hypothesized that we cannot solve the problem with tricolor displays, and I've postulated that we need a 7-color system to do proper justice to our eyes/perception, and even then it'd still be an approximation, albeit a pretty good one.

Obviously, I can't argue this out here or we'd be at it for weeks. Suffice to say I'm far from being alone in thinking this way.

Keep your ears to the ground and no doubt you'll soon hear from better minds than mine on the subject.


Why 7? Our eyes have 4 different cell types, so any system with more than 4 colors will just give us redundant colors that the eye can't tell apart.


The possible colours form a sort of semi-circle known as a chromaticity diagram (https://en.wikipedia.org/wiki/Chromaticity). If you pick 3 points in the diagram then the triangle they form won't cover the entire shape. In fact, since it has curved sides, no finite number of colours will allow you to fully cover the perimeter, but having more colours lets you approach the perimeter more completely.
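A rough sketch of that point: the sRGB primaries below are the standard xy chromaticities; the "spectral cyan" test point is an approximate value used purely for illustration.

```python
def sign(p, a, b):
    # 2D cross product; tells which side of segment a-b the point p lies on
    return (p[0] - b[0]) * (a[1] - b[1]) - (a[0] - b[0]) * (p[1] - b[1])

def inside_triangle(p, a, b, c):
    s1, s2, s3 = sign(p, a, b), sign(p, b, c), sign(p, c, a)
    return not (min(s1, s2, s3) < 0 and max(s1, s2, s3) > 0)

# sRGB primaries in CIE xy chromaticity coordinates
R, G, B = (0.64, 0.33), (0.30, 0.60), (0.15, 0.06)

cyan_490nm = (0.05, 0.30)  # approximate point on the spectral locus (assumed)
print(inside_triangle(cyan_490nm, R, G, B))  # False: outside the sRGB triangle
```

Adding more primaries turns the triangle into a polygon with more vertices, which can hug the curved boundary more closely but never reach it exactly.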


That's one solution. Seven wide points on the CIE would cover it pretty well. Another is to have fewer points and dedicate extra channels to specular reflections, extra-spectral colors: https://en.m.wikipedia.org/wiki/Spectral_color#Extra-spectra... and imaginary, impossible and chimerical colors: en.m.wikipedia.org/wiki/Impossible_color and 'matrix' them with the color channels.

The challenges, however, for any new color system are formidable, and it would require new developments in technology before becoming practical.

The human eye is quite remarkable, it adapts dynamically to the situation in both its luminance and chrominance channels and has an effective dynamic luminance range of around 10^6 : 1.

To give you a sense of how truly adaptable the eye is, here's an example: the now-defunct Kodachrome, which was once the gold standard in film emulsions and in some limited ways still is, only had a luminance dynamic range of between about 250 and 500 : 1 (depending on how you measure it, 250 being more the norm), and its chromaticity was pretty limited, especially in reproducing violet (violet being the least objectionable tradeoff). Yet it was capable of producing some quite wonderful photographs (and I've still many thousands of slides to prove the fact).

Now if you compare Kodachrome with what the eye is actually capable of, there's essentially no comparison; Kodachrome falls so far short you'd wonder why we'd bother using it at all. The fact is the eye adapts to that limited situation and does a remarkable job of fooling you into believing that the Kodachrome image is the 'true and real thing'.

This example also illustrates the very real difficulties in getting to the bottom of color theory. Keep in mind the issues here: real light consists of an almost continuous electromagnetic spectrum, the eye resolves this by matrixing RGB primaries to produce an 'imaginary' spectrum and it also has to deal with extra-spectral colors such as magenta, and there are other issues as well.

As I hinted I'm reticent about entering a heavy debate about this subject here as I can only do so superficially - just one word out of place and it'll be mistaken as wrong and it'd only derail the debate.

Look, it's a huge subject: (a) our existing color technologies - video/television, printing and color film - are already very sophisticated and are full engineering professions in their own right, and, as we're discussing, all have quite severe limitations in their ability to faithfully reproduce the luminance channel, let alone the chrominance one; and (b) the theory of color vision is not only complex and quite difficult to understand - it has now become even more so.

I will however finish with an idea I've had for years that could overcome many of the limitations of the tricolor (or any multicolor) system. It's completely theoretical, as the tech still doesn't exist. The idea would work like a wideband superheterodyne radio receiver. A UV laser as the local oscillator would beat with incoming light in a mixer to produce an IF (intermediate frequency). The more manageable IF would be bandwidth-compressed and eventually reconverted up-band to visible light, so the viewer is actually seeing a spectrum of visible light. Simply put, a TV camera in such a system would capture, say, light at 555 nm and that's what the viewer would eventually see - and not some mixture of RGB primaries. Picking coordinates on the CIE diagram would be a thing of the past.

Now, I know the objections will be many and all sorts of issues both technical and theoretical arise but there's little point in debating them here.

I've raised this for one reason. The whole problem we have with color reproduction starts with the fact that we cannot reproduce the eye's CIE with true accuracy and using any number of CIE coordinates will never completely solve the problem.

Eventually, we need to find a better way. Finding a way to accurately record the incoming wavelength of light in recorded images would also provide us with many other scientific and engineering advantages.


Metameric failure!


Agree that they haven't offered much in terms of how it could make displays more vibrant.

> At the end of the day, colors -- whether in a movie scene or a paint chip -- are ultimately chosen by eye, not by math

Well.... math is involved if you want to do even simple stuff like blending two colors in a paint program. There is subjective stuff, sure, but you aren't going to get very far in computer graphics, display manufacture or camera chip manufacture, if you just conclude that it is 100% subjective and there is no mathematical logic behind it.


It inherently doesn't make any sense, because the limits, the "bounds of space", of displays are physical. Changing your indexing system doesn't sprout new colors.


It very much does, as color is not a physical phenomenon. It's just perception - essentially a bunch of weights in your neural network that makes some internal conceptual sense. You can trivially go outside of those "physical limits" e.g. by exploiting temporal perception quirks to get impossible colors (not represented by any spectral combination in typical conditions).

The same can be said about all human perception in general, not just colors. Artists have exploited this for centuries, and are trying to formalize it. Newer color appearance models try to precisely quantify perceptual effects to make better color appearance possible.


This doesn't have any effect on my claim, that displays have a gamut that's bounded by how they physically create lights.

I'm not sure how to explain why further than that; it's not even wrong. Something I've definitely seen is that the more people know about color, the more complex it seems, when usually it's simpler once you start thinking across reference frames / viewing conditions.

Source: builds color spaces using color appearance models


Is there any example of using color chirps(?) in mainstream use? Interesting concept.


It is about rendering the colors of the scene within the bounds of the physical limits of the display device's color space.

Rendering to the display device screen uses perceptual intent: a conversion to the color space of the display under some assumptions about the screen viewing environment etc. [1]

With better models we can improve this conversion. The article mentions that large color differences are not perceived as acutely as the current (Riemannian etc.) models suggested, so a new model could boost the color difference to achieve that higher acuity.

[1] https://blog.colorgate.com/en/rendering-intent-explained
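To make the "perceptual intent" idea a bit more concrete, here is a toy sketch (my own illustration, not any ICC or CIE algorithm): instead of clipping an out-of-gamut color channel by channel, scale it toward a gray of the same luminance until it fits the display gamut, trading chroma for preserved lightness.

```python
# The [0, 1] cube stands in for the display gamut; Rec. 709 luma weights assumed.
def luminance(rgb):
    r, g, b = rgb
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def clip(rgb):
    return tuple(min(1.0, max(0.0, c)) for c in rgb)

def compress_toward_gray(rgb):
    y = luminance(rgb)
    t = 1.0  # largest factor such that gray + t * (rgb - gray) stays in gamut
    for c in rgb:
        if c > 1.0:
            t = min(t, (1.0 - y) / (c - y))
        elif c < 0.0:
            t = min(t, (0.0 - y) / (c - y))
    return tuple(y + t * (c - y) for c in rgb)

scene_color = (1.4, 0.2, -0.1)  # hypothetical out-of-gamut scene color
print(clip(scene_color))                  # hard clip: hue and lightness shift
print(compress_toward_gray(scene_color))  # same luminance, reduced chroma
```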


This does not create more colors or mean displays or fabrics could have more colors or anything like that

A mental model for color space that keeps things on track is that it's just like any 3D space. Here we could say "distance space" like "color space", and two "distance spaces" are miles and kilometers (two color spaces are RGB and $ONE_FOLLOWING_FROM_PAPER).

If we found the distance equation in distance space differed from expectations, due to interesting things about physical geometry, that wouldn't create new places.

Why? The map is not the territory. The physical space still exists as it did.

Moving from distance to color: the physical display gamut is the territory. The map is the color space.

If we discover our distance metric in the space is flawed, we'd measure distance between colors as different than before. That wouldn't create new colors in color space.

Source: I create color spaces


For example, I would imagine you could come up with a more perfect list of what the "pastel" color is for any given hue. Another example I remember from my early-00s UX days: if you take a gradient-based glass/glossy bar that is blue and try to hue-shift it to red, it won't look properly glossy without tweaking the relationships between the colors in the gradient. Having a perfect understanding of how our eyes perceive these types of color differences could, I would think, yield a perfect approach for transposing visual phenomena between different hues. I have no idea though; I'm by no means an expert on this.


Side note: if our eyes did perceive color in a Euclidean way, for example, the normal process of hue shifting something would actually work without needing any saturation/brightness/darkness adjustments afterwards.
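A minimal sketch of the naive hue shift being described, using HSV via the standard-library colorsys module (HSV chosen just as an example of a simple, non-perceptual space): saturation and value are held fixed, yet the shifted color is not perceived as equally bright, so manual adjustments are still needed afterwards.

```python
import colorsys

def naive_hue_shift(rgb, degrees):
    # rotate hue only; saturation and value are left untouched
    h, s, v = colorsys.rgb_to_hsv(*rgb)
    return colorsys.hsv_to_rgb((h + degrees / 360.0) % 1.0, s, v)

glossy_blue = (0.2, 0.4, 1.0)
shifted = naive_hue_shift(glossy_blue, 120)  # same S and V, different apparent lightness
print(shifted)
```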


I wonder if choices amongst metamers might have secondary effects we don't know about...

Like, given two renders of the same image with different metamer selections, are there possibly subconscious impacts on mood or feeling between them? Do different spectral envelopes that produce the same percept result in different demands or representations in the ventral stream, which then could (or may not) manifest in non-obvious ways?


I can't think of a mechanism by which this could happen, since the cone pigments will have the same absorption rates in either case. There's no signal that the brain could use to behave differently.


I just reread about the retina and I think you are correct.

I was under the false impression that the transfer function for each group of cones was a function of some processing in higher layers of the ventral stream.


Well this is a first for me. I don't think I've ever seen a paper published to PNAS that wasn't open access.


embargo should expire October 29, 2022: "PNAS is a delayed open access journal, with an embargo period of six months that can be bypassed for an author fee (hybrid open access). " https://en.wikipedia.org/wiki/Proceedings_of_the_National_Ac...


Huh. Well today I learned. I always thought it was just plain open access.


LOL I came to the comments to find the paper


Hopefully somebody puts it on scihub soon.


So, principally, the perceived luminance would be dependent on hue, so that each hue in the gamut, and each interrelationship, has its own relationship of perceived intensity as a function of powered intensity. I can only assume that's what they mean by second-order.

Now, is this in equilibrium viewing? What's really gonna cook these guys' noodle is the effect of pupillary action. Edit: and afterimages, for that matter.


"Now, is this in equilibrium viewing? What's really gonna cook these guys noodle is the effect of pupillary action."

Right - and more I reckon. In hindsight things seem so easy or fall easily into place and this is no exception.

The obvious thing with respect to this new 'color space' is its nonlinearity/'curvature'. We should have gleaned this (or taken more cognizance of existing observations) from the way the eye works in different circumstances - and, for that matter, from the fact that different eyes perceive colors in different ways, as Ishihara tests bear out. It should also have been obvious from the 'dynamical' way the eye perceives color.

Anyone who has done color balancing/grading or tried to match colors knows how easy it is to be fooled unless they've lots of calibration equipment around them, as the eye automatically adjusts color balance and such depending on lighting and circumstance (remember those shadow images that fool everyone).

There are other clues too: why do a small percentage of women have superior color resolution in the green spectrum (as if they had multiple green channels), and why does the degree of this capability also depend on conditions and circumstance (a seemingly nonlinear effect)? That's right, the signs were there, and it's good that we've now picked them up and are running with them.

That said, we cannot be too critical of that earlier work, in fact what was done was pretty monumental. After all, the 1931 CIE work was and still is quite remarkable and it's still serving us well today - 91 years on.

Like Newtonian mechanics and Relativity, every development inches us closer to the truth and/or provides us with better accuracy.


> There are other clues too: why do a small percentage of women have superior color resolution in the green spectrum (as if they had multiple green channels)

Apparently a small number of women have Tetrachromacy, where they have a 4th type of cone in their eye and can see a much broader range of colors, thanks to their genes. The gene isn't super-rare (I think one article claimed ~12% of women have the genetic condition for it), but the functional ability to perceive with all 4 types of cones is very rare. https://www.bbc.com/future/article/20140905-the-women-with-s... has a story about one confirmed case.


Right, tetrachromacy is proven, and I've learned the hard way never to argue with women who insist there are shades of color that I cannot see. Moreover, these women make excellent color graders (in film work, etc.), as their output is more consistent.

My comment was made in the general sense from a modeling perspective. The trouble is - as you've implied - things get very complex once one scratches the surface especially so with seeing and perception as one's interpretation is also subjective and we don't have an adequate language to describe what we see. That's why calibration and measurement are so important.

These difficulties also likely account for why there's been a 91-year delay between the '31 CIE work and this discovery.

Incidentally, I've just noticed that Adrian_b makes the point that the "non-Riemannian nature of perceptual color space" has been known for decades. In hindsight, I should have strengthened my earlier comment about the evidence behind this new work having been known about for some considerable time.


This was also covered on Radiolab, colors, super interesting episode. https://radiolab.org/episodes/211119-colors


When they talk about Riemannian geometry, is that related to the Riemann curvature tensor from general relativity? What’s the significance of being non-Riemannian? Is it just that (as the abstract says) “the distance between two colors is [not] the length of the shortest path that connects them.”?


A point in the color space is determined by 3 numbers.

Given 2 colors, i.e. 2 points in the color space, one can define a distance function from them to a number corresponding to the perceived color difference.

One can try various formulas for the color distance and compare them in experiments with humans who must assess which colors are more similar or more different. Such experiments are extremely time consuming and require many test subjects, to average over individual variability.

The problem is that a formula for a very good approximation of the color differences over the entire color space has not been found yet, even if there are many formulas that give acceptable results in certain cases, usually when the color differences are small.

The Euclidean distance does not give a good approximation, so the color space is not Euclidean. The distances corresponding to various Riemannian spaces (i.e. "curved" spaces) give better approximations, but which are still not good enough.

In fact the color distance is not even a distance in the mathematical sense, i.e. it does not satisfy the triangle axiom, so the color space is not a metric space. Perhaps there exists a non-linear transformation from the color space to a metric space.


"Riemannian spaces (i.e. "curved" spaces) give better approximations, but which are still not good enough."

Right, as I said above, this work shows we have nonlinear effects at work; whilst this was obvious in the past, we should have taken more notice of the fact. Whilst Riemann provides, say, an ideal mathematical model (the ideal case), as we observe, Nature doesn't always comply; it insists on adding 'distortions' of its own.

For the same reason these 'transfer' curves have nonlinear characteristics, it seems other significant interactions are at work, namely cross-coupling between color channels. We see this in color film emulsions, where color bleeds from one layer to another, and this effect is also nonlinear (if the bleed were linear across the transfer curve then we could simply bias it out, but it's not, so in practice it's nigh on impossible to correct).

In effect this cross-coupling is essentially equivalent to cross-modulation distortion in electronic circuits and the mixing can be modeled mathematically in a similar way.

No doubt it remains to be seen whether this new work has any significant bearing on this or not.


After reading your comment, deep learning came to mind. A quick search comes up with "Deep Metric Learning for Color Differences" by F Zolotarev:

https://lutpub.lut.fi/bitstream/handle/10024/157102/Deep%20M...

Looks like deep learning works
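For context, a very rough sketch of what "deep metric learning for color differences" can mean (my own toy illustration, not the architecture from the linked thesis; the training targets below are synthetic placeholders for real psychophysical data): learn a non-linear embedding f so that Euclidean distances between embedded colors match measured perceptual differences.

```python
import torch
import torch.nn as nn

class ColorEmbedding(nn.Module):
    """Non-linear 3 -> 3 embedding; Euclidean distance in the output space
    is trained to match measured color differences."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, 3),
        )

    def forward(self, rgb):
        return self.net(rgb)

model = ColorEmbedding()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# placeholder pairs and "measured" differences; real data would come from experiments
c1, c2 = torch.rand(256, 3), torch.rand(256, 3)
target = (c1 - c2).norm(dim=1)

for _ in range(200):
    pred = (model(c1) - model(c2)).norm(dim=1)
    loss = nn.functional.mse_loss(pred, target)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

One caveat, given the triangle-inequality discussion elsewhere in this thread: any distance of the form ||f(x) - f(y)|| automatically satisfies the triangle inequality, so an embedding-based model cannot reproduce genuine violations of it; a network that predicts D(x, y) directly doesn't have that constraint.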


A good question which I'd like to think about. Funny isn't it how things like Riemannian geometry, the Riemann curvature tensor, Ricci-Curbastro, etc. seem to pop up in disparate and unexpected places. I reckon Nature's trying to tell us something.


One hundred years ago, it would have been appealing to have a mathematical model of color perception that scientists could work from. The problem, of course, is that the systems of perception within humans and other mammals are very complex. There are large differences in color perception between individuals and, I suspect, even between genetic groups.

The visual system has many layers and a lot of neural processing goes on before we "see" what we are looking at. This system is not built out of a handful of components easily modeled by differential equations like an electronic circuit; it is built out of non-linear, stateful components (neurons) with at least millions of interconnects. A mathematical model that is intended to capture every nuance is bound to fail.

At best, in my uninformed opinion, we could simply do subjective surveys of perceptual color response and encode the averages and differences between individuals within standardized tables. It would then be simple for applications to make use of these tables for making and measuring systems intended for human use.

An illustration of the complexity of the visual system can be found in the wide range of visual illusions we are subject to, see [1] and [2]. Most of these are due to the visual system developing to interpret the visual world with very limited equipment, our human eyes. A purely mathematical model wouldn't be complete unless it too encompassed the strange optical illusions we all have. (I should point out that individuals have different responses to optical illusions.)

[1] https://www.sciencedirect.com/topics/neuroscience/visual-ill...

[2] https://en.wikipedia.org/wiki/Optical_illusion


I'm confused that anybody even uses a mathematical model of color. Human perception of color is certainly a matter of biology. I'd have expected this paper to be a study of retinal cell response saturation or some such, which is whatever it is, never mind famous mathematicians' opinions.


> Human perception of color is certainly a matter of biology.

Of course it is. The mathematics is a model of human biology (in terms of the effects of interactions with different colours of light).

Mathematical models of the underlying electromagnetic phenomenon known as light already existed in the 19th century, but that's very different from the problem at hand.


Mathematical models of color can serve several useful purposes, even if they're not entirely accurate. As long as one keeps in mind that at best, they are descriptive, there can be valid use cases.


It sounds like all this paper says is that the metric of the color spaces is non-linear, i.e. the perceived distance between two colors doesn't really agree with the Euclidean distance between 3D points representing them. I think this is more or less well-known.


They're saying it's not just non-Euclidean but also non-Riemannian. The distance between two points is shorter than the length of the shortest path between them.

For a simple example of such a space, imagine a circle where we define the distance between two points to be measured 'as the crow flies', but we demand that all paths remain on the circle rather than passing through its interior.
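Worked out explicitly for that circle example (a unit circle assumed):

```latex
% chord ("as the crow flies") vs. arc (shortest path that stays on the circle)
% for two points separated by central angle \theta on a unit circle
\[
  d_{\text{chord}}(\theta) = 2\sin\!\left(\tfrac{\theta}{2}\right)
  \;<\; \theta = d_{\text{arc}}(\theta)
  \qquad \text{for } 0 < \theta \le \pi ,
\]
```

so the defined distance is strictly shorter than the length of any admissible path, which is exactly the non-Riemannian behavior described above.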


That example would be similar to using as the crow flies for distances when driving on roads.


I've always found the printed book approximations of how people see colour fascinating. The amount of work a printer has to go through to achieve colour fidelity printing a continuously varying hue field in some specified geometry..

Now imagine doing this before PDF/Postscript.


Once again, we should never take a single study as proof of anything. Whether or not anything new has been learned will be demonstrated by repeatability, application of the new research, and time.


..Bernhard Riemann, Hermann von Helmholtz and Erwin Schrödinger — all giants in mathematics and physics — and proving one of them wrong is pretty much the dream of a scientist.

Nice :-)


"...and proving one of them wrong is pretty much the dream of a scientist."

What an odd and somewhat alarming statement!


Why do you think so?

People are famous throughout history because they leave behind (or are associated with) big achievements which form a "node" in the history of science. If you prove them wrong, the node you leave behind is also important. Scientists (and most people in general) want to do important work. There's also the scientific benefit that overturning a generally held idea usually precipitates a large amount of new and fruitful study afterwards.


I have always found the XKCD color survey of 2010 to be a most excellent reference in color perception:

https://blog.xkcd.com/2010/05/03/color-survey-results/

And still, nobody can spell fuchsia.


Think of the word Nokia. Now say foo - ksee - a.

On color perception, it is worth noting that it is culture-dependent. I've read that some indigenous peoples can name tens of different shades of green, while grouping together everything we would call red, orange, etc. into a single color.


I didn't see any acknowledgment of the effect computer monitors could have had in muddying the data. Some monitors represent colors differently. In fact if you don't use a gamma-corrected monitor then you have no chance at all.


It's easier when you remember that it is named after Leonhard Fuchs.


This was amazing to read as a colorblind person. The graph showing simple names was a feast, as I was trying to find out where my colour-blindness falls in the green/yellow region.

xkcd really rules.


Alternative link (looks like the original is down): https://phys.org/news/2022-08-math-error-overturns-year-old-...


I love that they fell down this rabbit hole while trying to choose colors for data visualizations. It supports my belief that 80% of all color science research is the result of extreme bikeshedding!


So Los Alamos is just going to rip off their own researchers now? Are reviewers at PNAS asleep?

Zeyen M, Post T, Hagen H, Ahrens J, Rogers D, Bujack R. Color interpolation for non-Euclidean color spaces. In 2018 IEEE Scientific Visualization Conference (SciVis)

> "Furthermore, human color perception is also non-Riemannian, due to the principle called diminishing returns [9], Figure 1"

These people in 2018 literally pointed out the same thing! Also from Los Alamos.

I'm starting to think HN should be renamed to "another sad day in science".


Can someone explain what impact this has on "perceptual" color spaces like Oklab? Is this just something they live with?


I hope the industry catches on quickly. New color spaces, new displays.


I am an ex-physicist and was always annoyed by the color ←→ wavelength mapping in school books. Then in biology class they come up with the retina cones that are supposed to detect only a specific wavelength (or "color").

First belly dancing exercise to break out of the mapping.

Unfortunately the cones are explained as e.g. the "red color cone" - and this leads to the second belly dancing exercise and handwaving about other colors (the ones not covered by the cones), using paint mixing as examples.

Color teaching at school is a complete shitshow. Fortunately this is not something that is important until you are in a relationship and have to paint a wall.

Or take the idiot eye doctor I had as a 10-year-old kid, who told my father in front of me that I was colorblind and would never do science. Fuck you, idiot, for saying words I remember to this day (as someone who loved science) - I would be so glad to make a copy of my engineering diploma and of my PhD in physics (and all the papers I wrote) and shove them up your ass.

I really resent such people who do not realize how easy it is to crush a kid's dream, similar to the doll vs truck.


403



