
I find it interesting that "according to my preferences" is pretty much exactly what you've trained. It _should_ be better according to your preferences, since you're not training it on actual contrast differences, but on whatever it is that you prefer.
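
To make that concrete, here's a hypothetical toy version of that training setup: a single logistic unit fed (r, g, b), with synthetic labels standing in for the human preference judgments the real model was trained on (the labels here are derived from luma purely so the example is self-contained):

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.random((1000, 3))  # random backgrounds, channels in [0, 1]
    # Stand-in "preference" labels: 1 = white text preferred, 0 = black.
    y = (0.299 * X[:, 0] + 0.587 * X[:, 1] + 0.114 * X[:, 2] < 0.5).astype(float)

    w, b = np.zeros(3), 0.0
    for _ in range(2000):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # predicted P(white text)
        grad = p - y                            # gradient of the logistic loss
        w -= 0.1 * (X.T @ grad) / len(y)
        b -= 0.1 * grad.mean()

    print(w, b)  # green ends up with the largest (negative) weight, echoing its luma coefficient

Whatever structure the preferences have is what the weights absorb, which is the point above: the model is "better" exactly by the yardstick it was trained on.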



Depends on how you define "contrast". If you define it as "difference in amount of energy emitted", then neither method is using "contrast". The YIQ method is using a simple perceptual model that assigns different coefficients to red, green and blue because of how bright we perceive each of those colors to be. This has to do with the distribution of photoreceptors in the typical human eye.
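
For reference, the YIQ rule boils down to a one-liner. A minimal sketch (the coefficients are the standard YIQ/NTSC luma weights; the 128 cutoff is the conventional midpoint for 8-bit channels, not anything specific to this discussion):

    def yiq_text_color(r, g, b):
        # Luma per the YIQ weights: green reads brightest, blue dimmest.
        y = 0.299 * r + 0.587 * g + 0.114 * b
        # Dark text on bright backgrounds, light text on dark ones.
        return "black" if y >= 128 else "white"

    print(yiq_text_color(255, 255, 0))  # yellow background -> black
    print(yiq_text_color(0, 0, 160))    # dark blue background -> white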

The neural network approach goes a step further and accounts for how the brain processes color. There's no reason to consider that notion of "contrast" any less valid than the brain-agnostic, eye-based model. In fact, in the context of readability, the psychovisual notion is far more useful.


One extra factor is that the YIQ method is presumably based on sRGB colours, whereas the neural network is using colours as the screen is actually displaying them (or rather, as they're being perceived), and most of us aren't using calibrated sRGB monitors.

W3C, in Web Content Accessibility Guidelines (WCAG) 2.0 [1] recommends a different and more complex algorithm [2] for calculating contrast between colours, which could be used instead for choosing white or black text. It would be interesting to see how the two approaches compare.

1. http://www.w3.org/TR/2008/REC-WCAG20-20081211/

2. http://www.w3.org/TR/2008/REC-WCAG20-20081211/#visual-audio-...
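
For the curious, the WCAG 2.0 formula from [2] is easy to try out. A quick sketch in Python (the function names are mine; the constants are from the spec: the sRGB linearization, the 0.2126/0.7152/0.0722 luminance weights, and the (L1 + 0.05) / (L2 + 0.05) ratio):

    def _linearize(c):
        # sRGB channel in [0, 1] -> linear light, per WCAG 2.0's definition.
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

    def relative_luminance(r, g, b):
        # Relative luminance of an 8-bit sRGB colour (0 = black, 1 = white).
        r, g, b = (_linearize(c / 255.0) for c in (r, g, b))
        return 0.2126 * r + 0.7152 * g + 0.0722 * b

    def contrast_ratio(c1, c2):
        # WCAG contrast ratio, from 1:1 (identical) up to 21:1 (black/white).
        l1, l2 = relative_luminance(*c1), relative_luminance(*c2)
        lighter, darker = max(l1, l2), min(l1, l2)
        return (lighter + 0.05) / (darker + 0.05)

    def wcag_text_color(bg):
        # Pick whichever of white/black text contrasts more with the background.
        white, black = (255, 255, 255), (0, 0, 0)
        if contrast_ratio(bg, white) >= contrast_ratio(bg, black):
            return "white"
        return "black"

    print(contrast_ratio((255, 255, 255), (0, 0, 0)))  # 21.0
    print(wcag_text_color((30, 90, 160)))              # white

Running this and the YIQ rule over the same palette of backgrounds would make the comparison suggested above easy to eyeball.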



