
One extra factor is that the YIQ method is presumably based on sRGB colours, whereas the neural network is using colours as the screen is actually displaying them (or, as they're being perceived), and most of us aren't using calibrated sRGB monitors.

The W3C, in the Web Content Accessibility Guidelines (WCAG) 2.0 [1], recommends a different, more complex algorithm [2] for calculating contrast between colours, which could be used instead for choosing white or black text. It would be interesting to see how the two approaches compare.

1. http://www.w3.org/TR/2008/REC-WCAG20-20081211/

2. http://www.w3.org/TR/2008/REC-WCAG20-20081211/#visual-audio-...
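A minimal sketch of the two heuristics side by side (function names and the conventional 128 threshold for the YIQ rule are my own choices, not from either spec). The YIQ rule thresholds a weighted brightness of the raw sRGB values; the WCAG approach first linearises the sRGB channels into a relative luminance, then picks whichever of black or white text yields the higher contrast ratio:

```python
def yiq_text_color(r, g, b):
    # YIQ brightness heuristic on raw 0-255 sRGB values; 128 is the
    # commonly used threshold for switching between black and white text.
    return "black" if (299 * r + 587 * g + 114 * b) / 1000 >= 128 else "white"

def relative_luminance(r, g, b):
    # WCAG 2.0 relative luminance: linearise each sRGB channel,
    # then apply the luminance weights.
    def lin(c):
        c /= 255
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    return 0.2126 * lin(r) + 0.7152 * lin(g) + 0.0722 * lin(b)

def contrast_ratio(l1, l2):
    # WCAG contrast ratio: (L_lighter + 0.05) / (L_darker + 0.05).
    hi, lo = max(l1, l2), min(l1, l2)
    return (hi + 0.05) / (lo + 0.05)

def wcag_text_color(r, g, b):
    # Choose whichever of black (luminance 0) or white (luminance 1)
    # text has the higher contrast against the background.
    bg = relative_luminance(r, g, b)
    return "black" if contrast_ratio(bg, 0.0) >= contrast_ratio(bg, 1.0) else "white"
```

The two agree on obvious cases but can disagree in between: on pure red (255, 0, 0), for instance, the YIQ rule picks white text while the WCAG contrast ratio slightly favours black.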



