
Context (2018): https://www.wired.com/story/when-it-comes-to-gorillas-google...

"Google promised a fix after its photo-categorization software labeled black people as gorillas in 2015. More than two years later, it hasn't found one."

Companies do seem to have developed greater sensitivity to diversity blind spots in their datasets, so the parent comment might not be totally out of line to bring it up.

IBM offloaded its domestic surveillance and facial recognition services following the 2020 BLM protests, when law enforcement interest sparked concerns about racial profiling and abuse, due in part to low accuracy on darker-skinned subjects. Apple's face unlock, meanwhile, famously couldn't tell Asian users apart.

It's not outlandish to assume that special effort has been made to ensure that the datasets and evaluation for newer models don't ignite any more PR firestorms. That's not to claim Google's classification models have anything to do with OpenAI's multimodal models, just that until relatively recently, models from more than one major US company struggled to correctly identify some individuals as individuals.


