
Are the creators of these algorithms penalized for their false positives? I think the confidence levels returned by these algorithms would be calibrated quickly if there were a strict penalty for high-confidence false positives.
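For concreteness, a proper scoring rule like log loss is one way such a penalty could work: it punishes confident errors far more than hesitant ones. A minimal sketch in Python (the function and the numbers are illustrative, not from any real system):

    import math

    def log_loss_penalty(confidence: float, correct: bool) -> float:
        # Probability the system assigned to what actually happened.
        p = confidence if correct else 1.0 - confidence
        # Clamp to avoid log(0) on a 100%-confident error.
        return -math.log(max(p, 1e-12))

    # A 99%-confident false positive costs ~5x a 60%-confident one:
    print(log_loss_penalty(0.99, correct=False))  # ~4.61
    print(log_loss_penalty(0.60, correct=False))  # ~0.92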


I would prefer a strict penalty for people who rely so heavily on their tools that they can't even do their jobs correctly. I'd fire a carpenter if an AI told the carpenter to fuck up my cabinet, because I expect my carpenter to double-check. Similarly, if the carpenter finds the AI consistently useless and an avenue to get their ass sued, then they'll decline to purchase shit AI.


I agree. A penalty on the user of AI would in turn incentivize the user to penalize or reward the AI developer. But I'm not sure it's reasonable to require every user to be able to assess the long-run accuracy of a statistical system given their limited interaction with it. Maybe there's a market for companies that reliably assess and certify AI systems, or for a sort of prediction market that penalizes overconfident false positives.
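One sketch of what such an assessor could compute: bin the system's matches by stated confidence and compare the claim to the observed hit rate. The data format and the 10%-wide bins here are assumptions, not any real certifier's method:

    from collections import defaultdict

    def calibration_report(results):
        # results: iterable of (stated_confidence, was_correct) pairs.
        # A well-calibrated system's 90%-confidence matches should be
        # correct about 90% of the time.
        bins = defaultdict(lambda: [0, 0])  # bin start -> [correct, total]
        for conf, correct in results:
            b = min(int(conf * 10), 9) / 10
            bins[b][0] += int(correct)
            bins[b][1] += 1
        for b in sorted(bins):
            correct, total = bins[b]
            print(f"claimed {b:.0%}+: actual {correct / total:.0%} "
                  f"over {total} matches")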


Why should they be?

This system is saying, "of the 10 million people we have in our database, this one looks like the photo you gave us."

It is the police and prosecutor who are asserting, "this person did it and should be held in jail."

This tool is the visual equivalent of looking up someone's name; relying on its output alone is like treating the no-fly list as though no two people share the same name.
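Mechanically, that "this one looks like the photo" step is a nearest-neighbour search over face embeddings: the system returns a ranked shortlist with similarity scores, not an identity assertion. A minimal sketch, assuming the embeddings are already computed (the names and data layout are hypothetical):

    import numpy as np

    def top_candidates(query, gallery, k=5):
        # gallery: list of (person_id, embedding) pairs. Returns the
        # k most similar entries -- "these look alike" -- which is a
        # lead for investigation, not a claim of identity.
        scores = []
        for person_id, emb in gallery:
            # Cosine similarity between the two embeddings.
            sim = float(np.dot(query, emb) /
                        (np.linalg.norm(query) * np.linalg.norm(emb)))
            scores.append((sim, person_id))
        scores.sort(reverse=True)
        return scores[:k]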


That might be what it actually does, but if they are marketing their product as "facial recognition", then that is not what they are claiming it does.



