Western Academia Helps Build China’s Automated Racism (codastory.com)
59 points by sexy_seedbox on Aug 8, 2019 | 21 comments



I hate to be that guy/girl, but aren't all facial recognition algorithms by definition racist? In the case of China, the real issue seems to be that the racism is an open feature of the product rather than something carefully cloaked in bland PR-speak.


Yes, race is a feature of facial recognition. Is that racist? This algorithm, though, seems to be specifically built to detect faces that are not ethnically Chinese, in order to further discrimination.


China's problem has always been how to maintain control through the ebb and flow of history. When centralized governments fail in China, they fail HARD. (This is in no way a moral defense of what they do.)

With mounting economic pressure from slowing internal growth and n-times-burned global trading partners, the current regime is up against the wall. The one BIG difference this time is that AI makes policing dramatically more effective. Previously, the available (hard-to-scale) tools were marshaling paramilitaries and corralling local warlords. They were not the best tools, because when the central government fails, the local forces it empowered have enough strength to wreak havoc.

AI is so powerful in China because AI is mostly bound by dataset size (at our current stage of research). And if there's one thing that's cheap for the CCP to get, it's training data.

If there was ever a time for dynasty building, it is now. The CCP may prove the most resilient dynasty yet, thanks to AI.


Exactly. Chinese researchers lead CVPR and every other CV conference. Before long, China will lead AI as well.


That mortgage has to be paid.


There are almost no ethical uses for facial recognition. It is a technology for criminals.


Yet literally every one of us is born with this capability.


Can you remember millions of faces? Can you watch video from millions of cameras simultaneously and build routes and profiles for everyone they capture? Can you look at a person and find their social network profile in a second? Sorry, but I doubt your capabilities can be even remotely compared to software's.


What about unlocking my front door with a facial recognition lock? Are you telling me that's not ethical?


Exactly so. A face is an identifier, not an authenticator.

No biometric should ever unlock any lock by itself. It should only determine which thing-you-know or thing-you-have is usable for unlocking that lock.

So if your front door recognizes your face, it decides that the RFID fob you typically keep in your pocket is good enough to unlock whenever that gets close enough to read, or that hearing the vocal sounds for "rezrov" or "open up" is an acceptable key.

Otherwise, a housebreaker could take some photos with a telescopic lens and print your face onto a mannequin head with hydro-dip printing or silicone-pad transfer printing, and unlock your house with it. Forever. Or at least until you get a new face.
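To make that concrete, here's a minimal sketch (in Python) of a lock controller where the face only selects which second factor is acceptable. KNOWN_FACES, the fob IDs, and the identify_face callback are all hypothetical stand-ins, not any real product's API:

    # "Face as identifier, not authenticator": recognition only selects
    # which thing-you-have may open the lock; it never opens it alone.
    KNOWN_FACES = {
        "alice": "fob-3F9A",  # Alice's face enables only Alice's RFID fob
        "bob": "fob-77C2",
    }

    def try_unlock(identify_face, camera_frame, presented_fob_id):
        person = identify_face(camera_frame)  # e.g. "alice", or None
        if person is None:
            return False  # unrecognized face: nothing gets enabled
        # The face never unlocks the door; it only names the required fob.
        return presented_fob_id == KNOWN_FACES[person]

Under that scheme, the mannequin-head attack gets the housebreaker nothing by itself; they still need the matching fob.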


The technology doesn't exist yet, but I'm talking about facial recognition that is on par with, if not better than, the human eye.

Such a technology would be on par with a lock, because it is basically impossible for any human to imitate another human to the degree that they could murder someone and go on living his life. It's easier to copy a key than it is to reproduce a fully identical face.

Twins are the exception to all of this, but with facial recognition devices that beat humans, telling them apart remains possible, as no two faces are truly alike.


Regardless, it's an ethical use case for facial recognition.


No, it is not, because it does not work, and presenting a non-working solution as a working one is not ethical.


Doubly moot.

A use case isn't a functional product. And the unethical application or misapplication of a technology does not obviate its ethical use cases.

I concur that replacing usernames (and, by extension, hardware identifiers), rather than logins, is of relatively limited benefit, but it's a use case nonetheless.


Yes, the technology itself cannot be ethical or unethical. Ethical determinations require consideration of intent, which machines do not [currently] have. And they often also require analysis of failure modes due to deception or subversion.

It is clear to me that a door-security system that included facial-recognition as an optional security-multiplier, and that did not willingly share its face data with any other person, device, or system, and that allowed the owner of a face to remove it from the system at will, would be an ethical use of the technology. But then again, if the system has to still work without the facial recognition in order to be ethical, you might as well leave it out and save the cost.

Facial recognition is most useful when the intended use is least ethical.


Regardless of whether it's ethical or not, because I'm not qualified to answer that question:

Why do you want to treat your face and/or other biometric information as a password? Isn't it closer to a username?


A username that is waaaay more difficult (if not impossible) to reproduce than any kind of password, even one known to the attacker (assuming facial recognition is implemented properly, with full 3D face scanning and not just a flat image).


There are no ethical uses for facial recognition that we couldn't serve with better technologies, both older and newer.


Good. You have found a rare example of a legitimate use for facial recognition.


Let's say I fully agree with you; now what?

Ban K-Nearest-Neighbor?
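The matching step in a typical face-recognition pipeline is, after all, just nearest-neighbor search over embedding vectors. A minimal sketch, where the embeddings are random stand-ins for a trained model's output:

    import numpy as np

    # Face matching as plain nearest-neighbor search over embeddings.
    rng = np.random.default_rng(0)
    gallery = rng.normal(size=(1000, 128))        # 1000 enrolled faces, 128-d each
    names = [f"person_{i}" for i in range(1000)]  # hypothetical labels

    def match(probe, threshold=1.0):
        # Return the closest enrolled identity, or None if nothing is near.
        dists = np.linalg.norm(gallery - probe, axis=1)
        best = int(np.argmin(dists))
        return names[best] if dists[best] < threshold else None

    probe = gallery[42] + 0.01 * rng.normal(size=128)  # a slightly noisy re-capture
    print(match(probe))                                # -> person_42

There's nothing in there you could meaningfully ban without banning vector math; any regulation has to target the data and the deployment, not the algorithm.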


Make face models equivalent to PII and restrict collecting, selling, and exchanging them. If you don't do this, there will be cameras on every corner collecting and selling information about your habits, because merchants are desperate to obtain it.



