> And yet people seem to be hellbent to make it about AI.
Of course, because the alternative is to make it a human failure of critical actors in the justice system. Every story about mistaken identity has dozens of points where humans, police officers and magistrates, made a judgement call ... wrongly. And always obviously wrong.
That is the case here and in the linked article. You can clearly see the eyes don't match, the jaw is different, ... and in Steve's case you see the face is too small, his nose is much narrower than the robber's, the hairline doesn't match, Steve has a square face and the robber doesn't, his body shape (the shoulders) is very different, and Steve's ears are almost pointy, whereas the robber's are much more rounded, and much shorter than Steve's. It is not a reasonable mistake for the 10-15 people involved to make.
In other words, the inevitable conclusion is that:
1) the police and prosecutor as well as at least 1 judge knew they had the wrong guy
2) they all cooperated to use their power to extract a wrongful confession from the guy, including that judge
3) they used "testimony" from someone with a clear grudge against him without question
4) which additionally was not reasonable given the bad quality images
5) they refused to believe testimony when it disagreed with their working hypothesis
6) instead, they used psychological torture to force a confession from an innocent
7) essentially, they refused to set a person free without being offered another victim
Clearly, at the very least they've shown that, both as an organisation and as the individuals in it, they would much rather wrongfully convict an innocent person than be left without suspects or leads. They weren't protecting the bank, society, or anyone else; they were VERY clearly abusing and violating the law to protect themselves from embarrassment.
People just can't deal with this. That police will fight to get someone, anyone, convicted at all costs rather than make damn sure they got the right guy, as the law demands. That they do this regardless of the cost to the suspect and to society is not something most people are willing to consider ...
It’s a failure in the sense that people put too much confidence in this kind of algorithm and place it above any eyewitness, when it should just be considered another piece of evidence.
Also, it is not merely a failure of people (in putting too much confidence into these algorithms). These systems are often actively marketed that way (fine print about culpability notwithstanding).
I would expect algorithms like that to put out likelihoods, not hard identifications. All it does is say "this person looks like that person". I wouldn't read that as "wrong".
Facial recognition systems are image classifiers where the classes are persons, represented as sets of images of their faces. Each person is assigned a numerical id as a class label and classification means that the system matches an image to a label.
Such systems are used in one of two modes, verification or identification.
Verification means that the system is given as input an image and a class label, and outputs positive if the input matches the label, and negative otherwise.
Identification means that the system is given as input an image and outputs a class label.
In either case, the system may not directly return a single label, but rather a set of labels, each associated with a real-valued number interpreted (by the operators of the system) as a likelihood. Even then, however, the system has a threshold delimiting positive from negative identifications. That is, if the likelihood that the system assigns to a classification is above the threshold, that is considered a "positive identification", etc.
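The two modes and the thresholding step above can be sketched in a few lines. This is a hypothetical illustration, not any vendor's actual pipeline: it assumes faces have already been reduced to embedding vectors (e.g. by a neural network), uses cosine similarity as the score, and picks an arbitrary threshold of 0.8.

```python
import numpy as np

def cosine_similarity(a, b):
    """Similarity score in [-1, 1] between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify(probe, gallery, label, threshold=0.8):
    """Verification: does `probe` match the images enrolled under `label`?
    Returns a hard positive/negative, even though the score is continuous."""
    scores = [cosine_similarity(probe, img) for img in gallery[label]]
    return max(scores) >= threshold

def identify(probe, gallery, threshold=0.8):
    """Identification: return (label, score) pairs above the threshold,
    best match first. An empty list means no "positive identification"."""
    results = []
    for label, images in gallery.items():
        score = max(cosine_similarity(probe, img) for img in images)
        if score >= threshold:
            results.append((label, score))
    return sorted(results, key=lambda r: r[1], reverse=True)
```

The point of the sketch is the last step: whatever continuous scores come out, the threshold converts them into a hard positive or negative, and that binary answer can simply be wrong.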
In other words, yes, a system that outputs a continuous distribution over classes representing sets of images of peoples' faces can still be "wrong".
Think about it this way: if a system could only ever signal uncertainty, how could we use it to make decisions?
At some point they even have a human compare the likeness, and the human also concludes it is the same person.
The article even features the sentence "Steve Talley is hardly the first person to be arrested for the errors of a forensic evaluation."
And yet people seem to be hellbent to make it about AI.