It might output a much more detailed image than a human-drawn sketch, which could be less useful, or more damaging, than the vague sketch.
Imagine a police officer looking for someone matching the image who doesn't know it was hallucinated from a vague description: they could let the real suspect go, or incorrectly arrest someone who happens to look like the AI-generated image but otherwise has no reason to be a suspect.
Police are already greatly overestimating the accuracy of their own facial recognition tools because they don't realize the limits of the technology, and this would just be worse.
Accountability for what? As I recall, procedure already requires approval of the final sketch by the witness. Witnesses can always make mistakes, but that's true even in the current process. Or is your argument that sketches should never be used?
In fairness, with the ubiquity of cameras, sketches are much less required...
Our hypothetical AI won't make any decisions. It just makes sketches as described and approved by witnesses. The relevant racism here is the one any witnesses may have, that's true even with a human police sketch artist.
"as described" according to what? There is simply no way to create image from words without something closely resembling decisions. Maybe "it" won't "make" those decisions, but they will be made somewhere.
Yea, somebody will have to evaluate whether the image matches the words, and that is currently done by the witnesses themselves. How is it worse than the current state?
In the racism-as-individual-intentional-malice framework sure. But I'm a consequentialist on this one. If it causes disparate & unjust outcomes mediated by perceived race then describing it as racist makes sense. No intent necessary.
No one is arguing that the AI has some sort of intentional racism and inherent real intelligence - they aren't trying to anthropomorphize it.
The argument is that the output is racially discriminatory for a variety of reasons and it's easier to just say "it's racist" than "Many of the datasets that AI is trained on under- or over-represent many ethnic groups" and then dive into the details there.
Well, maybe it will be a plus then for minorities that all of the training data is of white people. I only joke, as this is a horrible idea all around, but I appreciate your creativity.
Not an expert, but my intuition is that most of a sketch artist's job is asking the right questions. I would assume that most people would have trouble describing close friends or even their partners from memory.
Somewhat tangential: the "part of the brain" that is responsible for recognizing faces is incredibly well developed. That "peek-a-boo" game that you play with children? Every time you uncover your face, millions of neurons in the child's brain suddenly fire, giving them a jolt of "joy".
Face recognition is so developed that we tend to see faces where there are none (face pareidolia).
... the point being that the brain does a lot of unconscious work recognizing individuals. Describing those individuals later consciously is pretty error prone.
As I understand it, most police departments already use some kind of computer-aided facial composite software instead of a traditional sketch artist. I can think of several dystopian reasons throwing AI into the mix might not be great, but the larger problem with this is: why does it need to be sketched in pen, and why does it need to be cute?
Might make a neat coin-op caricature thing, though.
I doubt it would change anything from what they do currently with police sketches; it would just be a faster, more accurate version. It's still just one piece of data they have to work from. The victim could describe the person to an AI, and it would update the 3D model on the fly.
"White Male, Curly hair, mole on face"
Generate.
"Good, but he had a larger nose, and blue eyes."
Generate.
"He was a bit more gaunt, and had some stubble."
Generate.
"Nearly there. More pronounced check bones, and make the jaw a bit softer"
Generate.
In 5 minutes or less, you could get a near-exact picture of the potential criminal; something that might take an hour or more with a professional police sketch artist, and it could easily be in 3D too. There's tremendous value in that.
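The back-and-forth above is essentially a prompt-refinement loop. A minimal sketch of that loop, assuming a hypothetical `generate_image` function standing in for a real text-to-image model call:

```python
# Hypothetical sketch of the iterative witness-refinement loop described above.
# generate_image is a stand-in, NOT a real API: a real system would call a
# text-to-image model here and return image data.

def generate_image(prompt: str) -> str:
    # Placeholder that just echoes the prompt so the loop is runnable.
    return f"<image for: {prompt}>"

def refine_sketch(initial_description: str, refinements: list[str]) -> str:
    """Fold each round of witness feedback into the prompt and regenerate."""
    prompt = initial_description
    image = generate_image(prompt)
    for feedback in refinements:
        prompt = f"{prompt}; {feedback}"  # accumulate corrections
        image = generate_image(prompt)
    return image

result = refine_sketch(
    "White male, curly hair, mole on face",
    [
        "larger nose, blue eyes",
        "more gaunt, some stubble",
        "more pronounced cheekbones, softer jaw",
    ],
)
print(result)
```

The design choice that matters is accumulating feedback into one prompt rather than regenerating from each correction alone, so earlier details ("curly hair, mole on face") aren't lost between rounds.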
So, this is pretty much backwards from how police sketches actually work and it would likely obliterate any reliability from the system (which, as I understand, is very low already - and even worse for computer-generated imagery).
People have bad memories and bad perception in stressful situations. They don't actually know what the person looked like; they don't have a strong model in their brains. Police sketchers use clever questioning techniques to get details about features that people wouldn't otherwise think to describe or even realize they have knowledge about. The truth is that there is an absolute limitation to the effectiveness of any facial image reconstruction, which is the limits of human memory. Adding AI to the mix can't change that, but it's extremely likely to influence the witness into describing a less accurate face with higher confidence. In other words, a disaster.
There is probably some "wisdom in crowds" for identifying a suspect. For example, one person usually can't estimate the number of gumballs in a jar, but some studies have shown that if you survey 100 people you get a very accurate number. Maybe you need far fewer than that... 2-3 people + AI perhaps comes up with a reasonably accurate estimation of reality.
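The gumball effect is just averaging: individual errors partly cancel out. A toy illustration (the guesses and the "true" count below are made-up numbers, not data from any real study):

```python
# Toy "wisdom of crowds" demo: the mean of many noisy guesses tends to be
# closer to the truth than the worst (and often most) individual guesses.
guesses = [650, 900, 720, 810, 540, 1000, 760, 680, 870, 590]
true_count = 750  # hypothetical actual number of gumballs in the jar

crowd_estimate = sum(guesses) / len(guesses)
crowd_error = abs(crowd_estimate - true_count)
worst_individual_error = max(abs(g - true_count) for g in guesses)

print(crowd_estimate)           # 752.0
print(crowd_error)              # 2.0
print(worst_individual_error)   # 250
```

Whether this carries over to faces is doubtful, though: gumball errors are independent and symmetric, while eyewitness errors from a shared stressful event tend to be correlated, which is exactly when averaging stops helping.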
Sadly, events are far more complex than counting items. For example, during the Columbine shooting, many students thought there were 4 shooters (while there were only 2), because at some point one of them removed their hoodie and the other turned their baseball cap backwards. The police thought there were shooters on the second floor because of an optical illusion.
These types of problems are very widespread - it's not rare that people misremember details because of stress and trauma, and it's also well known that the process of describing/asking questions can introduce bias into the victim's account, as seen in the many cases of people admitting to crimes they didn't commit after long interrogations.
I've also heard that the quality of police sketches is highly dependent on the person making the sketch; some artists have a high correlation rate, but that is not the norm, i.e. the average sketch artist might not be reliable.
JCS Psychology on Youtube is a great channel showing the processes happening during interrogations, if you're interested.
This is basically the plot of "The Net" starring Sandra Bullock. A group of hackers steals her identity and creates a new one for her in various systems to cause the police to believe she is a wanted felon.
It's intriguing, because I wonder how this would affect police work. I'm imagining things here, but I assume that when a profile sketch is developed, all officers using that image know that it's "just a sketch" because it looks like a drawing, because it is one.
So what happens if you now generate a photorealistic "sketch" based on a description? Are officers going to be sufficiently aware to know that's not an actual photo of the guy they are looking for, and act accordingly? Or is it going to heavily bias a manhunt effort? Moreover, what happens when the photo randomly ends up close to someone present in the dataset?
The police already know those sketches are super fake lol. The point of those isn't to arrest the right person, it's to have an excuse to hassle, arrest, or maybe kill a minority.
I know that would solve the problem. What I'm curious about is what happens if that isn't done, because I can see promoting the idea of a "photorealistic AI sketch artist" as a very attractive proposal for a certain type of manager. It's just a thought experiment.
3D printed case. Resting on a table. Witness describes the suspect. 10 seconds later it prints out an AI generated ink sketch.
I mean why not? Sell it through to every police station in your state. You can even put a cute little police badge emblem on the case.