Adding this to my running list of algorithms encoding and amplifying systemic bias, like:
* A hospital AI algorithm discriminating against Black people when providing additional healthcare outreach by amplifying racism already in the system. https://www.nature.com/articles/d41586-019-03228-6
* Misdiagnosing people of African descent with genomic variants misclassified as pathogenic due to most of our reference data coming from European/white males. https://www.nejm.org/doi/full/10.1056/NEJMsa1507092
* When AI criminal risk prediction software used by judges in deciding the severity of punishment for those convicted predicts a higher chance of future offence for a young, Black first time offender than for an older white repeat felon. https://www.propublica.org/article/machine-bias-risk-assessm...
And here's some good news though:
* When police wrongfully arrested a person based on faulty facial recognition match using grainy security camera footage, without any due diligence, asking for an alibi, or any other investigation. https://www.npr.org/2020/06/24/882683463/the-computer-got-it...
* When clinical algorithms include “corrections” for race which directly raise the bar for the need for interventions in people of color, such that they then receive less clinical screening, less surveillance, fewer diagnoses, and less treatment for everything, including cancer, organ transplants, birth interventions, and urinary, blood, bone, and heart disease. https://www.nejm.org/doi/10.1056/NEJMms2004740
Maybe you should make multiple lists. There are some overtly racist things, like redlining was and gerrymandering can be, but a lot of those things appear to be natural accidents, or just a matter of looking at people without regard to race until some physical reason (sickle-cell anemia, say) gives you one.
For instance:
> * Misdiagnosing people of African descent with genomic variants misclassified as pathogenic due to most of our reference data coming from European/white males.
It's obviously caused by studying the people who present, and then demographics changing. Nobody made a decision here with any ill intent. Both the hospital and the insurance company share the patient's interest in them getting better and are already looking for better data.
> * When AI criminal risk prediction software used by judges in deciding the severity of punishment for those convicted predicts a higher chance of future offence for a young, Black first time offender than for an older white repeat felon.
A higher chance of any offense, or of being a burden to society? Kids are more likely to commit smaller crimes, and those are a gateway to larger crimes that then put you away until you're a hardened repeat felon. If black kids were being enticed into crime we certainly want to know about it. If it's a subpopulation rather than an area the causes are likely different and good decisions only come from good data.
I'm not sure how you think my list would bifurcate into multiple lists.
> It's obviously caused by studying the people who present, and then demographics changing. Nobody made a decision here with any ill intent. Both the hospital and the insurance company share the patient's interest in them getting better and are already looking for better data.
Not a single example I gave was the result of ill intent. That's literally the point of my list. There's a difference between systemic bias and intentional bias. These are examples of systemic bias.
> A higher chance of any offense, or of being a burden to society?
I provided the link which answers your question. In this case, it predicted a higher likelihood of future crime for a kid who attempted to steal someone's bike and scooter, than for an adult who had shoplifted, been previously convicted of armed robbery, and served 5 years in prison already.
> Kids are more likely to commit smaller crimes, and those are a gateway to larger crimes that then put you away until you're a hardened repeat felon.
Your logic seems to acknowledge that one of the large drivers of crime is being "put away" in prison. In this case, the algorithm predicted a higher likelihood of committing future crimes for a kid than for someone who had already served 5 years in prison for armed robbery. Even worse is that the algorithm was being used to determine the severity of their respective punishments. So, the act of predicting a higher likelihood of future crime for the kid becomes a self-fulfilling prophecy, giving her a harsher sentence, which in turn is more likely to drive her toward future crime. This is systemic bias in action.
> I'm not sure how you think my list would bifurcate into multiple lists.
Because the two things I pulled from your list aren't systematic. One was local to a specific health system and about a specific condition, and the other was about kids from a certain area. Neither was just about black people in general. Your list is predominantly racial and I thought that was the point. The medical one is really about any group not being recognized as different enough from the whole; women get this a lot, and even men do in areas where women are the primary sufferers of something (breast cancer, for instance).
Also, to distinguish things that are intentional (redlining) vs absolutely unforeseen but still problematic. A few absolutely undeniable things, and they exist, would make the list more meaningful and perhaps give more weight to the rest which are a bit ambiguous.
> There's a difference between systemic bias and intentional bias. These are examples of systemic bias.
No, they're an example of non-systemic bias. They aren't spread broadly across any system, or symptomatic of a general attitude. Systematic bias doesn't mean that the majority gets more attention, it means that the entire system is biased against something, and you aren't showing that.
> Your logic seems to acknowledge that one of the large drivers of crime is being "put away" in prison.
It is a separate topic, but yes. Being put away unjustly, and in the wrong place or severity for your crime hurts. But not serving a sentence for crime is just as bad, if not worse. I'm not against jailing criminals, just doing it where it doesn't serve us or them.
> In this case, the algorithm predicted a higher likelihood of committing future crimes for a kid than for someone who had already served 5 years in prison for armed robbery.
Could be. Without knowing the severity of the crime predicted though it's kind of meaningless.
> Even worse is that the algorithm was being used to determine the severity of their respective punishments.
Well, without knowing the prediction you can't know if a stiffer sentence would have encouraged or deterred future crime. If they were predicted to be killers, a hard but fair sentence at the first crimes could turn them around. If they're predicted to steal a few cars then even a year in prison is probably going to make them worse.
> So, the act of predicting a higher likelihood of future crime for the kid becomes a self-fulfilling prophecy, giving her a harsher sentence, which in turn is more likely to drive her toward future crime. This is systemic bias in action.
It would be, if it were system wide. It was a specific population though, and there's still no indicator that it said black kids specifically other than because in those neighborhoods they were the ones at risk. fwiw, stealing scooters from children doesn't seem like the path to good behavior.
The systematic risk is using AIs for this purpose on anyone at all, not that they targeted black people more in some cases. That's essentially random.
> Your list is predominantly racial and I thought that was the point.
Ah, no, that wasn't the point. They were just examples of systemic bias. The fact that so many are racial is just a reflection of the unfortunate state we find ourselves in.
> A few absolutely undeniable things, and they exist, would make the list more meaningful and perhaps give more weight to the rest which are a bit ambiguous.
I honestly don't know how any of the examples posted are deniable; they're all well founded in research and evidence. But you're right that adding more to the list can only help.
> Systematic bias doesn't mean that the majority gets more attention, it means that the entire system is biased against something, and you aren't showing that.
That's not what systemic bias means.
"Systemic bias, also called institutional bias, and related to structural bias, is the inherent tendency of a process to support particular outcomes."
Also, examples are evidence that support a theory (of systemic bias, in this case). Any one data point is rarely intended to be absolute proof of that theory, much less a description of its entire scope. All of these examples do provide evidence of systemic bias within their systems without claiming to describe or prove its full extent.
> But not serving a sentence for crime is just as bad, if not worse.
The article I linked explains that this is not what the algorithm was used for. It was not used to decide whether or not to sentence them, it was used to help decide the severity of their sentence.
> If they were predicted to be killers, a hard but fair sentence at the first crimes could turn them around. If they're predicted to steal a few cars then even a year in prison is probably going to make them worse.
You seem to be agreeing that this was in fact a badly biased algorithm, since it was recommending a stiffer punishment for a kid who attempted to steal a bike than for a repeat felon, previously convicted of armed robbery, who shoplifted.
> It would be, if it were system wide. It was a specific population though, and there's still no indicator that it said black kids specifically other than because in those neighborhoods they were the ones at risk. fwiw, stealing scooters from children doesn't seem like the path to good behavior.
If something affects a population, that is system-wide by definition, since the definition of "system" is pretty broad; see the description above. Regardless of whether or not you think stealing scooters is a good prediction, the fact here is that it rated the likelihood of future crime as higher than for someone who was already a career criminal.
> The systematic risk is using AIs for this purpose on anyone at all, not that they targeted black people more in some cases. That's essentially random.
If only. Unfortunately, this is seen over and over again, more so than would happen by randomness. That's the point of the list.
> They were just examples of systemic bias. The fact that so many are racial is just a reflection of the unfortunate state we find ourselves in.
Do you think racial cases of bias outweigh other demographic biases? And what's your opinion of the general level of intent (or lack of, to fix) relative to other issues?
The medical one made me think about the similar case of differing heart attack symptoms in men and women, which in that case actually happened because doctors just focused on men.
> That's not what systemic bias means.
> "Systemic bias, also called institutional bias, and related to structural bias, is the inherent tendency of a process to support particular outcomes."
That just shifts the definitional question to the scale of the institution, structure, or process.
At one end, if everyone follows the same rules it's a system. But what if one clinic got bad data? Do we write that up as a systematic failure? That's okay if it's your definition, but cases at the other end seem a lot more important and a lot more amenable to systematic fixes.
> If something affects a population, that is system-wide by definition, since the definition of "system" is pretty broad
Is the definition, "a system is broad" or (I think you mean) "broad as in flexible", such that a system can be anything? Any micro or macro population?
I guess where I was going with this is that the list seemed to be about some things that are very broadly impactful but not specifically racist, like facial-recognition not recognizing anyone but white men very well, but then a related issue of how this is encoded in the system. Is usage required, or was it random, etc.
Such that maybe top-to-bottom it'd be sorted by breadth. How many people does each impact, and how deeply mandated or intertwined are the issues. And then different populations and problems, both to give contrast and to indicate (when complete) the demographics of the problems.
So I was wondering if you'd focused on black issues to make the list.
> Unfortunately, this is seen over and over again, more so than would happen by randomness. That's the point of the list.
So, statistically, how much subpopulation misrepresentation would you expect in various ways? And are you saying there's more in general, or more racial, than expected?
I interpreted the one about kidneys as an error because of changing demographics and how people had caught it, made note that this sort of thing happens, and started checking other research for similar sampling bias. It read as a success story, where the one about facial recognition was more actively Black Mirror.
> You seem to be agreeing that this was in fact a badly biased algorithm
No. But not a good one either. More that I'm wondering if it was an attempt to model, like for predictive policing, or a tool sold to simplify the sorting of people? Because models are good, even when they're wrong, but crappy predictive tools are worse than useless and - where I was ultimately going with this - perhaps fraudulent to sell.
The 'good' case would be if someone built a criminality model and the city was trying to work with police and communities to intervene in a predicted pattern. It's not unreasonable that the societal harm from a non-criminal becoming criminal could be worse than an existing criminal remaining that way. So modeling and discussing this isn't bad, even if the data has a racial component and some of the questions are of bias.
> Do you think racial cases of bias outweigh other demographic biases? And what's your opinion of the general level of intent (or lack of, to fix) relative to other issues?
When talking about systemic bias, I think it's less about intent, but that is one of the things that makes it so dangerous in terms of the level and longevity of impact; it's much more difficult to fix when there is no single bad actor at which to point the finger. This makes it easier for people to deny, either outright or at least in extent, and makes it harder for the systems to be fixed.
I don't know that I could place one type of demographic bias over another, but I think it's better for people to understand and acknowledge systemic bias of any form, if we have any hope to rectify any of them. I also think a lot of them are woefully entangled. For example, I've heard the argument that some bias isn't racial, it's economic, as in, a bias against poor people not people of color. But when there are already so many systemic economic biases against people of color, a lot of times it ends up being a distinction without a difference. I have plenty of examples of this, but don't want to dilute the impact of that last thought by going into detail.
> That just shifts the definitional question to the scale of the institution, structure, or process.
I agree. Systemic bias is everywhere, for almost all scales of systems.
> Is the definition, "a system is broad" or (I think you mean) "broad as in flexible", such that a system can be anything? Any micro or macro population?
Yes, you're right, I meant flexible. We can debate the severity of the impact of some form of systemic bias based on the size of the system (and the population or sub-population impacted), but to the individuals who are the victims of systemic bias, the result ends up being the same. At the end of the day, I don't think I'd fault any individual for taking up the cause of identifying and rectifying bias at any scale.
> I guess where I was going with this is that the list seemed to be about some things that are very broadly impactful but not specifically racist, like facial-recognition not recognizing anyone but white men very well, but then a related issue of how this is encoded in the system. Is usage required, or was it random, etc.
Yeah, I didn't really provide much context for why I specifically keep a list. It's not intended to single out racism, and in fact, it is intended to focus on unintentional codification of bias, because I think there are more software developers out there who are at risk for unintentionally codifying bias into their algorithms than there are those who may intentionally encode it.
The reason I keep the list is as a reminder of the level of impact you can unintentionally have in the systems you build without extremely deep thought and broad context. It's a reminder that what we do can have real impact on real people in ways we never imagined.
This is an especially recurrent theme for machine learning and AI systems, which are trained on limited datasets that often systemically under-represent the people to whom they will end up being applied.
> Such that maybe top-to-bottom it'd be sorted by breadth. How many people does each impact, and how deeply mandated or intertwined are the issues. And then different populations and problems, both to give contrast and to indicate (when complete) the demographics of the problems.
I think this is a good idea.
> So I was wondering if you'd focused on black issues to make the list.
I did not specifically focus on that. I started tracking the list I think 2 or 3 years ago, and I think issues of systemic racism have been especially visible in the mainstream in this time period for reasons I won't get into. So, it's largely a reflection of the systems I've been made aware of through research and publication.
> So, statistically, how much subpopulation misrepresentation would you expect in various ways? And are you saying there's more in general, or more racial, than expected?
I have no idea. I just know that I've not yet been able to find any real systemic bias, at least in the US, against rich, caucasian males. If the question is, which slice of demographics is the most biased against, I think there are some studies which have looked into that, but I couldn't say myself.
> I interpreted the one about kidneys as an error because of changing demographics and how people had caught it, made note that this sort of thing happens, and started checking other research for similar sampling bias. It read as a success story, where the one about facial recognition was more actively Black Mirror.
There is certainly a bright side in these things being identified as problems which need to be solved. However, even with the kidney example, I don't know that it's a success as much as further evidence of how difficult these things are to rectify at a systemic level, since the vast majority of doctors using the still-biased eGFR formula have no idea that it has this problem.
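As a rough sketch of the mechanism being described here, not the actual clinical equation: the base score, multiplier, and referral threshold below are hypothetical stand-ins, but they show how a race "correction" greater than 1 reports better kidney function for Black patients with identical lab values, and so delays crossing an intervention threshold.

```python
# Illustrative sketch only: hypothetical numbers, not the real CKD-EPI/MDRD
# equations. A race "correction" multiplier > 1 inflates the reported eGFR,
# so an identical measured value crosses the referral threshold later.

REFERRAL_THRESHOLD = 20.0  # hypothetical eGFR cutoff for transplant referral

def reported_egfr(base_score: float, black: bool, race_multiplier: float = 1.2) -> float:
    """base_score stands in for the creatinine/age/sex terms of a real equation."""
    return base_score * (race_multiplier if black else 1.0)

def needs_referral(egfr: float) -> bool:
    return egfr < REFERRAL_THRESHOLD

base = 18.0  # two patients with identical measured kidney function
print(needs_referral(reported_egfr(base, black=False)))  # True: referred
print(needs_referral(reported_egfr(base, black=True)))   # False: not referred yet
```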
> More that I'm wondering if it was an attempt to model, like for predictive policing, or a tool sold to simplify the sorting of people? Because models are good, even when they're wrong, but crappy predictive tools are worse than useless and - where I was ultimately going with this - perhaps fraudulent to sell.
I mostly agree with this sentiment. The biggest issue I see is that most of the models weren't built with the intention of being biased or badly predictive, and those selling them often don't know until after they've already been sold and broadly adopted. Fraud generally requires mal-intent, which most of these models and products didn't have, at least if we're being generous and optimistic. Negligence is probably closer to it, but I think difficult to prove in such a new frontier.
I just finished reading a book called Weapons of Math Destruction, which talks about the damage models can do. The author posits a set of tests to tell whether the model is beneficial or destructive. One of the hallmarks, the author argues, of a good predictive modelling system is one which includes feedback into the system as a result of its predictions. Many examples, especially those which alter the outcome by making the prediction in the first place (e.g. criminal sentencing based on predicted criminal activity, excluding candidates from the hiring process, firing teachers based on opaque models), have no feedback mechanism for directly determining how successful their predictions were.
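For concreteness, here is a minimal sketch of what such a feedback mechanism could look like; the structure and names are hypothetical, not something from the book. Each prediction is logged, outcomes are attached if and when they become known, and calibration is re-checked against that log. A system with no way to ever fill in the observed outcome is exactly the failure mode being described.

```python
# Minimal sketch of a prediction log with a feedback loop (hypothetical design).
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class PredictionRecord:
    case_id: str
    predicted_risk: float                     # model's predicted probability
    observed_outcome: Optional[bool] = None   # filled in later, if ever

@dataclass
class FeedbackLog:
    records: List[PredictionRecord] = field(default_factory=list)

    def log_prediction(self, case_id: str, predicted_risk: float) -> None:
        self.records.append(PredictionRecord(case_id, predicted_risk))

    def record_outcome(self, case_id: str, outcome: bool) -> None:
        for r in self.records:
            if r.case_id == case_id:
                r.observed_outcome = outcome

    def calibration_gap(self) -> Optional[float]:
        """Mean predicted risk minus observed rate, over cases with known outcomes."""
        resolved = [r for r in self.records if r.observed_outcome is not None]
        if not resolved:
            return None  # no feedback at all: the model's accuracy is unknowable
        mean_predicted = sum(r.predicted_risk for r in resolved) / len(resolved)
        observed_rate = sum(r.observed_outcome for r in resolved) / len(resolved)
        return mean_predicted - observed_rate
```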
> But yeah, to recommend sentences, total crap.
One of the other primary tests is dependent on the context of how the model is used, so it's not really reasonable to try to determine the benefits of a model in isolation with no consideration for the appropriateness of its implementation and utility. In this case, it's hard to defend a model with bad predictive data based on the presumption that it could have been more beneficial with another use, when the reality is that it was only sold with one use and that one use was harmful. Though, even this ignores the fact that it had no built-in feedback mechanism to determine how good a job it was even doing with its predictions in the first place.
> When talking about systemic bias, I think it's less about intent, but that is one of the things that makes it so dangerous in terms of the level and longevity of impact; it's much more difficult to fix when there is no single bad actor at which to point the finger.
Thanks for approaching it this way. I think we often (societally, looking for blood) want to find a bad actor and are intentionally blind to bad things that just sort of happen at the edges, but need oversight to find and fix.
> For example, I've heard the argument that some bias isn't racial, it's economic, as in, a bias against poor people not people of color. But when there are already so many systemic economic biases against people of color, a lot of times it ends up being a distinction without a difference.
It doesn't help the sufferer, at the time, to be told that someone else would be suffering equally. But it does help fix the problem, I think, to realize that it's circular via poverty or whatever, not simply racial, because in some areas and at some times it has simply been racial, and that hugely changes how you fix it. If it's overt you can't just offer change, you have to prevent further damage during the repair process.
> The reason I keep the list is as a reminder of the level of impact you can unintentionally have in the systems you build without extremely deep thought and broad context.
Have you ever read comp.risks? I really like it as a source of Therac-25 type stories (across all fields) that engineering types should think about when building things.
> I just know that I've not yet been able to find any real systemic bias, at least in the US, against rich, caucasian males.
Is Twitter not a system? :D
It gets a bit fuzzy with bias against the majority. Every model that isn't right disadvantages everyone and the majority is part of everyone. So bad drug laws impact white people too. But because actual race is only encoded in one direction (affirmative action, "positive" directions) then anything that impacts white people also impacts everyone, whereas there are often specific laws (such as for constructing "The Projects" in the first place) that do directly exclude whites from the harm they caused. So subgroups definitely experience more exclusive problems, even aside from the amount of problem.
> how difficult these things are to rectify at a systemic level, since the vast majority of doctors using the still-biased eGFR formula have no idea that it has this problem.
I think it's just that the story is best told from that moment. That's the OMG. From there it improves, and I'm sure they sent a copy of the report worldwide asap. But no solution is ever 100% so there's no wrap-up party and it will never look done and solved. (Even one doctor who didn't check their email...)
> Fraud generally requires mal-intent, which most of these models and products didn't have, at least if we're being generous and optimistic.
Probably, but if they're saying "our product does X" maybe there's something to grab onto and investigate. Maybe they did misrepresent it.
> I just finished reading a book called Weapons of Math Destruction, which talks about the damage models can do. The author posits a set of tests to tell whether the model is beneficial or destructive. One of the hallmarks, the author argues, of a good predictive modelling system is one which includes feedback into the system as a result of its predictions.
Good point, and thanks for the book recommendation.
A lot of things aren't amenable to that though, because the hypothetical city/community meeting can't take years to watch the outcomes and continue to train a model; they've got to work from historical data up to that point and make policy decisions in the meeting.
> One of the other primary tests is dependent on the context of how the model is used, so it's not really reasonable to try to determine the benefits of a model in isolation with no consideration for the appropriateness of its implementation and utility.
I've been seeing that as the predictive vs directive use. City planning instead of sentencing guidelines.
> When AI criminal risk prediction software used by judges in deciding the severity of punishment for those convicted predicts a higher chance of future offence for a young, Black first time offender than for an older white repeat felon.
Younger people are more likely to re-offend than older people. Remove race from the situation entirely, and this is still the expected result. There's zero reason to think this algorithm was biased with respect to race.
If you read deeper than a one-line description you'll see:
1. Even after accounting for criminal history, recidivism, age and gender, black defendants were still scored as much more likely to re-offend.
2. It incorrectly predicted that black defendants would re-offend much more frequently than white defendants.
3. It incorrectly predicted that white defendants would not re-offend much more frequently than black defendants.
Considering those three things and that they have refused to give any details about how the algorithm works, I see zero reason to give them the benefit of the doubt. Fire them until they can demonstrate that the algorithm is not in fact biased like it appears to be.
The score also takes into account answers to questions like whether the defendant's parents were separated or whether their parents were ever arrested. Those things are completely out of the defendant's control and are highly correlated with race. Even including those things in their score is damning.
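To see how that plays out mechanically, here is a toy simulation with entirely synthetic data and hypothetical weights (not the actual COMPAS questionnaire or scoring): race is never an input to the score, but because the proxy features are more prevalent in one group by construction, the average scores still diverge.

```python
# Toy simulation: race never appears in the score, but race-correlated
# proxy features (synthetic prevalences, hypothetical weights) still
# produce different average scores per group.
import random

random.seed(0)

def sample_person(group: str) -> dict:
    # Hypothetical prevalences, standing in for correlation with race.
    p_separated = 0.6 if group == "B" else 0.3
    p_parent_arrested = 0.4 if group == "B" else 0.15
    return {
        "parents_separated": random.random() < p_separated,
        "parent_arrested": random.random() < p_parent_arrested,
    }

def risk_score(person: dict) -> float:
    # Hypothetical weights; note that group/race is not used here at all.
    return 1.0 * person["parents_separated"] + 1.5 * person["parent_arrested"]

for group in ("A", "B"):
    people = [sample_person(group) for _ in range(10_000)]
    print(group, round(sum(risk_score(p) for p in people) / len(people), 2))
    # roughly 0.53 for group A vs 1.20 for group B
```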
Phrasing it the way you do is misleading. Defendants given the same risk label, high or low, were just as likely to re-offend whether they were white or black. To put this in simpler numbers:
* Out of 100 white people, 5 were labeled high risk.
* Out of 100 black people, 20 were labeled high risk.
* Out of the 5 white people labeled high risk, 4 re-offended.
* Out of 20 black people labeled high risk, 16 re-offended.
In either case, someone labeled high risk had the same likelihood to re-offend: 80%.
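A quick sketch of that arithmetic, using the hypothetical counts above: the chance that a "high risk" label is correct (its positive predictive value) comes out the same for both groups, even though four times as many Black defendants receive the label.

```python
# Positive predictive value of the "high risk" label, per group,
# using the hypothetical counts from the bullets above.
def ppv(labeled_high_risk: int, reoffended: int) -> float:
    return reoffended / labeled_high_risk

print(ppv(labeled_high_risk=5, reoffended=4))    # white group: 0.8
print(ppv(labeled_high_risk=20, reoffended=16))  # black group: 0.8
```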
"It incorrectly predicted that black defendants would re-offend more frequently than white defendants." This is technically correct, but not because the algorithm was bad a predicting rates if re-offending. It's because there was higher rates of re-offending. The likelihood of re-offending among someone labeled high risk is the same.
This kind of objection seems like a blanket rejection of any system that produces an inequitable outcome. But the reality is that rates of re-offending are not equal. Even a perfectly accurate prediction of re-offense is going to predict higher rates of re-offending among men. Because men re-offend at higher rates. This isn't sexism.
That doesn't mean we shouldn't recognize the disparate impact of incarceration on underprivileged people. But simply concluding bias due to inequitable outcomes is simplistic.
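To illustrate that base-rate point with made-up numbers: a predictor that is equally accurate for both groups will still label the higher-base-rate group as high risk more often.

```python
# Made-up base rates and accuracy: identical accuracy for both groups,
# yet the group with the higher underlying re-offense rate gets the
# "high risk" label more often.
import random

random.seed(1)
ACCURACY = 0.9                         # hypothetical, same for both groups
base_rates = {"group_men": 0.45,       # hypothetical re-offense rates
              "group_women": 0.25}

for group, rate in base_rates.items():
    flagged = 0
    trials = 100_000
    for _ in range(trials):
        will_reoffend = random.random() < rate
        correct = random.random() < ACCURACY
        predicted_high_risk = will_reoffend if correct else not will_reoffend
        flagged += predicted_high_risk
    print(group, round(flagged / trials, 3))  # ~0.46 vs ~0.30
```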
Your analysis above is a prime example of how dangerous statistics can be when not properly considered. Even if we took your numbers above as truth, it actually does not indicate that the system isn't biased. In order to determine that you'd need to know what percentage of each group _not_ labeled as high risk never re-offended. You're only analyzing the positive predictive value of the high-risk label without taking into account the false positive and false negative rates. You'd need to know how many people in each group who weren't labeled high risk went on to re-offend.
Thankfully, the article I linked to looked at both:
* Labeled Higher Risk, But Didn't Re-Offend: White 23.5%, African American 44.9%
* Labeled Lower Risk, Yet Did Re-Offend: White 47.7%, African American 28.0%
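Those two rows are, in effect, per-group error rates. Here is a sketch of how they would be computed from a per-group confusion matrix; the counts are purely hypothetical (not ProPublica's raw data), chosen so that both groups keep the same 80% precision from the earlier example while their error rates diverge.

```python
# False positive rate and false negative rate per group, from hypothetical
# confusion-matrix counts. Both groups below have identical precision
# (tp / (tp + fp) = 0.8), but very different error rates.
def error_rates(tp: int, fp: int, tn: int, fn: int) -> tuple:
    fpr = fp / (fp + tn)   # labeled higher risk, but didn't re-offend
    fnr = fn / (fn + tp)   # labeled lower risk, yet did re-offend
    return fpr, fnr

print(error_rates(tp=40,  fp=10, tn=130, fn=20))  # group 1: fpr ~0.07, fnr ~0.33
print(error_rates(tp=160, fp=40, tn=60,  fn=40))  # group 2: fpr  0.40, fnr  0.20
```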
When you are doing crystal ball voodoo based on stuff that has no connection to the defendant's choices and no direct connection to criminality like whether or not their parents were separated and whether their parents were ever arrested, then you're damn right there should be a blanket rejection of any inequitable outcome.
I'm not even convinced that you should be allowed to make a decision based on stuff like that even if it is somehow equitable.
> no direct connection to criminality like whether or not their parents were separated and whether their parents were ever arrested
Again, not right at all. These things heavily correlate with crime. A broken home is the primary indicator of someone's future outcomes; even an otherwise 'better' home, if broken, still predicts poorly.
I'm male, and not a rapist, but having a penis heavily correlates with rape. I suggest you do not pick me, or any male, to watch your children. Sure it's rude to the innocent, but oh well, a little rudeness vs potential harm.
> I'm not even convinced that you should be allowed to make a decision based on stuff like that even if it is somehow equitable.
Then nobody will ever follow the law. If it's illegal for me to reject a potentially bad babysitter for something I know about them that could risk my child's safety I'll happily lie and say I didn't like their haircut.
You'd get further if you tried to identify these people and treat them better - grants to move out of bad neighborhoods, to get educated, to get pardons for unrelated crimes, to get counseling for abuse, etc - than with this "wrong unless it's 100% identical, equity-of-outcome" thing.
We're talking about an algorithm being used as part of the court system to determine whether or not someone rots in jail, not about how you choose your babysitter.
It needs to be held to a higher standard and punish people based on their actions, not based on what their parents did.
No, you're talking about making it illegal to use the things you know about someone or something to make a fully qualified decision. About a babysitter, a potential criminal, an immigrant, whatever.
> It needs to be held to a higher standard and punish people based on their actions, not based on what their parents did.
And that's not what actually happened. A guy got arrested on bad data and then released, we know some kids might be at risk because they're from broken homes, etc. Nobody was jailed for their parents' actions, and nobody even proposed it. Some trends are worrying, but you act like they're the intent of the entire system, not just some scammy products that a company is pushing.
Voting machines, for instance, should all be burned. But I understand that most people don't know why and support them for convenience. I think they're anti-democratic but I don't think people are evil for using them. You should try to get a similar perspective.
I am not talking about that. You are misinterpreting my statements. "stuff that has no connection to the defendant's choices". Defendants only exist within the court system.
> Nobody was jailed for their parents' actions
Not jailed, but they are being kept in jail because of them. That is the same thing.
I am not claiming that anyone is evil. Just that this is unjust and that they need to stop doing this. You are putting words in my mouth.
> Fire them until they can demonstrate that the algorithm is not in fact biased like it appears to be.
What do the non-black box findings from the area show? If blacks are a poor demographic in the area it might be true. (Crime tracks poverty, not race.)
But yeah, certainly don't pay anyone for, or use, a black-box algorithm.
> The score also takes into account answers to questions like whether the defendant's parents were separated or whether their parents were ever arrested. Those things are completely out of the defendant's control and are highly correlated with race. Even including those things in their score is damning.
Nope. That's perfectly fair to look at. For instance, a broken family is another predictor. It's not fair, but being a victim increases your chance to offend. Similarly, the number one predictor of child sex crimes is to have suffered them yourself. If you have a child and are picking a babysitter, skip the one who was molested.
fwiw, those things don't correlate with race, they correlate with poverty which correlates with race. Broken families are more likely to be poor and abusive.
* A hospital AI algorithm discriminating against Black people when providing additional healthcare outreach by amplifying racism already in the system. https://www.nature.com/articles/d41586-019-03228-6
* Misdiagnosing people of African descent with genomic variants misclassified as pathogenic due to most of our reference data coming from European/white males. https://www.nejm.org/doi/full/10.1056/NEJMsa1507092
* When ML used to diagnose melanoma risks exacerbating healthcare disparities for darker-skinned people. https://jamanetwork.com/journals/jamadermatology/article-abs...
* When Google's hate speech detecting AI inadvertently censored anyone who used vernacular referred to in this article as being "African American English". https://fortune.com/2019/08/16/google-jigsaw-perspective-rac...
* When Amazon's AI recruiting tool inadvertently filtered out resumes from women. https://www.reuters.com/article/us-amazon-com-jobs-automatio...
* When AI criminal risk prediction software used by judges in deciding the severity of punishment for those convicted predicts a higher chance of future offence for a young, Black first time offender than for an older white repeat felon. https://www.propublica.org/article/machine-bias-risk-assessm...
And here's some good news though:
* When police wrongfully arrested a person based on faulty facial recognition match using grainy security camera footage, without any due diligence, asking for an alibi, or any other investigation. https://www.npr.org/2020/06/24/882683463/the-computer-got-it...
* When the above is compounded for people of color according to studies which show that facial recognition systems misidentify dark-skinned women 40x more often than light-skinned men. http://news.mit.edu/2018/study-finds-gender-skin-type-bias-a.... Another study showed false positives can be 10x to 100x more frequent for Asian and African American faces compared to Caucasian faces. https://www.nist.gov/news-events/news/2019/12/nist-study-eva...
* When an algorithm blocked kidney transplants for Black patients. https://www.wired.com/story/how-algorithm-blocked-kidney-tra...
* When clinical algorithms include “corrections” for race which directly raise the bar for the need for interventions in people of color, such that they then receive less clinical screening, less surveillance, fewer diagnoses, and less treatment for everything, including cancer, organ transplants, birth interventions, and urinary, blood, bone, and heart disease. https://www.nejm.org/doi/10.1056/NEJMms2004740