> But ultimately having an explicit model allows us to actually interrogate and correct those biases in a permanent way.
Who is "us"? And does it?
Does the firm writing the software actually allow this? Plenty of software sold to the government comes with contractual stipulations that it may not be reverse-engineered, inspected, or otherwise second-guessed.
Besides, principles shminciples. The way to judge these systems is by the actual impact they have. Will this result in a more just system? If not, we can put the principles right into /dev/null.
The article does seem to address this:
> Among the debates was how to balance accuracy and fairness, said Ojmarrh Mitchell, a professor at Arizona State University who served on the panel. An accurate algorithm would do a good job of predicting whether defendants showed up, and thus whether to recommend release, Mr. Mitchell said, and a fair algorithm wouldn’t result in more release recommendations for white defendants than for others.
> More than a year of tinkering ensued. To decrease the differences in outcomes for different races and ethnicities, the researchers excluded data on low-level marijuana offenses and “theft of service,” mainly subway turnstile jumping. Removing fare beating and marijuana arrests lowered the racial disparity in the tool by 0.4%, essentially making it slightly fairer but a little less accurate.
> Data released this month show the new tool’s recommendations didn’t have such racial disparities. From Nov. 12 through March 17, the algorithm recommended releasing without conditions 83.9% of Blacks, 83.5% of whites and 85.8% of Hispanics. Defendants with higher scores returned to court more often than those with lower scores, showing the algorithm seemingly made accurate predictions.
...Albeit with mixed results. You can't avoid bias. You can just choose whether you want to bias towards justice, or rigidity.
> Judges didn’t like it. “They said, ‘I was loving this, all this data-driven, evidence-based demonstration of what’s predictive. Now you’re putting a policy thumb on this,’ ” said Susan Sommer, general counsel at the Mayor’s Office of Criminal Justice.
> The researchers added those charges back in.
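To make the trade-off the panel was arguing about a bit more concrete, here's a minimal sketch on entirely synthetic data (the feature names, effect sizes, and 0.5 threshold are my assumptions, not the actual NYC tool): it fits a toy release-score model with and without the unevenly-policed low-level charges, then reports accuracy at predicting return-to-court next to the gap in release-recommendation rates between two groups.

```python
# Hypothetical sketch of the accuracy-vs-disparity measurement the article
# describes. Everything here is synthetic and illustrative only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 20_000

# Made-up defendants: a group label and a few charge-history features.
group = rng.choice(["A", "B"], size=n)               # stand-in for race/ethnicity
prior_felonies = rng.poisson(0.5, size=n)
prior_fta = rng.poisson(0.3, size=n)                 # prior failures to appear
# Low-level charges policed unevenly across groups (the article's marijuana /
# turnstile-jumping example): more common in group A by construction.
low_level = rng.poisson(np.where(group == "A", 1.0, 0.3))

# Outcome: did the defendant return to court? Mostly driven by priors, with a
# small contribution from low-level charges so excluding them costs a little accuracy.
p_return = 1 / (1 + np.exp(-(1.5 - 0.8 * prior_fta - 0.3 * prior_felonies - 0.2 * low_level)))
returned = rng.random(n) < p_return

def evaluate(features):
    """Fit a toy release-score model and return (accuracy, release-rate gap)."""
    X = np.column_stack(features)
    score = LogisticRegression().fit(X, returned).predict_proba(X)[:, 1]
    recommend = score > 0.5                          # recommend release if likely to return
    acc = (recommend == returned).mean()
    gap = abs(recommend[group == "A"].mean() - recommend[group == "B"].mean())
    return acc, gap

for label, feats in [("all features", [prior_felonies, prior_fta, low_level]),
                     ("low-level charges excluded", [prior_felonies, prior_fta])]:
    acc, gap = evaluate(feats)
    print(f"{label:>28}: accuracy={acc:.3f}  release-rate gap={gap:.3f}")
```

The only point of the sketch is that both numbers are measurable on the same explicit model, which is what made the panel's year of tinkering possible at all.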
> Does the firm writing the software actually allow this? Plenty of software sold to the government comes with contractual stipulations that it may not be reverse-engineered, inspected, or otherwise second-guessed.
So don't accept contracts with those stipulations.
> ...Albeit with mixed results. You can't avoid bias. You can just choose whether you want to bias towards justice, or rigidity.
That's my point though. Bias we can actually inspect and understand and iterate on is better than bias that we have no insight into at all.
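With an explicit model, "inspect and iterate" is literal. A hypothetical continuation of the sketch above (again, synthetic data and invented feature names, not the real tool):

```python
# The "bias" of an explicit model is a list of numbers you can read and contest.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic stand-in data, as in the earlier sketch.
rng = np.random.default_rng(0)
n = 5_000
prior_fta = rng.poisson(0.3, size=n)            # prior failures to appear
low_level = rng.poisson(0.7, size=n)            # low-level charge count (made up)
returned = rng.random(n) < 1 / (1 + np.exp(-(1.5 - 0.8 * prior_fta - 0.2 * low_level)))

model = LogisticRegression().fit(np.column_stack([prior_fta, low_level]), returned)
for name, coef in zip(["prior_fta", "low_level_charges"], model.coef_[0]):
    print(f"{name:>18}: {coef:+.3f}")           # each input's pull on the release score

# If a weight encodes a policy you reject (say, penalizing turnstile jumping),
# drop that column and refit. That refit is the "iterate" step; there is no
# equivalent audit for the unstated heuristics inside a judge's head.
```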
Who is "us"? And does it?
Does the firm writing the software actually allow this? Plenty of software sold to the government comes with contractual stipulations that it may not be reverse-engineered, inspected, or otherwise second-guessed.
Besides, principles shminciples. The way to judge these systems is by the actual impact they have. Will this result in a more just system? If not, we can put the principles right into /dev/null.
The article does seem to address this:
> Among the debates was how to balance accuracy and fairness, said Ojmarrh Mitchell, a professor at Arizona State University who served on the panel. An accurate algorithm would do a good job of predicting whether defendants showed up, and thus whether to recommend release, Mr. Mitchell said, and a fair algorithm wouldn’t result in more release recommendations for white defendants than for others.
> More than a year of tinkering ensued. To decrease the differences in outcomes for different races and ethnicities, the researchers excluded data on low-level marijuana offenses and “theft of service,” mainly subway turnstile jumping. Removing fare beating and marijuana arrests lowered the racial disparity in the tool by 0.4%, essentially making it slightly fairer but a little less accurate.
> Data released this month show the new tool’s recommendations didn’t have such racial disparities. From Nov. 12 through March 17, the algorithm recommended releasing without conditions 83.9% of Blacks, 83.5% of whites and 85.8% of Hispanics. Defendants with higher scores returned to court more often than those with lower scores, showing the algorithm seemingly made accurate predictions.
...Albeit with mixed results. You can't avoid bias. You can just choose whether you want to bias towards justice, or rigidity.
> Judges didn’t like it. “They said, ‘I was loving this, all this data-driven, evidence-based demonstration of what’s predictive. Now you’re putting a policy thumb on this,’ ” said Susan Sommer, general counsel at the Mayor’s Office of Criminal Justice.
> The researchers added those charges back in.