Sure. It just used to be harder to build in, and to do so precisely. In your example, dropping calls from an area code, that's a relatively coarse filter.
There's a whole strand of academic research [1] arguing about whether people can build prejudice into technological artifacts. One disputed example was an underpass built with a low clearance, supposedly so that white middle-class people in cars could pass under it but poor black people riding buses could not [1, p. 123].
These days that dispute is ridiculous - obviously prejudice and bias can be built into technological artifacts. It's one "if statement" away.
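To make that concrete, here's a minimal sketch of what "one if statement" could look like, using the area-code example from above. The area code, queue names, and function are all invented for illustration:

    # Hypothetical sketch: a single "if" is enough to encode bias into a system.
    # The area code and queue names below are made up for illustration.
    def route_call(caller_number: str) -> str:
        """caller_number: 10-digit string, e.g. '3135551234' (made-up format)."""
        area_code = caller_number[:3]
        if area_code == "313":        # an area code picked for illustration
            return "overflow_queue"   # silently degraded service
        return "standard_queue"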
>These days that dispute is ridiculous - obviously prejudice and bias can be built into technological artifacts. It's one "if statement" away.
And with machine learning we're building our biases into the algorithms explicitly; see all those "resume parsers are racist" articles.
This is clearly an issue that needs to be dealt with, but it's not a problem we can engineer our way out of. We need to confront the biases we have and try to build ways to expose the biases we don't even realize we have.
This is obviously quite hard and if I had a solution to it I'd probably be implementing it instead of waxing poetic on HN.
> And with machine learning we're building our biases into the algorithms explicitly; see all those "resume parsers are racist" articles.
It's even worse than that, because it's so hard to tell if this is even happening.
Suppose guidance counselors in predominantly black schools are telling kids to focus on athletics and the ones in predominantly white schools tell them to focus on intellectual extracurriculars. Then a resume parser sorts people with athletics listed into physical jobs and people with intellectual pursuits listed into intellectual jobs, which of course results in the black applicants getting callbacks for the lower paying jobs.
This is pretty clearly the guidance counselors causing the disparity rather than the algorithm, but we only know that because it was stipulated in the hypothetical. In real life you may not have enough data to discern the underlying cause. In other words, you don't know what the baseline racial disparity would be from all of the non-racial factors that correlate with race, so you don't know if the problem is in the algorithm or was caused somewhere upstream, with the algorithm only producing accurate outputs for its inputs.
In theory you can evaluate this by checking up on how the candidates hired do compared to how the algorithm predicted they would do, but that's a noisy signal (what does "doing well" mean?), you might not have a large enough sample size to get meaningful results, and it has a lag time of potentially several years, by which point you may already be using a different algorithm. It's a hard problem.
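For what it's worth, the check described above is essentially a per-group calibration audit: compare predicted performance against observed performance, split by group. A rough sketch, with made-up field names and numbers (and all the caveats above about noisy outcome measures, sample size, and lag still apply):

    # Hypothetical audit sketch: mean gap between observed and predicted
    # performance, per group. Field names and data are invented.
    from collections import defaultdict
    from statistics import mean

    def calibration_by_group(records):
        """records: dicts with 'group', 'predicted', 'actual' (0..1 scores)."""
        gaps = defaultdict(list)
        for r in records:
            gaps[r["group"]].append(r["actual"] - r["predicted"])
        # A consistently negative mean gap for one group suggests the model
        # over-predicts for that group; consistently positive, under-predicts.
        return {group: round(mean(g), 3) for group, g in gaps.items()}

    hires = [
        {"group": "A", "predicted": 0.70, "actual": 0.60},
        {"group": "A", "predicted": 0.65, "actual": 0.62},
        {"group": "B", "predicted": 0.70, "actual": 0.80},
    ]
    print(calibration_by_group(hires))   # e.g. {'A': -0.065, 'B': 0.1}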
You bring up a really good point. There's a difference between correlation and causation, and root cause analysis is really hard; rarely do the results align with simple politics, because humans are complex beings in complex relationships.
[1] "Do Artifacts have politics" https://www.cc.gatech.edu/~beki/cs4001/Winner.pdf