
I wholeheartedly agree with the linked page (we should encourage this kind of scrutiny of algorithmic judgments). Where we seem to disagree is on whether an algorithmic judgment is a step forward compared to offline, human judgment. An algorithm has fixed code, and fixed code is traceable and auditable. Yes, it might be hard. Yes, the legislation may not be there yet, but it's possible. Compare this to human judgment: 10 years ago, if HR threw out your resume, you had no recourse. Today, if an algorithm automatically rejects your resume, we at least have a path to potential recourse in the future, since you can analyze an algorithm's decision making. A human's judgment is, if anything, more opaque. Essentially, when you say:

> people can recognize normal discrimination

I don't see how you can reliably recognize discrimination in a human's decisions by any method that can't also be applied to the decisions of a computer program.
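
To make the auditability point concrete, here is a minimal sketch in Python (the screen() function, its rule, and the field names are all hypothetical): because the code is fixed and deterministic, you can re-run it on counterfactual inputs and attribute any change in outcome to the one field you varied.

    def screen(resume):
        # Hypothetical fixed screening rule: the point is that it is
        # deterministic and inspectable, not what the rule says.
        return resume["years_experience"] >= 3 and "python" in resume["skills"]

    def counterfactual_audit(resume, field, values):
        # Re-run the same fixed code, varying only `field`. Any change
        # in the decision is attributable to that field alone.
        return {v: screen({**resume, field: v}) for v in values}

    base = {"years_experience": 5, "skills": {"python"}, "name": "A. Smith"}
    # If the decision flips when only the name changes, the code is
    # discriminating on the name, and you can prove it by replay.
    print(counterfactual_audit(base, "name", ["A. Smith", "B. Jones"]))

There's no equivalent replay trick for a human reviewer, which is exactly the asymmetry I mean.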




Broadly, I agree with you. For discussion's sake, I think the counterargument would be to focus on the scale and culture points. Specifically,

1. The scale means we're applying many more judgments in many more places that would previously have slid under the human-judgment radar. This is arguably dangerous in itself: if we hold such judgments to be generally unreliable, we're adding many more points of unreliability.

2. The culture could conceivably develop in the direction of blindly trusting automated judgments, such that the type of scrutiny you encourage will dwindle as a practice. This would put us in a much more vulnerable state with respect to bad judgments.

That said, I still side with you. I think the ultra-dystopian scenario OP outlines as a possibility is unlikely, precisely because power and control are decentralized and very difficult to wield intentionally. And there's massive inherent conflict between the various actors with the greatest means to exert that control.

However, I also don't think it's inconceivable that we end up in the ultra-dystopian scenario. Technological progress is generally good and generally can't be stopped, but it also continually introduces undesirable possibilities.


To be honest, I don't see how an increase in algorithmic judgments will necessarily lead to a culture that trusts them more blindly. I think it's up in the air, and there are forces working both ways: the more pervasive they are, the more engineers and policymakers have to think about them, though perhaps end-users will start noticing them less (?). In any case, those latent forces seem to be to blame, not "big data" in and of itself.

The scale point makes some sense, I'll admit. We're increasing the sheer volume of potentially discriminatory judgments being applied in the first place. Perhaps it is indeed a tradeoff between the expected boons of algorithmic decision making and the increased exposure to discrimination. That said, the scale argument wouldn't hold where algorithms merely replace opaque human judges in decisions that are already being made.
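
One thing scale does buy you, though, is data to scrutinize in aggregate. A minimal sketch (the log format is an assumption, and the ~0.8 threshold is the conventional "four-fifths" rule of thumb, a red flag rather than proof): compare selection rates across groups over a batch of logged decisions, which is straightforward for an algorithm's output and nearly impossible for scattered human judgments.

    from collections import defaultdict

    def selection_rates(decisions):
        # decisions: list of (group_label, accepted_bool) pairs,
        # e.g. the logged output of an automated screen.
        totals, accepted = defaultdict(int), defaultdict(int)
        for group, ok in decisions:
            totals[group] += 1
            accepted[group] += ok
        return {g: accepted[g] / totals[g] for g in totals}

    def impact_ratio(decisions):
        # Ratio of the lowest to the highest selection rate; values
        # below ~0.8 are a conventional red flag, not proof.
        rates = selection_rates(decisions).values()
        return min(rates) / max(rates)

    log = [("A", True), ("A", True), ("A", False),
           ("B", True), ("B", False), ("B", False)]
    print(selection_rates(log), impact_ratio(log))

You simply can't run that check against N different HR people making undocumented calls.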



