
What's an example of a product that would inadvertently discriminate against trans people?

What does it mean to inadvertently discriminate?

As an example, a lot of websites drop support for IE. If the makeup of IE users affected by it over-indexed on any particular type of race/gender/class/sexual orientation, would you classify that as inadvertent discrimination?

If a first version of a new website/product wasn't built to be perfectly compatible with accessibility standards, are they inadvertently discriminating against those with disabilities?

Are software bugs that may not be equally felt by all users an example of inadvertent discrimination?

Is the only way to not inadvertently discriminate to ensure products are built to be optimized for every single human and use case? Every edge case needs to be solved for before launch?




> As an example, a lot of websites drop support for IE. If the makeup of IE users affected by it over-indexed on any particular type of race/gender/class/sexual orientation, would you classify that as inadvertent discrimination?

It's likely that, until recently at least, IE users were more likely to use JAWS and other accessibility tools than other groups, especially if they couldn't afford upgrades to newer releases of JAWS or were stuck on enterprise computers.

> If a first version of a new website/product wasn't built to be perfectly compatible with accessibility standards, are they inadvertently discriminating against those with disabilities?

Yes, if the site is inaccessible. That said, it's harder to claim that a game designed for a touch screen is inaccessible and discriminates against users who rely on keyboards, perhaps because of the hardware they use. Some games, like Fruit Ninja, simply work better on a touch screen and might be very hard to replicate without one.

> Software bugs that may not be equally felt

Maybe. If the bug was a recent introduction of a non-binary gender field and only those with non-binary genders were affected, it could be considered inadvertent discrimination. It could also just be considered a bug. Its severity depends on how long it remains in the system: if it's pushing a year, that looks more like discrimination, on top of poor QA practices and a likely failure to listen to user feedback.
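A minimal sketch of what that kind of bug might look like in practice (the field names and values here are hypothetical, not taken from any real product): the UI gains a non-binary option, but a stale server-side allow-list was never updated, so only users who pick the new value ever see an error.

    # Hypothetical illustration: a profile form added a "non-binary" option
    # in the UI, but the server-side validator kept a stale allow-list, so
    # only users selecting the new value fail validation.
    ALLOWED_GENDERS = {"male", "female"}  # stale list, missing "non-binary"

    def validate_profile(profile: dict) -> list[str]:
        """Return validation errors for a profile submission."""
        errors = []
        gender = profile.get("gender")
        if gender is not None and gender not in ALLOWED_GENDERS:
            # Every other user sails through; only those who picked the
            # newly added option hit this error path.
            errors.append(f"invalid gender value: {gender!r}")
        return errors

    print(validate_profile({"gender": "female"}))      # []
    print(validate_profile({"gender": "non-binary"}))  # ['invalid gender value: ...']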

> Is the only way to not inadvertently discriminate to ensure products are built to be optimized for every single human and use case? Every edge case needs to be solved for before launch?

I think we need to keep in mind that just because something could be discrimination doesn't mean we can't forgive and move on. Mistakes are a fact of life, and nothing's perfect. That doesn't mean we shouldn't aim to ship fewer bugs; it means that, when we can, we fix the bugs, we listen to users, and so on. It's very possible that one user's perfect app will in fact be completely wrong for another user, so pleasing everyone is impossible. It's why ergonomics is so hard: humans aren't all the same height, and so on.

Sometimes you need settings for users to adjust software to suit their preferences, such as font size. And sometimes you make font size part of the game, and it's not adjustable. If allowable under law, being inaccessible can be a choice, or it can be inadvertent. It's true that some folks can get really worked up about a topic they care deeply about. It's also true that it's just software, and new software will come along eventually with different features that may please some audiences more. Or less. It all depends. :)


[Edit: Didn't see that you started answering my last question before I wrote this, about the operational implications of trying to eliminate everything that could be considered inadvertent discrimination. Going to just leave this comment as is, as I do think the utilitarian/deontological thinking about product development processes is an interesting quandary in today's climate :) ]

Given this thinking about inadvertent discrimination, and that the word 'discrimination' is very much coded as a 'bad' thing, what should be done about it?

Should any website/product that launches without being perfectly operable for every single user be sanctioned in some way? If not, aren't we supporting inadvertent discrimination? Isn't any allowance of inadvertent discrimination a bad thing?

Should there be some sort of utilitarian calculation involved? Or is it strictly a deontological thing: there can be no discrimination, therefore we must disallow, or at least disincentivize, product development processes that release versions before they are equally workable for everyone?


I'd also point out that organizations can have incentives that align with accessibility and inclusion rather than with discrimination, accidental or otherwise.

For example, having a textual version of a video or image makes your page better optimized for basic search engines.
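As a rough sketch of that kind of double win (my own illustration, not a reference to any real tool): a few lines of Python can flag images that ship with no alt text, which hurts screen-reader users and plain-text crawlers alike.

    # Rough illustrative check: find <img> tags with missing or empty alt
    # text, the textual fallback that both screen readers and simple
    # crawlers rely on.
    from html.parser import HTMLParser

    class MissingAltChecker(HTMLParser):
        def __init__(self):
            super().__init__()
            self.missing = []

        def handle_starttag(self, tag, attrs):
            if tag == "img":
                attr_map = dict(attrs)
                if not attr_map.get("alt"):
                    self.missing.append(attr_map.get("src", "<unknown src>"))

    page = '<p>Hi</p><img src="chart.png"><img src="logo.png" alt="Company logo">'
    checker = MissingAltChecker()
    checker.feed(page)
    print(checker.missing)  # ['chart.png']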

Now, of course, the same incentives could lead to dark patterns, lack of privacy, or even dividing people further, perhaps becoming a platform for divisiveness.

Personally, I think that we can identify and mark certain dark patterns as illegal, outright. We can encourage plugins and browser settings to enhance privacy.

The hardest question is how much we need everything to be perfectly accessible and to not cause further discrimination or harm. That last one is difficult. I think ultimately the best answer we have is to classify some bad actors like we would dark patterns.

This forum is an example of how rules and community can lead to better civil discourse online, but it's not necessarily as diverse as it could be due to those same rules. I think there will always be an unregulated middle ground where no one takes responsibility until they (a) have to and (b) understand how to, and that's especially true of government services at local and regional levels.


Not necessarily trans people. But there are plenty of tech products that inadvertently discriminated against Black people because of bad training data. The best known case is Google’s AI recognizing Black people as “gorillas”.

To this day, our home security system sometimes recognizes my big Black stepson as an “animal.”

Facial recognition used by law enforcement misrecognized minorities far more often than White people.


If we're using that example of inadvertent discrimination, I think it's certainly feasible to have non-political discussion around improving it.

What you described sounds like a product flaw. Customers/users won't want to use a product that delivers sub-optimal results. Any internal employee saying "hey, we have an error rate of X% for this Y segment, and they represent Z% of the user base" isn't engaging in controversial political discussion (in my opinion).
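A minimal sketch of that kind of internal report (the segment names and numbers below are made up for illustration): compute the error rate per user segment from an evaluation run and flag where it diverges.

    # Illustrative only: per-segment error rates from a labeled eval run.
    from collections import defaultdict

    def error_rate_by_segment(results):
        """results: iterable of (segment, was_error) pairs."""
        totals, errors = defaultdict(int), defaultdict(int)
        for segment, was_error in results:
            totals[segment] += 1
            errors[segment] += int(was_error)
        return {seg: errors[seg] / totals[seg] for seg in totals}

    eval_results = ([("segment_a", False)] * 97 + [("segment_a", True)] * 3
                    + [("segment_b", False)] * 80 + [("segment_b", True)] * 20)

    for segment, rate in error_rate_by_segment(eval_results).items():
        print(f"{segment}: {rate:.1%} error rate")
    # segment_a: 3.0% error rate
    # segment_b: 20.0% error rate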

However, if an individual chose to describe this flaw in more loaded language, it could easily turn political and combative for the team.


If the “customers” were law enforcement - who already racially profile on the flimsiest of excuses - that might be seen as a feature not a bug.

If AI/ML says someone “fits the description” what better feature than being able to blame it on the computer? If law enforcement was willing to deal with the flaw, and if tech companies were willing to sell it to them, what’s to stop an unscrupulous company from continuing to sell it if they didn’t get push back?


> What you described sounds like a product flaw.

It would never have shipped with a product flaw that frequently caused it to classify white people as something insulting.


'Never' is a strong word that's doing a lot of work here, and I don't think the counterfactual can be proved one way or the other.

I do think your comment here, with an unprovable statement about an immutable characteristic, stated with absolute certainty, would invite toxic political discussions.

Speaking with humility, and honestly trying to improve processes to yield better results, is the type of communication that I advocate for on teams I'm involved with.


There is speaking with humility, and then there is just being downright, purposefully naive.

Of course they never would have shipped it if it didn't recognize white faces. And the bug isn't in the software; the bug is that the companies producing this stuff don't have a single solitary person of color working on or testing the product who would certainly have noticed that it didn't work on them, or that it labeled them an animal, etc.



