Employers’ Use of AI Tools Can Violate the Americans with Disabilities Act (justice.gov)
257 points by pseudolus on May 13, 2022 | 133 comments



One point that I think is under-discussed in the AI bias area:

While it is true that using an algorithmic process to select candidates may introduce discrimination against protected groups, it seems to me that it should be much easier to detect and prove than with previous processes with human judgement in the loop.

You can just subpoena the algorithm and then feed test data to it, and make observations. Even feed synthetic data like swapping in “stereotypically black” names for real resumes of other races, or in this case adding “uses a wheelchair” to a resume. (Of course in practice it’s more complex but hopefully this makes the point.)
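
As a rough sketch of the kind of probe I mean (the scoring function here is a hypothetical black box, and real testing would need far more rigor):

    # Counterfactual probe of a hypothetical resume-scoring model.
    # score_resume() stands in for whatever black-box API the vendor exposes.
    def disability_flag_effect(score_resume, resume_text):
        base = score_resume(resume_text)
        flagged = score_resume(resume_text + "\nUses a wheelchair.")
        return flagged - base  # consistently negative gaps would be a red flag

Run that over a large batch of real or synthetic resumes and look at the distribution of gaps.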

With a human, you can’t really do an A/B test to determine if they would have prioritized a candidate if they hadn’t included some signal; it’s really easy to rationalize away discrimination at the margins.

So while most AI/ML developers are not currently strapping their models to a discrimination-tester, I think the end-state could be much better when they do.

(I think a concrete solution would be to regulate these models to require a certification with some standardized test framework to show that developers have actually attempted to control these potential sources of bias. Google has done some good work in this area: https://ai.google/responsibilities/responsible-ai-practices/... - though there is nothing stopping model-sellers from self-regulating and publishing this testing first, to try to get ahead of formal regulation.)


>With a human, you can’t really do an A/B test to determine if they would have prioritized a candidate if they hadn’t included some signal; it’s really easy to rationalize away discrimination at the margins.

Which is part of the reason that discrimination doesn't have to be intentional for it to be punishable. This is a concept known as "disparate impact". The Supreme Court has issued decisions[1] that a policy which negatively impacts a protected class and has no justifiable business related reason for existing can be deemed discrimination regardless of the motivations behind that policy.

[1] - https://en.wikipedia.org/wiki/Griggs_v._Duke_Power_Co.


Justifiable business reason is still a strong bar. For example, with no evidence in either direction for a claim there is no justifiable business reason even if the claim is somewhat intuitive. So if you want to require high-school diplomas because you think people who have them will do the job better you better track that data for years and be prepared to demonstrate it if sued. If you want to use IQ tests because you anticipate smarter people will do the job better you better have IQ tests done on your previous employee population demonstrating the correlation before imposing the requirement.


EDIT: my parent edited and replaced their entire comment, it originally said "you can't use IQ tests even if you prove they lead to better job performance". I leave my original comment below for posterity:

This is not true, IQ tests in the mentioned Griggs v. Duke Power Co. (and similar cases) were rejected as disparate impact specifically because the company provided no evidence they lead to better performance. To quote the majority opinion of Griggs:

> On the record before us, neither the high school completion requirement nor the general intelligence test is shown to bear a demonstrable relationship to successful performance of the jobs for which it was used. Both were adopted, as the Court of Appeals noted, without meaningful study of their relationship to job performance ability.


Wouldn't that be trivial if you have your training data set?


He didn't say anything about intention, though. He just talked about the counterfactual. Disparate impact is about the counterfactual scenario.


They said "it’s really easy to rationalize away discrimination at the margins." My reply was pointing out that there is little legal protection in rationalizing away discrimination at the margins because tests for disparate impact require the approach to also stand up holistically which can't easily be rationalized away.


I think perhaps you are looking at a different part of the funnel; disparate impact seems to be around the sort of requirements you are allowed to put in a job description. Like “must have a college degree”.

However the sort of insidious discrimination at the margin I was imagining are things like “equally-good resumes (meets all requirements), but one had a female/stereotypically-black name”. Interpreting resumes is not a science and humans apply judgement to pick which ones feel good, which leaves a lot of room for hidden bias to creep in.

My point was that I think algorithmic processes are more testable for these sorts of bias; do you feel that existing disparate impact regulations are good at catching/preventing this kind of thing? (I’m aware of some large-scale research on name-bias on resumes but it seems hard to do in the context of a single company.)


>disparate impact seems to be around the sort of requirements you are allowed to put in a job description.

That is a common example, but it is much broader than what goes on a job ad. For example, I have heard occasional rumblings about how whiteboard interviews are a hiring practice that would not stand up to these laws (IANAL).

>My point was that I think algorithmic processes are more testable for these sorts of bias

Yes, this is true, but that doesn't really matter. If there is consistent discrimination happening at the margins, that will be evident holistically. If that is evident holistically and there is no justification for it, that is all we need. We don't need to run resumes through an algorithm to show that discrimination is happening at an individual level. We just need to show that a policy negatively impacts a protected group and that the policy is not related to job performance.
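
As a toy illustration (the numbers here are invented), the starting point is often a selection-rate comparison like the EEOC's "four-fifths" rule of thumb:

    # Illustrative selection-rate comparison (EEOC "four-fifths" rule of thumb).
    hired_a, applied_a = 12, 100    # protected group (hypothetical counts)
    hired_b, applied_b = 30, 150    # comparison group (hypothetical counts)
    rate_a = hired_a / applied_a    # 0.12
    rate_b = hired_b / applied_b    # 0.20
    impact_ratio = rate_a / rate_b  # 0.60; below ~0.8 is commonly treated as adverse impact

That is evidence of impact, not a verdict; the employer can still try to show the practice is job-related.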

>do you feel that existing disparate impact regulations are good at catching/preventing this kind of thing?

I think the bigger problem than the regulations is that there is an inherent bias against these type of cases actually being pursued. First, it is difficult to identify this as an individual so people don't know when it is happening. Additionally, people fear the retribution that would come from pursuing this legally. People don't want to be viewed as a pariah by future employers so they often will simply move on even if their accusations are valid.


Yes, but a holistic test requires a realistic counterfactual. That's the problem. There is no way to evaluate that counterfactual for a human interviewer.

It is true that extreme bias/discrimination will be evident, but smaller bias/discrimination, particularly in an environment where the pool is small (say, black women for engineering roles) is extremely hard to prove for a human interviewer. Your sample size is just going to be too small. On the other hand, if you have an ML algorithm, you can feed it arbitrary amounts of synthetic data, and get precise loadings on protected attributes.
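
Something like this sketch, where make_resume_pair() and score() are hypothetical stand-ins for a synthetic-resume generator and the model under test:

    import statistics

    def attribute_effect(score, make_resume_pair, n=10_000):
        # Each pair is identical except for the protected attribute.
        gaps = [score(with_attr) - score(without_attr)
                for with_attr, without_attr in (make_resume_pair() for _ in range(n))]
        return statistics.mean(gaps), statistics.stdev(gaps)

With a human reviewer you get at most a handful of paired observations; with the model you can make n as large as you like.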


Everything has a disparate impact, so now everything is illegal.



No, you’re wrong here, it’s not a term of art.


It sure looks like it is.

> Disparate impact in United States labor law refers to ...

https://en.wikipedia.org/wiki/Disparate_impact


Your link is evidence for my side. It uses the plain definition. The plain meaning of the words.


If you ever intend to study law, become involved in a situation dealing with disparate impact, or find yourself on the receiving end of disparate impact, knowing the legal definition may be helpful too. The DoJ spells out[1] the legal definition of disparate impact as follows:

    ELEMENTS TO ESTABLISH ADVERSE DISPARATE IMPACT UNDER TITLE VI

    Identify the specific policy or practice at issue; see Section C.3.a.
    Establish adversity/harm; see Section C.3.b.
    Establish disparity; see Section C.3.c.
    Establish causation; see Section C.3.d.
1. https://www.justice.gov/crt/fcs/T6Manual7#D


My point is that by the plain meaning of words you're right, disparate impact means any two groups impacted differently, regardless of anything else. In law, it means that an employment, housing, etc. policy has a disproportionately adverse impact on members of a protected class compared to non-members of that same class. It's much more specific and narrowly defined.


That is in fact not more narrow; Ferrotin's original claim that "everything has a disparate impact" is correct with "disparate impact" so defined.


I agree that discrimination would be a lot easier to objectively prove after the fact, but it also would be far easier to occur in the first place, since many hiring managers would blindly "trust the AI" without a second thought.


From my experience working on projects where we trained models, usually it’s obviously completely broken on the first attempt and requires a lot of iteration to get to a decent state. “Trust the AI” is not a phrase anyone involved would utter. It’s more like: trust that it is wrong for any edge case we didn’t discover yet. Can we constrain the possibility space any more?


Most hiring managers wouldn't make it to the end of the phrase "constrain the possibility space"


"Trust the AI" could mean uploading a resume to a website and getting a "candidate score" from somebody else's model.

Because I'll tell you, there's millions of landlords and they blindly trust FICO when screening candidates. Maybe not as the only signal, but they do trust it without testing it for edge cases.


Definitely could be so, particularly in these early days where frameworks and best-practices are very immature. Inasmuch as you think this is likely, I suspect you should favor regulation of algorithmic processes instead of voluntary industry best-practices.


There is a very real danger of models being biased in a way that doesn't show up when you apply these crude hacks to inputs. It seems to me we have to be much more deliberate, much more analytical, and much more thorough in testing models if we want to substantially reduce or even eliminate discrimination.

Yes, you can A/B test the model if you can design reasonable experiments. You still don't have a general discrimination test, because you have to define what a reasonable input distribution is and what reasonable outputs are.

If an employer is looking to hire an engineer with a CS degree from a top-tier university, and they use an AI model to evaluate resumes, and it returns a number of successes for black candidates very similar to the population distribution of graduates from those programs, is the model discriminatory?

There are still hard problems here because any natural baseline you use for a model may in fact be wrong and designing a reasonable distribution of input data is almost impossibly hard as well.


Yes, in practice it’s actually way more complex than I gestured at. The Google bias toolkit I linked does discuss in much more detail, but I am not a data scientist and haven’t used it; I’d be interested in expert opinions. (They also have some very good non-technical articles discussing the general problems of defining “fairness” in the first place.)


I don’t think it’s adequate to attempt to prevent discrimination. Freedom from discrimination is core to our fundamental human rights. It’s necessary to succeed at preventing discrimination.

“We applied best practices in the field to limit discrimination” should not be an adequate legal defence if the model can be shown to discriminate.

To clarify further, just because you tried to prevent discrimination doesn’t mean you should be off the hook for the material harms of discrimination to a specific individual. Otherwise people don’t have a right to be protected against discrimination they only have a right to people ‘trying’ to prevent discrimination. We shouldn’t want to weaken rights that much even if it means we have to be cautious in how we adopt new technologies.


> With a human, you can’t really do an A/B test to determine if they would have prioritized a candidate if they hadn’t included some signal; it’s really easy to rationalize away discrimination at the margins.

Not for individual candidates, no. But you can introduce a parallel anonymized interview process and compare the results.


Actually you kind of can't. You don't have a legal basis for forcing the company to run that experiment.


The problem with AI is that when it makes discriminatory hiring decisions, it does so systematically and mechanically. Incidentally, "systematic" and "discrimination" are two words you never want to see consecutively in a letter from the EEOC or OFCCP.


The reason you never want to see those words together is that isolated discrimination may result in a single lawsuit but systemic discrimination is a basis for class action.


It's under-discussed, as is any empirical study of ML systems, i.e., treating them as targets of analysis.

As soon as you do this, they're revealed to exploit only statistical coincidences and highly fragile heuristics embedded within the data provided. And likewise, they're pretty universally discriminatory when human data is involved.


The linked article gives some examples that I think are very useful clarifications:

https://www.eeoc.gov/tips-workers-americans-disabilities-act...

> The format of the employment test can screen out people with disabilities [for example:] A job application requires a timed math test using a keyboard. Angela has severe arthritis and cannot type quickly.

> The scoring of the test can screen out people with disabilities [for example:] An employer uses a computer program to test “problem-solving ability” based on speech patterns for a promotion. Sasha meets the requirements for the promotion. Sasha stutters so their speech patterns do not match what the computer program expects.

Interestingly, I think the second one is problematic for common software interview practices. If your candidate asked for an accommodation (say, no live rapid-fire coding) due to a recognized medical condition, you would be legally required to provide it.

This request hasn’t come up for me in all the (startup) hiring I’ve done, but it could be tough to honor fairly on short notice, so it’s worth thinking about in advance.


I once interviewed while wearing a cast, and two different YC start-ups gave me speed coding problems. One even made me type on their laptop instead of a split keyboard I had where I could actually reach all the keys. They used completion time as a metric even though I asked for an accommodation, and it was obvious as I typed in front of them that the cast was a major drag on me.

Pretend your colleague had a cast and couldn’t type for a few weeks. Is that person going to get put on the time-sensitive demo where 10k SLOC need to be written this week? Or the design / PoC project that requires much less SLOC but nobody knows if it will work? Or the process improvement projects that require a bunch of data mining, analysis, and presentation?

It’s not hard to find ways to not discriminate against disabilities on short notice. The problem is, at least in my experience with these YC start-ups who did not, there’s so much naïveté combined with self-righteousness that they’d rather just bulldoze through candidates like every other problem they have.


Amusingly, this might actually be an ADA infraction. See https://www.smithlaw.com/newsletter?item_id=75 - temporary injuries can be a disability. (IANAL)


If someone presented me with a speed-timed programming exercise, I'd walk out the door.


Any in-person coding exercise with a time-box (say, the standard one-hour slot) is “timed” in some sense. I don’t think we always consider it as such, but if you can’t type fast due to arthritis it could definitely be problematic.


I walk for any code monkey hoop jump exercises. Timed or not.

When you apply to be a carpenter they don’t make you hammer nails, when you apply to be an accountant they don’t have you prepare a spreadsheet for them, etc.

I don’t work (even in interviews) for free.


I don't mind them. I expect any company worth a damn to want to screen out people who can't code. When I work there and interview other candidates, I don't want my time to be wasted, and I don't want to work with people who can't do their job.

A quick coding test is something that any place where people should know how to code has to do; doing it through one of those platforms seems perfectly reasonable, and I'm happy to do it.

Writing fizzbuzz is not "working for free" any more than any other form of interviews.

And is the "when you apply to be a carpenter" sentence really true? I've heard of the interview process for welders being "here's a machine and two pieces of metal, I'll watch".


What if the job requires you to type quickly? Why would someone with arthritis even want a job where you have to type quickly? Is that really discrimination or is that the candidate simply not being able to perform the job?


What you are describing is called a Bona Fide Occupational Qualification (BFOQ). The specifics of what sort of attributes might be covered for what jobs is something courts hash out, but broadly: if you're hiring workers for a warehouse it's fine to require workers be able to lift boxes. If you're hiring airline pilots, it's fine to turn away blind people. Etc.


If the job actually requires typing quickly, like a court recorder’s does, then there is a basis to require typing quickly. If the job doesn’t actually require it, as with, for example, a programmer, then enforcing the requirement anyway is discrimination.


Most jobs that involve typing benefit from being able to type quickly.

For example, I am a frequent customer of U-Haul. I learned to not use the branch that’s closest to me, because some employees there are really slow with computers, which makes checking out equipment very slow, and frequently results in a long line of waiting customers. Driving 5 extra minutes saves me 20 minutes of waiting for employees to type in everything and click through the system.

And this is freaking U-Haul. If you’re a software engineer, slow typing is also a productivity drain: a 3-minute email becomes a 6-minute one, a 20-minute Slack conversation becomes a 30-minute one, etc. It all adds up.


Small productivity drains on minority portions of the task are not a requirement of doing the job. Software developers generally spend more time thinking than typing. Typing is not the bottleneck of the job (at least for the vast majority of roles).


Sure, of course typing is not the biggest bottleneck in software engineer job. That doesn’t mean it’s irrelevant for productivity.

Consider another example: police officers need to do a lot of typing to create reports. A fast typing officer can spend less time writing up reports, and more time responding to calls. That makes him more productive, all else being equal. Of course it would be silly to consider typing speed as a sole qualification for a job of police officer (or, for that matter, a software engineer), but it is in no way unreasonable to take it into account when hiring.


> Most jobs that involve typing benefit from being able to type quickly.

Maybe. If you type 10,000 words per minute but your entire module gets refactored out of the codebase next week, is your productivity anything higher than 0?

Multiple times in my career, months or even years worth of my team's work was tossed in the trash because some middle manager decided to change directions. A friend of mine is about ready to quit at AMZN because the product he was supposed to launch last year keeps getting delayed so they can rewrite pieces of it. Maybe some people should have thought more and typed less.


> Maybe. If you type 10,000 words per minute but your entire module gets refactored out of the codebase next week, is your productivity anything higher than 0?

If you spent less time typing that module that later went to trash, you are, in aggregate, more productive than someone who spent more time typing the same module.

This sort of argument only makes sense if you assume that there is some sort of correlation, where people who are slower at typing are more likely to make better design or business decisions, all else being equal. I certainly have no reason to believe it to be true. Remember we are talking about the issue in context of someone who is slow at typing because of arthritis. Does arthritis make people better at software design, or communication? I don’t think so.


Dragon Naturally Speaking is the definition of a reasonable accommodation. Maybe not a court transcriptionist but almost all jobs with typing would be fine with it.


When job requirements actually match the job, then you can worry about this.


Think about what you just wrote. This is a programming job, not something like a transcriptionist gig. Why do you feel that your “what if” is appropriate?

Besides, the point seems to have been about interview practices. You know, those practices which are often quite removed from the actual on-the-job tasks.

What if I was disabled to the degree that I couldn’t leave the house, but I could work remotely (an office job)? That’s what accommodations are for.


Even the very requirement to "apply online" has been quite effective at making it very difficult for a sub-section of the working population to succeed at applying.

There are many (and I know quite a few) people who are quite capable at their jobs and entirely computer-ineffective. As they're forced more and more to deal with confusing two-factor requirements and other computer-related things that we're just "used" to, they get discouraged and give up.

For now you can often help them fill it out, but at some point that's going to be unwieldy or insufficient.


I've been disabled since 17, finally got approved for SSDI in 2017 after almost two decades of struggling to work and make ends meet. I can say definitively that employment in the US does not follow the guidelines as they exist and those guidelines are too weighted in the favor of the employer in the first place.

A simple example of this is that often part-time hours would allow me to continue working and there is nothing in the ADA/EEO or FMLA that guarantees a worker's right to keep their job at part-time indefinitely. The obligation to perform to the level of a typical worker is squarely on the shoulders of the disabled. Good luck finding an employer generous enough to deal with all of your needs.

85% of the people with my diagnosis are unemployed despite the fact most of us want to work. The algorithms will probably help make sure it's more like 90-95% of us. But at least the stakeholder involvement process makes people feel like they have a voice even as they are categorically excluded from ever being fully-functional human beings.


This doesn't matter until someone tries suing them for it, right?

And as I understand it, you don't really have a case without evidence that the hiring algorithm is discriminating against people with disabilities.

How would an individual even begin to gather that evidence?


The process of gathering evidence after the suit has started is called discovery.

There are three major kinds of evidence that would be useful here. Most useful but least likely: email inside the company in which someone says "make sure that this doesn't select too many people with disabilities" or "it's fine that the system isn't selecting people with disabilities, carry on".

Useful and very likely: prima facie evidence that the software doesn't make necessary reasonable accommodations - a video captcha without an audio alternative, things like that.

Fairly useful and of moderate likelihood: statistical evidence that whatever the company said or did, it has the effect of unfairly rejecting applicants with disabilities.


And one could go a step further: run the software itself and show that it discriminates. One doesn't just have to look at past performance of the software; it can be fed inputs tailored to bring out discriminatory performance. In this way software is more dangerous to the defendant than manual hiring practices; you can't do the same thing to an employee making hiring decisions.


How would you make sure that the supplied version has the same weights as the production version? And wouldn't the weights and architecture be refined over time anyway?


Perjury laws. Once a judge has commanded you to give the same AI, you either give the same AI, or truthfully explain that you can't. Any deviation from that and everyone complicit is risking jail time, not just money.

"this is the June 2020 version, this is the current version, we have no back ups in between" is acceptable if true. Destroying or omitting an existing version is not.


Note that not having backups is something that you can sue the company for as an investor. If you say you have the June 2020 version but not the July one that was asked for, you are fine (it is reasonable to keep daily backups for a month, monthly backups for a year, and then yearly backups). Though even then I might be able to sue you for not having version control of the code.


True, but if you really never had it, that's money, not jail time.


If a non-hired employee brings a criminal action, this may matter.

For a civil action, the burden of proof is "preponderance of evidence," which is a much lower standard than "beyond a reasonable doubt." "Maybe the weights are different now" is a reasonable doubt, but in a civil case the plaintiff could respond "Can the defendant prove the weights are different? For that matter, can the defendant even explain to this court how this machine works? How can the defendant know this machine doesn't just dress up discrimination with numbers?" And then it's a bad day for the defendant to the tune of a pile of money if they don't understand the machine they use.


Don't most production NN or DLN optimize to a maximum?

Seems like the behavior becomes predictable and then you have to retrain if you see unoptimal results.


> How would you make sure that the supplied version has the same weights as the production version?

You just run the same software (with the same state database, if applicable).

Oh wait, I forgot, nobody knows or cares what software they're running. As long as the website is pretty and we can outsource the sysop burden, well then, who needs representative testing or the ability to audit?


I am not sure, but if I remember correctly employer must prove they are not discriminating. And just because they are using AI they are not immune to litigation.


How can the employer prove a negative?

At most I imagine the plaintiff is allowed to do discovery, and then has to prove positive discrimination based on that.


If you read the document again (?) maybe you'll see it's not about proving a negative. Instead, it's a standard of due care. Did you check whether using some particular tool illegally discriminates and document that consideration? From the document itself:

"Clarifies that, when designing or choosing technological tools, employers must consider how their tools could impact different disabilities;

Explains employers’ obligations under the ADA when using algorithmic decision-making tools, including when an employer must provide a reasonable accommodation;"


If it's a civil case, it's just the preponderance of the evidence. The jury just has to decide who they think is more likely to be correct.


> I am not sure, but if I remember correctly employer must prove they are not discriminating.

That seems backwards, at least in the US.


This is what that demographic survey at the end of job applications is for. It can reveal changes in hiring trends, especially in the demographics of who doesn't get hired. I don't know how well it works in practice.


I am a person, not a statistic. I always decline to answer these surveys; I encourage others to do the same.


Those are for persuading people who do see you as a statistic. You can unilaterally disarm if you like, but they're going to keep discriminating until they see data that proves they're discriminating. Far too few people are persuaded by other means.


I also do this. But given the context of this post ("AI" models filtering resumes prior to ever getting in front of a human), maybe "decline to answer" comes with a hidden negative score adjustment that can't be (legally) challenged.

I think the Americans with Disabilities Act (ADA) requires notification. (i.e. I need to talk to HR/boss/whoever about any limitations and reasonable accommodations.) If I am correct, not-answering the question "Do you require accommodations according to the ADA? []yes []no []prefer not to answer" can legally come with a penalty, and the linked DoJ reasoning wouldn't stop it.


"Employers should have a process in place to provide reasonable accommodations when using algorithmic decision-making tools;"

"Without proper safeguards, workers with disabilities may be “screened out” from consideration in a job or promotion even if they can do the job with or without a reasonable accommodation; and"

"If the use of AI or algorithms results in applicants or employees having to provide information about disabilities or medical conditions, it may result in prohibited disability-related inquiries or medical exams."

This makes it sound like the employer needs to ensure their AI is allowing for reasonable accommodations. If an AI can assume reasonable accommodations will be provided, then what benefit would an employer ever get from having it assume they won't supply the reasonable accommodations that they are legally required to?


I’m trying to but my employer has said they will use “observer-identified” info to fill it in for me. I find it ridiculous that I can’t object to having someone guess my race and report that to the government.


That sounds broken. It's supposed to be voluntary.

PDF: https://www.eeoc.gov/sites/default/files/migrated_files/fede...

>> "Completion of this form is voluntary. No individual personnel selections are made based on this information. There will be no impact on your application if you choose not to answer any of these questions"

Your employer shouldn't even be able to know whether or not you filled it out.


My experience is the reporting on current employees, which I guess is not voluntary. It's not very clear though:

"Self-identification is the preferred method of identifying race/ethnicity information necessary for the EEO-1 Component 1 Report. Employers are required to attempt to allow employees to use self-identification to complete the EEO-1 Component 1 Report. However, if employees decline to self-identify their race/ethnicity, employment records or observer identification may be used. Where records are maintained, it is recommended that they be kept separately from the employee’s basic personnel file or other records available to those responsible for personnel decisions."

From: https://eeocdata.org/pdfs/201%20How%20to%20get%20Ready%20to%...


These days, disparate impact is taken as evidence of discrimination, so it's easy to find "discrimination".


What's the difference? Discrimination is an effect more than an intent. Most people are decent and well-intentioned and don't mean to discriminate, but it still happens. If there's a disparate impact, what do you imagine causes that if not discrimination? Remembering that we all have implicit bias and it doesn't make you a mustache-twirling villain.


>If there's a disparate impact, what do you imagine causes that if not discrimination?

20+ years of environmental differences, especially culture? The disabilities themselves? Genes? Nothing about human nature suggests that all demographics are equally competent in all fields, regardless of whether you group people by race, gender, political preferences, geography, religion, etc. To believe otherwise is fundamentally unscientific, though it's socially unacceptable to acknowledge this truth.

>Remembering that we all have implicit bias

This doesn't tell you anything about the direction of this bias, but the zeitgeist is such that it is nearly always assumed to go in one direction, and that's deeply problematic. It's an overcorrection that looks an awful lot like institutional discrimination.

>Remembering that we all have implicit bias and it doesn't make you a mustache-twirling villain.

Except that if you push back against unilateral accusations of bias while belonging to one, and only one, specific demographic, you effectively are treated like a mustache-twirling villain. No one is openly complaining about "too much diversity" and keeping their job at the moment. That's bias.


There is no scientific literature which confirms that any specific demographic quality determines an individual's capability at any job or task.

What does exist, at best, shows mild correlation over large populations, but nothing binary or deterministic at an individual level.

To wit, even if your demographic group, on average, is slightly more or less successful in a specific metric, there is no scientific basis for individualized discrimination.

It's not "socially unacceptable to acknowledge this truth"; it's socially unacceptable to pretend discrimination is justified.


>There is no scientific literature which confirms that any specific demographic quality determine's an individuals capability at any job or task

There absolutely is a mountain of research which unambiguously implies that different demographics are better or worse suited for certain industries. A trivial example would be average female vs male performance in physically demanding roles.

Now what is indeed missing is the research which takes the mountain of data and actually dares to draw these conclusions. Because the subject has been taboo for some 30-60 years.

>To whit, even if your demographic group, on average, is slightly more or less successful in a specific metric, there is no scientific basis for individualized discrimination

We are not discussing individual discrimination, I am explaining to you that statistically significant differences in demographic representation are extremely weak evidence for discrimination. Or are you trying to suggest that the NFL, NBA, etc are discriminating against non-blacks?

>It's "not socially unacceptable to acknowledge this truth", it's socially unacceptable to pretend discrimination is justified

See above, and I'm not sure if you're being dishonest by insinuating that I'm trying to justify discrimination or if you genuinely missed my point. Because that's how deeply rooted this completely unscientific blank slate bias is in western society.

Genes and culture influence behavior, choices, and outcomes. Pretending otherwise and forcing corrective discrimination for your pet minority is anti-meritocratic and is damaging our institutions. Evidenced by the insistence by politicized scientists that these differences are minor.

A single standard deviation difference in mean IQ between two demographics would neatly and obviously explain "lack of representation" among high paying white collar jobs; I just can't write a paper about it if I'm a professional researcher or I'll get the James Watson treatment for effectively stating that 2+2=4. This isn't science, our institutions have been thoroughly corrupted by such ideological dogma.


The usual view of meritocracy is this sports-like idea of wanting to see each person's inherent capability shine though.

Instead, we could give everyone the absolute best tech and social support, and only then evaluate performance, not of individuals, but of individuals+tech, the same way we evaluate a pilot's vision with their glasses on.


Please link any study which shows a deterministic property and not broad averages.


Broad averages of what? Difference in muscle characteristics and bone structure between males and females? Multiple consistent studies showing wide variance in average IQ among various demographics? The strong correlation between IQ and all manner of life outcomes, including technical achievements?

Or are you asking me to find a study which shows which specific cultural differences make large swaths of people more likely to, say, pursue sports and music versus academic achievement? Or invest in their children?

Again, the evidence is ubiquitous, overwhelming, and unambiguous. Synthesizing it into a paper would get a researcher fired in the current climate, if they could even find funding or a willing publisher; not because it would be factually incorrect, but because the politicized academic culture would find a title like "The Influence of Ghetto Black Cultural Norms on Professional Achievement" unpalatable if the paper didn't bend over backwards to blame "socioeconomic factors". Which is ironic because culture is the socio in socioeconomics, yet I would actually challenge YOU to find a single modern paper which examines negative cultural adaptations in any nonwhite first world group.

Further, my argument has been dishonestly framed (as is typical) as a false dichotomy. I'm not arguing that discrimination doesn't exist, but the opposition is viciously insisting that all differences among groups are too minor to make a difference in a meritocracy, and that anyone who questions otherwise is a bigot.


I did not call you a bigot. I never made any assumptions or aspersions as to your personal beliefs.

I am pointing out that, despite your claim that your viewpoint is rooted in science, you have no scientific basis for your belief beyond your own synthesis of facts which you consider "ubiquitous, overwhelming, and unambiguous".

You have a belief unsupported by scientific literature. If you want to claim that the reason it is unsupported is because of a vast cultural conspiracy against the type of research which would prove your point, you're free to do so.


>You have a belief unsupported by scientific literature

I have repeatedly explained to you that the belief is indeed supported by a wealth of indirect scientific literature.

>You have a belief unsupported by scientific literature. If you want to claim that the reason it is unsupported is because of a vast cultural conspiracy against the type of research which would prove your point, you're free to do so.

Calling it a conspiracy theory is a dishonest deflection. It is not a conspiracy, it is a deeply rooted institutional bias. But I can play this game too: can you show me research which rigorously proves that genes and culture have negligible influence on social outcomes? Surely if this is such settled science, it will be easy to justify, right?

Except I bet you won't find any papers examining the genetic and/or cultural influences on professional success in various industries. It's like selective reporting, lying through omission with selective research instead.

But you will easily find a wealth of unfalsifiable and irreproducible grievance studies papers which completely sidestep genes and culture while dredging for their predetermined conclusions regarding the existence of discrimination. And because the socioeconomic factors of genes and culture are a forbidden topic, you end up with the preposterous implication that all discrepancies in representation must be the result of discrimination, as in the post that spawned this thread.


>If there's a disparate impact, what do you imagine causes that if not discrimination?

Disparate impact is often caused by discrimination upstream in the pipeline, not discrimination on the part of the hiring manager. Suppose that due to systematic discrimination, demographic X is much more likely than demographic Y to grow up malnourished in a house filled with lead paint. The corresponding cognitive decline amongst X people would mean they are less likely than Y people to succeed in (or even attend) elementary school, high school, college, and thus the workplace.

A far smaller fraction of X people will therefore ultimately be qualified for a job than Y people. This isn’t due to any discrimination on the part of the hiring manager.


The reason these two collide so often in American law is that the two historically overlap.

When a generation of Americans force all the people of one race to live in "the bad part of town" and refuse to do business with them in any other context, that's obviously discrimination. If a generation later, a bank looks at its numbers and decides borrowers from a particular zip code are higher risk (because historically their businesses were hit with periodic boycotts by the people who penned them in there, or big-money business simply refused to trade with them because they were the wrong skin color), draws a big red circle around their neighborhood on a map, and writes "Add 2 points to the cost" on that map... Discrimination or disparate impact? Those borrowers really are riskier according to the bank's numbers. But red-lining is illegal, and if 80% of that zip code is also Hispanic... Uh oh. Now the bank has to prove they don't just refuse Hispanic business.

And the problem with relying on ML to make these decisions is that ML is a correlation engine, not a human being with an understanding of nuance and historical context. If it finds that correlation organically (but lacks the context that, for example, maybe people in that neighborhood repay loans less often because their businesses fold because the other races in the neighborhood boycott those businesses for being "not our kind of people") and starts implementing de-facto red-lining, courts aren't going to be sympathetic to the argument "But the machine told us to discriminate!"


Quite apart from the fact that implicit bias doesn't replicate, if you have 80% male developers it is not because you are discriminating against women; it is because the pool you hire from is mostly men.

If you refuse to hire a woman because she is a woman, you are discriminating. Fortunately that is historically rare today.


It is a strange twist but when filling out the many prescreening questions at Google, I was immediately sent an email asking for an interview.

Sadly, they insist that they would only call me at an agreed time and not take any incoming call from me, even at their agreed upon timeframe.

Why is this an important instance of AI bias against people with disabilities? I need to line up a voice interpreter who would do American Sign Language so I can understand what a group of Google interviewers would be saying. And voice interpreters require a 3-way call that I can help set up.

I am capable of understanding them perfectly through lipreading, given that I lost all high-frequency discrimination due to a childhood high fever. I cannot tell the difference between B, V, P, D, Z, or E, but lipreading can nail them all. I am an excellent speaker of the English language despite being technically and governmentally classified as Deaf.

So, have I been wronged? I still think so … to this day.


I encountered this recently on Facebook Marketplace. I post ads for houses for rent, and the ads say "no pets". This has been fine for 20+ years on craigslist, but on Facebook Marketplace the minute some guy writes that he "has a service animal" and you don't respond the right way, your ad gets blocked/banned. You basically have to accept these people: even though the law allows you to prohibit animals, service animals must be accepted, otherwise you violate the ADA. I knew a guy when I was living in Sunnyvale who had a cat that was a registered service animal, and he would get kicked out of every hotel he went to because they don't allow animals/pets, and then he would sue the owner under ADA laws and collect ~40k from each hotel owner. It's a real racket.


> I knew this guy when I was living in Sunnyvale he had a cat that was a registered service animal,

> Beginning on March 15, 2011, only dogs are recognized as service animals under titles II and III of the ADA.

https://www.ada.gov/service_animals_2010.htm


Is it actually possible to hire someone without some level of discrimination involved? Seems like this ideal world where candidates are hired purely on technical ability or merits without regard to any other aspects of their life is impossible.

For example, if I were hiring a programmer, and the programmer was technically competent but spoke with such a thick accent that I couldn't understand them very well, I'd be tempted to pass on that candidate even though they meet all the job requirements. And if it happened every time I interviewed someone from that particular region, I'd probably develop a bias against similar future candidates.


Probably not simply because we are human, but we can minimize some of it.

You wouldn't screen out a person who cannot speak or who cannot speak clearly due to a disability of some sort. You'd use a different method of communication as would everyone else and it could really be the same for them.

On the other hand, if communication was clearly impossible and/or they needed to be understood by the public (customers), the accent may very well mean they cannot do the job, and that's not in the scope of things you can teach someone the way you can teach expectations about customer service.


No, it's not possible. Humans have all kinds of inherent biases.

The big difference is we can prove the bias in an AI. It's a very interesting curveball when it comes to demonstrating liability in the choice making process.


I think the general problem is the law says certain correlations are fair to use and others are not. If you can prove the AI model has no way to separate out which is which you have a fairly sizeable amount of evidence the AI is discriminating. Likely enough evidence for a civil case.

Usually showing that input data is biased in some way or contains a potentially bad field will result in winning a discrimination case.

If neither side can conclusively prove what the model is doing but the plaintiff shows it was trained on data that allows for discrimination and the model is designed to learn patterns in its training data then the defendant is on the hook for showing the model is unbiased. For the most part people design input data uncritically and some of the fields allow for discrimination.


> but the plaintiff shows it was trained on data that allows for discrimination

That's all data; there would be no need to show anything.

There was a paper a while ago by a team of doctors who wanted to use classifiers on X-ray images and FREAKED OUT when they realized that the first thing the classifier did was categorize every image by the race of the patient. As they note, it will always do this regardless of whether the race of the patient is present in the input data. (Because obviously, the race of the patient is part of the information conveyed by the structure of their body, which is what an X-ray shows.)


I'll bet the same AI used in hiring decisions could also be biased against older workers.


IMHO people are not cogs; you cannot measure them all with only a few parameters, we are so much more complex than that, and there should be an absolute ban on the use of AI and software tools to measure (and spy on) employees, only allowing those software tools that facilitate capturing and keeping a record of observations performed by other humans.

I am not saying we humans are perfect at handling that complexity, hence when humans are the ones behind such decision making it is rarely done by a single person.


It is a bit of an aside, but I find it interesting that 1) this issue is approaching an interesting nexus of computer-based efficiency and human "adjustments" (I will just call them) like the ADA that are intentionally and even deliberately inefficient; and 2) that the efficiency- and centralization-based sector of computer "sciences"/development is so replete with extremely contrary types who demand all manner of exceptions, exemptions, and special pleadings.

I find it all very interestingly paradoxical, regardless of everything else.


Current headline is a bit misleading, the point of the article as made clear in the very first paragraph is that this is about AI hiring tools causing potential discrimination. This has nothing to do with AI workers somehow replacing disabled humans, which is what it sounds like.


The first paragraph is exactly what I expected from the headline, ever since the amazon AI gender discrimination story a few years back.

https://www.theguardian.com/technology/2018/oct/10/amazon-hi...


I wonder if this government guidance focuses on imperfections in products that on the whole may be a significant improvement over biases in traditional human screening.


How would this even happen. Why would people put their disabilities on their resume?


The AI learns proxy signals. Name, work experience, skills (e.g., an emphasis on A11Y) ... all have some predictive power for gender, for some sorts of disabilities, ....

You can fix the problem by going nuclear and omitting any sort of data that could serve as a proxy for the discriminatory signals, but it's possible to explicitly feed the discriminatory signals into the model and enforce that no combination of other data amounting to knowledge about them can influence the model's predictions.

There was a great paper floating around for a bit about how you could actually manage that as a data augmentation step for broad classes of models (constructing a new data set which removed implicit biases assuming certain mild constraints on the model being trained on it). I'm having a bit of trouble finding the original while on mobile, but they described the problem as equivalent to "database reconstruction" in case that helps narrow down your search.
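
Not that paper's method, but as a much cruder sketch of the general idea (stripping the component of each feature that is linearly predictable from a protected attribute z):

    import numpy as np

    def residualize(X, z):
        # Remove from each feature the part linearly predictable from the
        # protected attribute z. A crude stand-in, not the paper's technique.
        z = np.asarray(z, dtype=float).reshape(-1, 1)
        z_c = z - z.mean()
        X_c = X - X.mean(axis=0)
        beta = np.linalg.lstsq(z_c, X_c, rcond=None)[0]
        return X_c - z_c @ beta

Simple residualization like this won't stop nonlinear proxies or combinations of columns, which is presumably why the paper's construction is more involved.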


Oh, thank you, this was the question floating in my head as well, this explains it perfectly.


Because you can’t always hide it. Nothing like giving a presentation on a whiteboard when all of a sudden your writing turns to gibberish.

People have a limited tolerance. Then they start telling you to take care of yourself and strongly encouraging you to leave.

It’s why I switched to remote work before the pandemic. If a bad episode starts up I can cover for it much more easily.


Sorry, but that has nothing to do with AI resume review.


Many (most?) employers ask if you are disabled when filling out a job application. I personally don't consider myself disabled, but I have one of the conditions that is listed as a disability in this question. I never know what to put. I thought it wouldn't matter if I just said, yes, that I'm disabled, since I literally have one of the conditions listed, but people online who work in hiring say I will most likely be discriminated against if I do that. Sure, it's illegal, but companies do that anyway, apparently.

I wonder if the answer to the disability question is something the AI uses when evaluating candidates, and if it has learned to just toss out anyone who says yes?


e.g. if you knew one or two sign languages, wouldn't you list it under languages? What if the job involves in-person communication with masses of people?


Skills: Fluent in American Sign Language.

High School: Florida School for the Deaf and the Blind.

Other Experience: President of Yale Disability Awareness Club (2009-2011).


Isn't the entire point of an AI tool 'discrimination'?


Like many words in the English language, “discrimination” has multiple meanings.

From Webster:

1. The act of discriminating.

2. The ability or power to see or make fine distinctions; discernment.

3. Treatment or consideration based on class or category, such as race or gender, rather than individual merit; partiality or prejudice.

You are talking about 2. The article is talking about 3.

3. is illegal in hiring. 2. is not.


If you make a decision based on 2, you are doing 3.

It's just that simple. 2 creates categories implicitly.


That is not how the courts interpret it.


If the ultimate standard is disparate impact, that's where it goes.


ML is a technique for discovering and amplifying bias.

Applying ML to hiring shows a profound lack of awareness of both ML and HR. Especially using previous hiring decisions as a training set. Like using a chainsaw to fasten trees to the ground.


for instance, I find mass surveillance intolerable and it makes me completely uninterested in my work.



This sort of reminds me of the story about an HR algorithm that ended up being discriminatory because it was trained using existing/past hiring data.. so it was biased toward white men.

Was it Amazon?

Anyway, this feels different to me, IIRC you can't ask disability related questions in hiring aside from the "self identify" types at the end? So how would a ML model find applicants with any kind of disability unless it was freely volunteered in a resume/CV?

Or is that the advisory? "Don't do this?"


> how would a ML model find applicants with any kind of disability unless it was freely volunteered in a resume/CV?

A few off the top of my head:

(1) Signals gained from ways that a CV is formatted or written (e.g. indicating dyslexia or other neurological variances, especially those comorbid with other physiological disabilities)

(2) If a CV reports short tenure at companies with long breaks in between (e.g. chronic illnesses or flare-ups leading to burnout or medical leave)

(3) There are probably many unintuitive correlates with regard to interests, roles acquired, and skillsets. Consider what experiences, institutions, skillsets and roles are more or less accessible to disabled folk than others.

(4) Most importantly: Disability is associated with lower education and lower economic opportunity, therefore supposed markers of success ("merit") in CVs may only reflect existing societal inequities. *

* This is one of the reasons meritocratic "blind" hiring processes are not as equitable as they might seem; they can reflect + re-entrench the current inequitable distribution of "merit".


>If a CV reports short tenure at companies with long breaks in between (e.g. chronic illnesses or flare-ups leading to burnout or medical leave)

This is a case where it may benefit a candidate to disclose any disabilities leading to such an erratic employment pattern. I don’t proceed with candidates who cannot explain excessively frequent job hops because it signals that they can’t hold a job due to factors I’d want to avoid hiring, like incompetence or a difficult personality. It’s a totally different matter if the candidate justified their erratic employment due to past medical issues that have since been treated.


>medical issues that have since been treated

And what if they haven't been? Disability isn't usually a temporary thing or even necessarily medical in nature (crucial to see disability as a distinct axis from illness!). Hiring with biases against career fluctuations is, I'm afraid to point out, inherently ableist. And it should not be beholden on the individual to map their experienced inequities and difficulties across to every single employer.


I think the point of this guidance is that "hiring AI" is not actually intelligent and will not be able to read and understand a note about disability on a resume. It will just dumbly match date ranges to an ideal profile and throw out resumes that are too far off.


>* This is one of the reasons meritocratic "blind" hiring processes are not as equitable as they might seem; they can reflect + re-entrench the current inequitable distribution of "merit".

they are not meant to be "equitable". they're meant to provide equality of opportunity, not equality of outcome


Oh agreed! Sorry about mixed terminology. Though they don't really provide "equality of opportunity" either :/ People w/ more privilege, at the starting line, will have more supposed 'merit' and therefore the CV-blindness only reflects existing inequalities from wider society. A different approach might be quotas and affirmative action.


I think the poster is arguing that the things we call merit reflect the ability to do the job well. Any system of hiring has to consider the ability to hire the best person for the job. Quotas are an open admission we can no longer do this. Affirmative action is trickier, as some affirmative action can be useful in correcting bias and can actually improve hiring. Too much once again steers us away from the best person for the job.

This is important and tricky: if we have across-the-board decreases in hiring the best person for the job, we end up with a less productive economy. This means our hiring practices directly compete against other aims like solving poverty.


> So how would a ML model find applicants with any kind of disability unless it was freely volunteered in a resume/CV?

In machine learning this happens all the time! Stopping models from learning this from the most surprising sources is an active area of research. Models are far more creative in finding these patterns than we are.

It can learn that people with disabilities tend to also work with accessibility teams. It can learn that you're more likely to have a disability if you went to certain schools (like a school for the blind, even if you and I wouldn't recognize the name). Or if you work at certain companies or colleges who specialize in this. Or if you publish an article and put it on your CV. Or if you link to your github and the software looks there as well for some keywords. Or if among the keywords and skills that you have you list something that is more likely to be related to accessibility. I'm sure these days software also looks at your linkedin, if you are connected with people who are disability advocates you are far more likely to have a disability.

> Or is that the advisory? "Don't do this?"

Not so easy. Algorithms learn this information internally and then use it in subtle ways. Like they might decide someone isn't a good fit and that decision may in part be correlated with disability. Disability need not exist anywhere in the system, but the system has still learned to discriminate against disabled people.


https://beta.ada.gov/ai-guidance/

> For example, some hiring technologies try to predict who will be a good employee by comparing applicants to current successful employees. Because people with disabilities have historically been excluded from many jobs and may not be a part of the employer’s current staff, this may result in discrimination.

> For example, if a county government uses facial and voice analysis technologies to evaluate applicants’ skills and abilities, people with disabilities like autism or speech impairments may be screened out, even if they are qualified for the job.

> For example, an applicant to a school district with a vision impairment may get passed over for a staff assistant job because they do poorly on a computer-based test that requires them to see, even though that applicant is able to do the job.

> For example, if a city government uses an online interview program that does not work with a blind applicant’s computer screen-reader program, the government must provide a reasonable accommodation for the interview, such as an accessible version of the program, unless it would create an undue hardship for the city government.


"Using AI tools for hiring"...this is when i like to remind myself that google, basically at some point in the last 12-24 months was like "OH CRAP, we forgot to tell our robots about black people!". Like, I'm not saying google is at the forefront of ML - maybe it is, but it sure as hell is out in front somewhere and more to the point most companies are likely not gonna be using cutting edge technology for this stuff. EVEN GOOGLE, admitted their ML for images is poorly optimized for POC's, i hate to think what some random ML algorithm used by company X thinks about differently abled peoples


This is an unreasonable generalization. Dark skin does have a direct impact on image processing. Nothing like this exists in hiring.



There are companies selling products which screen hiring candidates based on video of them talking. Ostensibly for determining personality traits or whatever. So yes, this literally exists in hiring.


While image processing of dark skin may not be germane to AIs doing hiring, the idea that unintentional discrimination from ML models could occur in the context of hiring is certainly worth considering, and I believe it's the entire point of the technical assistance document released today.


Mate, you ever hear about Amazon's hiring AI?



