
What Is Adverse Impact?

Adverse impact (also called disparate impact) occurs when a recruitment or selection practice that is neutral on its face disproportionately excludes members of a protected group — defined by race, gender, age, disability, or other characteristics. The EEOC's 4/5ths rule (or 80% rule) is the standard test: if the selection rate for a protected group is less than 80% of the rate for the group with the highest selection rate, adverse impact is indicated. Employers must be able to demonstrate business necessity for any practice that produces adverse impact.

Category: Compliance & Data · Tags: compliance, adverse-impact, disparate-impact, selection · Updated March 2026

TL;DR

Adverse impact (also called disparate impact) occurs when a facially neutral employment selection procedure -- a test, interview question, background check policy, or physical requirement -- results in a substantially lower selection rate for a protected group (defined by race, sex, religion, national origin, disability, or age) compared to the group with the highest selection rate. Under the Uniform Guidelines on Employee Selection Procedures (29 CFR Part 1607), the four-fifths (80%) rule is the primary operational standard for detecting adverse impact. A selection rate below 80% of the highest group's rate is a signal requiring employer investigation and potential justification. Staffing agencies that design or administer selection processes for client companies share exposure if those processes produce adverse impact.

What Adverse Impact Requires Staffing Agencies to Do

The Uniform Guidelines on Employee Selection Procedures (29 CFR Part 1607), issued jointly by the EEOC, Department of Labor, Department of Justice, and Civil Service Commission, apply to any selection procedure used as a basis for any employment decision -- including hiring, referral, promotion, and assignment. Staffing agencies that administer their own selection procedures for placing workers, or that design client-directed selection processes, are covered by the Guidelines.

The obligation is twofold. First, agencies must monitor their own applicant flow data -- how many candidates from each protected group apply, advance through screening stages, and receive placement -- to identify statistically significant selection rate disparities. If a disparity is identified, the agency must investigate whether it is caused by a particular selection step, determine whether that step is job-related and consistent with business necessity, and explore whether an equally valid, less discriminatory alternative exists. Second, agencies must not implement client-directed selection criteria that they know or should know will produce adverse impact, without first determining that the criteria are job-related.
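The component-level monitoring described above can be sketched in code. The following is an illustrative sketch only, not a compliance tool: it computes each group's pass rate at every screening stage, reflecting the Guidelines' requirement to analyze each component of the selection process rather than only the final outcome. The stage names, group labels, and counts are hypothetical.

```python
# Illustrative applicant-flow monitoring sketch (hypothetical data).
# flow maps each screening stage to, per group, (entered, advanced) counts.

def stage_pass_rates(flow):
    """Return {stage: {group: advanced / entered}} for each component."""
    return {
        stage: {g: advanced / entered for g, (entered, advanced) in groups.items()}
        for stage, groups in flow.items()
    }

# Hypothetical two-stage process with two applicant groups
flow = {
    "resume_screen": {"group_a": (200, 120), "group_b": (100, 45)},
    "skills_test":   {"group_a": (120, 60),  "group_b": (45, 15)},
}

for stage, by_group in stage_pass_rates(flow).items():
    print(stage, by_group)
```

Comparing rates stage by stage makes it possible to pinpoint which selection step, if any, drives an overall disparity.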

How to Measure Adverse Impact: The Four-Fifths Rule

The four-fifths rule provides the operational benchmark. Take the total number of candidates from each relevant group who are selected, divide by the total who applied, and compare. If the selection rate for any protected group is less than 80% of the selection rate of the group with the highest rate, adverse impact is indicated. A worked example: 200 white applicants, 60 selected -- selection rate 30%. 100 Black applicants, 18 selected -- selection rate 18%. The ratio is 18/30 = 60%, which falls below the 80% threshold. Adverse impact is indicated for the Black applicant group.
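The calculation above is simple arithmetic, and a small sketch makes the mechanics explicit. This is illustrative only; the group names and counts mirror the worked example, and the function names are ours, not a standard API.

```python
# Four-fifths (80%) rule check, using the worked example above.
# groups maps each group name to (applicants, selected).

def selection_rates(groups):
    """Map each group to its selection rate: selected / applied."""
    return {name: selected / applied for name, (applied, selected) in groups.items()}

def four_fifths_check(groups, threshold=0.80):
    """Return (impact_ratios, flagged_groups) relative to the highest rate."""
    rates = selection_rates(groups)
    highest = max(rates.values())
    ratios = {name: rate / highest for name, rate in rates.items()}
    flagged = [name for name, ratio in ratios.items() if ratio < threshold]
    return ratios, flagged

groups = {"white": (200, 60), "Black": (100, 18)}
ratios, flagged = four_fifths_check(groups)
print(ratios)   # Black impact ratio: 0.18 / 0.30 = 0.60
print(flagged)  # ['Black'] -- below the 80% threshold
```

The same check extends to any number of groups, since each group's rate is compared against whichever group's rate is highest.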

The four-fifths rule is a starting point, not a conclusive legal finding. For small sample sizes, a statistically significant result by chi-square test or Fisher's exact test carries more weight than a raw ratio. Courts and the EEOC consider both measures, and agencies defending against adverse impact claims should be prepared to engage on both. The Guidelines require records to be maintained separately for race, sex, and ethnic group for each component of the selection process -- not just the final selection outcome -- making component-level analysis necessary for compliance.
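To complement the raw ratio with a significance measure, a Pearson chi-square test on the 2x2 selection table can be computed with the standard library alone, as sketched below using the same numbers as the worked example. For very small cell counts, Fisher's exact test (available in statistics packages such as scipy.stats.fisher_exact) is the more appropriate check; this sketch covers only the chi-square case.

```python
import math

def chi_square_2x2(a, b, c, d):
    """Pearson chi-square for the 2x2 table [[a, b], [c, d]].

    Rows are groups, columns are selected / not selected.
    Returns (statistic, p_value); the p-value is the chi-square
    survival function at 1 degree of freedom, erfc(sqrt(x / 2)).
    """
    n = a + b + c + d
    stat = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
    p = math.erfc(math.sqrt(stat / 2))
    return stat, p

# Worked example: 60 of 200 white applicants selected, 18 of 100 Black applicants
stat, p = chi_square_2x2(60, 140, 18, 82)
print(f"chi2 = {stat:.2f}, p = {p:.4f}")  # significant at the 0.05 level
```

In this example both measures point the same way: the impact ratio falls below 80% and the disparity is statistically significant, which is the pattern that most strongly supports an adverse impact finding.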

Adverse Impact vs Indirect Discrimination (US vs UK)

The US adverse impact doctrine and the UK's indirect discrimination concept under the Equality Act 2010 (Section 19) are functionally parallel but legally distinct. In the US, adverse impact analysis is tied to the specific statistical framework of the Uniform Guidelines and the four-fifths rule. The employer can defend by demonstrating job-relatedness and business necessity; if it does, the burden shifts to the plaintiff to show a less discriminatory alternative. In the UK, indirect discrimination occurs when an employer applies a provision, criterion, or practice (PCP) that is neutral on its face but puts persons sharing a protected characteristic at a particular disadvantage compared to others. The employer can justify the PCP if it is a proportionate means of achieving a legitimate aim.

The UK protected characteristics that can give rise to indirect discrimination claims include age, disability, gender reassignment, marriage and civil partnership, pregnancy and maternity, race, religion or belief, sex, and sexual orientation. UK staffing agencies should treat indirect discrimination analysis as a continuous obligation -- reviewing job criteria, application questions, and assessment methods for PCPs that may disadvantage protected groups.

Enforcement and Liability

The EEOC investigates adverse impact through two pathways: individual charges (where a rejected candidate files a charge and statistical analysis supports a pattern) and systemic investigations (where the EEOC identifies potential pattern-or-practice discrimination based on its own data analysis or aggregated charges against an employer). EEOC systemic investigations can result in class-wide findings affecting hundreds or thousands of applicants, with back pay and make-whole remedies calculated across the class.

AI-driven hiring tools are the current frontier of adverse impact enforcement. The EEOC's 2023 guidance document on artificial intelligence and algorithmic hiring makes clear that employers using automated screening, video interview scoring, or predictive match algorithms are responsible for adverse impact the tools produce -- the fact that a vendor developed the tool does not shield the employer or staffing agency. Staffing agencies adopting AI sourcing or screening tools should contractually require vendors to provide bias audit data, and should independently test tool output against their own applicant flow data before deployment at scale.

Frequently Asked Questions

How do you calculate adverse impact using the four-fifths rule?
Divide the number of selected candidates from each group by the total who applied to get each group's selection rate. Then divide each group's rate by the highest group's rate. If the result for any protected group falls below 0.80 (80%), adverse impact is indicated for that group. For example, if 30% of white applicants are selected and 18% of Black applicants are selected, the ratio is 18/30 = 60% — below the 80% threshold. For small sample sizes, courts and the EEOC also consider chi-square tests or Fisher's exact test alongside the raw ratio.
Are staffing agencies liable for adverse impact caused by their clients' hiring criteria?
Yes. Staffing agencies that design or administer selection processes for client placements are covered by the Uniform Guidelines on Employee Selection Procedures. An agency cannot implement client-directed selection criteria it knows or should know will produce adverse impact without first determining those criteria are job-related. Accepting a discriminatory client order does not transfer liability to the client — the agency faces direct exposure for the selection processes it implements.
Do AI screening tools create adverse impact risk?
Yes, and regulators are increasingly focused on this. The EEOC's 2023 guidance on algorithmic hiring makes clear that employers and staffing agencies are responsible for adverse impact produced by automated screening tools, even when a vendor built the tool. New York City Local Law 144, in force since July 2023, requires annual bias audits for automated employment decision tools, with results published publicly. Agencies adopting AI sourcing or screening tools should require vendors to provide bias audit data and independently test tool output against their own applicant flow data.