AI, Disparate Impact, and EEOC Changes
Source: workforcebulletin.com
AI's Disparate Impact Liability
Title VII of the Civil Rights Act, the Age Discrimination in Employment Act, and the Americans with Disabilities Act all prohibit employers from using facially neutral practices that disproportionately harm individuals based on protected categories, even when the discrimination is unintentional. The Equal Employment Opportunity Commission (EEOC) is the federal agency responsible for investigating claims of such unintentional discrimination, known as disparate impact.
According to an internal memo obtained by Bloomberg Law, the EEOC intends to close all pending disparate impact discrimination charges toward the end of September 2025. After these closures, the EEOC is likely to issue right-to-sue letters, allowing claimants to pursue their cases in federal court. Charges that combine both disparate impact and disparate treatment claims, however, will likely remain with the EEOC.
The EEOC's stance follows President Donald J. Trump's Executive Order, “Restoring Equality of Opportunity and Meritocracy.” The Order characterized the disparate impact theory of discrimination as inconsistent with the Constitution and as a threat to the American Dream's foundation of merit and equal opportunity. It directs the EEOC and other federal agencies to deprioritize enforcement of statutes and regulations tied to disparate impact liability and to reexamine current investigations and suits that rely on it. We have previously covered this Order in depth.
Algorithmic Discrimination
To date, businesses using AI in the workplace have focused on determining whether the AI's results unintentionally discriminate against individuals based on protected categories. For example, if an AI developer trains its tool on biased data, the tool might disproportionately and unintentionally subject employees or applicants to certain employment decisions based on race, gender, age, disability status, or other protected categories, even where the tool never receives those categories as inputs, as the sketch below illustrates.
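To make that mechanism concrete, here is a purely hypothetical Python sketch of how biased historical labels can flow through a proxy feature into a screening tool's recommendations. All data, feature names (including the made-up zip codes), and thresholds are invented for illustration; this does not describe any real product, dataset, or vendor.

```python
# Hypothetical sketch: biased training data propagating through a proxy feature.
import random

random.seed(0)

# Synthetic historical hiring records: "zip_code" acts as a proxy for a
# protected category, and past (biased) decisions favored one zip code.
def make_record():
    group = random.choice(["A", "B"])                 # protected category (never shown to the model)
    zip_code = "11111" if group == "A" else "22222"   # proxy feature the model CAN see
    skill = random.gauss(0, 1)                        # identically distributed across groups
    hired = skill > (-0.5 if group == "A" else 0.5)   # biased historical labels
    return {"group": group, "zip": zip_code, "hired": hired}

history = [make_record() for _ in range(10_000)]

# A naive "model": score applicants by the historical hire rate of their zip code.
zip_rates = {}
for z in ("11111", "22222"):
    rows = [r for r in history if r["zip"] == z]
    zip_rates[z] = sum(r["hired"] for r in rows) / len(rows)

# Screen new applicants: recommend anyone whose zip's historical rate exceeds 50%.
applicants = [make_record() for _ in range(1_000)]
for g in ("A", "B"):
    rows = [r for r in applicants if r["group"] == g]
    rate = sum(zip_rates[r["zip"]] > 0.5 for r in rows) / len(rows)
    print(f"Group {g}: recommended at rate {rate:.0%}")
```

Although the protected category never appears as a model input, the proxy reproduces the historical disparity in the tool's recommendations, which is precisely the pattern a disparate impact theory targets.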
When workplace AI relies on protected categories to produce its results, it may be engaging in “algorithmic discrimination,” which can be defined as the use of an AI system in a way that violates any applicable federal, state, or local anti-discrimination law. Employers may face liability for algorithmically discriminatory AI even if the discrimination is unintentional.
The case of Mobley v. Workday, which is currently pending in the U.S. District Court for the Northern District of California, reminds us that AI tools used in employment decisions might be evaluated under a disparate impact theory. This is especially true if there's a reasonable inference that an AI algorithm relies on protected characteristics.
Even if the EEOC stops investigating unintentional discrimination, individuals can still file a charge with the EEOC, obtain a right-to-sue letter, file a complaint in federal court, and potentially prevail on disparate impact claims against employers. Employers therefore may still face liability for unintentional discrimination if a plaintiff successfully challenges a discriminatory employment practice in federal court.
Furthermore, EEOC action will not affect disparate impact liability under state and local laws. A number of current and pending laws specifically require employers to conduct disparate impact analyses to confirm that AI systems used in employment-related decision-making do not produce disparate outcomes; a minimal sketch of such an analysis appears below.
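As one illustration, here is a minimal sketch of the kind of selection-rate comparison such an analysis involves, using the EEOC's “four-fifths rule” from the Uniform Guidelines on Employee Selection Procedures (29 C.F.R. § 1607.4(D)) as a screening heuristic. The applicant counts are hypothetical, and the 80% threshold is a rule of thumb for flagging potential adverse impact, not a conclusive legal standard.

```python
# Hypothetical disparate impact screen using the four-fifths rule:
# a group's selection rate below 80% of the highest group's rate is
# generally regarded as evidence of adverse impact.
selections = {
    # group: (applicants, selected) -- invented counts for illustration
    "Group A": (200, 120),
    "Group B": (180, 63),
}

rates = {g: sel / apps for g, (apps, sel) in selections.items()}
highest = max(rates.values())

for group, rate in rates.items():
    ratio = rate / highest  # impact ratio relative to the most-selected group
    flag = "POTENTIAL ADVERSE IMPACT" if ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rate:.1%}, impact ratio {ratio:.2f} -> {flag}")
```

In this invented example, Group B's impact ratio (0.58) falls below the 0.8 threshold and would warrant closer review under the applicable law.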
As we have previously discussed, state and local jurisdictions are likely to take the lead in shaping AI's regulatory future. Employers must adhere to all applicable state and local laws prohibiting the use of AI and automated employment decision-making tools that unintentionally discriminate. If you have questions regarding AI use or implementation at your workplace, please contact the authors of this blog or your Epstein Becker & Green, P.C. attorney.