TRYING TO KEEP UP WITH THE ROBOTS: The EEOC Releases Guidance on Artificial Intelligence and Title VII

Patricia Tsipras

May 19, 2023

Yesterday, the United States Equal Employment Opportunity Commission (EEOC) released technical assistance entitled Assessing Adverse Impact in Software, Algorithms, and Artificial Intelligence Used in Employment Selection Procedures Under Title VII of the Civil Rights Act of 1964 (Title VII).[1]

“As employers increasingly turn to AI and other automated systems, they must ensure that the use of these technologies aligns with the civil rights laws and our national values of fairness, justice and equality,” said EEOC Chair Charlotte A. Burrows.

In general, Title VII prohibits employment discrimination based on race, color, religion, sex, or national origin.

The EEOC’s new technical assistance focuses on disparate (or adverse) impact, a key concept under Title VII.  Disparate impact occurs when a neutral test or selection procedure, which is not job-related or consistent with business necessity, has the effect of disproportionately excluding persons based on race, color, religion, sex, or national origin.

Many employers rely – at various stages of the employment process – on software that incorporates algorithmic decision-making.  The EEOC cited some examples, including:

  • resume scanners that prioritize applications using certain keywords
  • virtual assistants or chatbots that ask job applicants about their qualifications and reject applicants who do not meet pre-defined criteria
  • video interviewing software that evaluates candidates based on their facial expressions and speech patterns

Use of such software could lead to disparate impact claims.  And employers can be liable on such claims even if the software was designed or administered by a third party.

So what is an employer to do to protect itself?

First, review the EEOC’s Uniform Guidelines on Employee Selection Procedures under Title VII.  The EEOC adopted the Uniform Guidelines in 1978, and they help employers determine if their tests and selection procedures are lawful under a Title VII disparate impact analysis.

Second, do your due diligence with your software developer or administrator to determine whether they have taken steps to evaluate the algorithmic decision-making tool.  If they indicate that the tool may result in a substantially lower selection rate for individuals of a particular race, color, religion, sex, or national origin, then determine (1) if use of the tool is job-related and consistent with business necessity; or (2) if alternatives to the tool exist that meet your needs and have less of a disparate impact.

Third, conduct self-audits on a regular basis to determine whether your employment practices have a disproportionately large negative effect on a protected group.  If they do, change that practice going forward.

We can help.

 

This article is designed to provide one perspective regarding recent legal developments, and is not intended to serve as legal advice.  Always consult an attorney with specific legal issues.

 

[1] The EEOC issued guidance regarding artificial intelligence and the Americans with Disabilities Act in May 2022.

 