LawFlash

EEOC Releases Guidance on Algorithms, AI, and Disability Discrimination in Hiring

May 20, 2022

The US Equal Employment Opportunity Commission (EEOC) released guidance on May 12 addressing the application of the Americans with Disabilities Act (ADA) to employer use of algorithms and artificial intelligence (AI) during the hiring process. Produced as part of the Artificial Intelligence and Algorithmic Fairness Initiative launched in October 2021, the guidance reflects the agency’s growing interest in employer use of AI, including machine learning, natural language processing, and other emerging technologies in employment decisions.

The EEOC’s AI initiative is a key component of the agency’s efforts to advance its systemic work, according to April 2022 testimony by Chair Charlotte Burrows before the House Education and Labor Subcommittee on Civil Rights and Human Services. The initiative aims to educate applicants, employees, employers, and technology vendors about the legal requirements in this area and to ensure that new hiring tools do not perpetuate discrimination.

The guidance is the first substantive output of the initiative. It provides key insights into the EEOC’s thinking on these tools and into the agency’s potential enforcement priorities in this area moving forward.

DEFINING AI AND ALGORITHMIC DECISION TOOLS

Definitions are crucial in this area given the growth of these technologies and their increasing use at different stages of the employment process. The EEOC’s guidance provides extended definitions of three key terms (software, algorithms, and artificial intelligence) along with analysis of how they can be used in the workplace. While this document focuses on the ADA, the EEOC is expected to apply these definitions when analyzing the impact of these tools in other areas of employment discrimination, such as race or gender bias.

The definitions employed by the EEOC are quite broad. The guidance defines “software” as information technology programs that tell computers how to perform a given task or function. Examples of software used in hiring include resume-screening software, hiring software, workflow and analytics software, video interviewing software, and chatbot software.

“Algorithms” encompass any set of instructions followed by a computer to accomplish an identified end. This can include any formula used by employers for ranking, evaluating, rating, or making other decisions about job applicants and employees.
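To make the breadth of this definition concrete, the sketch below (a minimal Python example using criteria we chose for illustration, not drawn from the guidance) shows that even a few lines of conventional, human-written ranking logic qualify as an “algorithm” in this sense:

```python
# Hypothetical resume-ranking rule. Even simple hand-written logic like
# this is an "algorithm" under the EEOC's broad definition: a set of
# instructions a computer follows to rank or rate applicants.

def score_applicant(resume: dict) -> float:
    """Score an applicant using illustrative, employer-chosen criteria."""
    score = 2.0 * resume.get("years_experience", 0)
    if "project management" in resume.get("skills", []):
        score += 5.0
    return score

applicants = [
    {"name": "A", "years_experience": 3, "skills": ["project management"]},
    {"name": "B", "years_experience": 6, "skills": []},
]

# Ranking applicants by score is a decision about applicants reached by
# following the instructions above.
ranked = sorted(applicants, key=score_applicant, reverse=True)
print([a["name"] for a in ranked])  # ['B', 'A']
```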

“Artificial intelligence” refers to a “machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations or decisions influencing real or virtual environments.” This covers machine learning, computer vision, natural language processing and understanding, intelligent decision support systems, and other autonomous systems used to make employment decisions or set out the criteria for a human to make employment decisions.

The guidance notes that employers may use tools that include any combination of these three general terms. For instance, an employer may utilize resume screening software that relies on an algorithm created by human design or an algorithm that is supplemented or refined by AI analysis of data. 

POTENTIAL ADA VIOLATIONS FROM EMPLOYER USE OF AI AND ALGORITHMIC TOOLS

The guidance discusses three areas where an employer’s use of algorithmic or other technology decisionmaking tools could violate the ADA:

  • Use of tools that unlawfully screen out applicants or employees on the basis of disability
  • Failure to provide reasonable accommodation in relation to the tools
  • Use of tools that violate ADA restrictions on disability-related inquiries and medical examinations

Tools That Unlawfully Screen Out Persons With Disabilities

The guidance explains that a tool might “screen out” an individual on the basis of disability if the individual’s disability prevents the individual from meeting the selection criteria implemented by the tool or results in a negative rating from the tool based on those criteria. If the individual loses a job opportunity as a result, a violation of the ADA may occur. Examples include screens that automatically eliminate applicants with significant gaps in their employment history (which may be the result of a disability) and tools that measure and assess physical or mental traits, such as speech patterns or the ability to solve certain games, that a disability may impact.
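As a purely hypothetical illustration (the guidance describes this kind of screen but contains no code, and the 12-month cutoff below is our own assumption), even a simple, facially neutral employment-gap filter can screen out an applicant whose gap resulted from disability-related medical leave:

```python
from datetime import date

# Hypothetical screen-out rule: reject any applicant with an employment
# gap longer than 12 months. The rule never asks why the gap exists, so a
# gap caused by disability-related medical leave is treated like any other.
MAX_GAP_MONTHS = 12  # illustrative cutoff, not from the EEOC guidance

def months_between(start: date, end: date) -> int:
    return (end.year - start.year) * 12 + (end.month - start.month)

def passes_gap_screen(employment_periods: list[tuple[date, date]]) -> bool:
    """Return False if any gap between consecutive jobs exceeds the cutoff."""
    periods = sorted(employment_periods)
    for (_, prev_end), (next_start, _) in zip(periods, periods[1:]):
        if months_between(prev_end, next_start) > MAX_GAP_MONTHS:
            return False  # screened out automatically, with no human review
    return True

# An applicant with an 18-month gap for medical treatment is rejected
# even if otherwise fully qualified for the position.
history = [(date(2015, 1, 1), date(2018, 6, 1)),
           (date(2019, 12, 1), date(2022, 1, 1))]
print(passes_gap_screen(history))  # False
```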

The guidance notes, importantly, that employers may not rely on a vendor’s assessment that a tool is “bias free” for validation purposes. Such assessments may focus only on other protected characteristics, such as race or gender, without properly evaluating impact on the basis of disability. Further, unlike other protected characteristics, each disability is unique in terms of the limitations it imposes, so a general assessment of a tool is unlikely to cover all the potential ways a disability may interact with that tool. Finally, a vendor assessment may itself be invalid or poorly designed. As the ultimate decisionmaker, the employer is liable for the results produced by the tool and bears the responsibility for ensuring legal compliance.

Duty to Provide Reasonable Accommodation

The guidance reiterates that employers must consider reasonable accommodations for applicants or employees who need them in order to be rated fairly and accurately by an evaluation tool. These can include accessibility accommodations for persons who have difficulty taking tests or using tools due to dexterity limitations, or who require adaptive technologies, such as screen readers or closed captioning, to apply effectively. This obligation applies even if the employer has outsourced the evaluation or operation of the tool to a third party or vendor.

The guidance further explains that the ADA’s reasonable accommodation requirement may necessitate waiving the use of these tools in certain situations. AI and algorithmic tools are designed to measure an individual’s suitability for a particular position. Employers will need to consider requests for accommodation, including waiver, from applicants who are unable to meet the criteria used by a particular tool to measure fit but are otherwise able to show that they can perform essential job functions. This is the case even when the tools are validated for certain traits. As discussed above, the EEOC believes that the unique nature of each disability makes it possible for an individual to show that a generally validated screen still unlawfully screens that individual out on the basis of the individual’s particular limitations.

Disability-Related Inquiries and Medical Exams

The guidance also reaffirms that AI or algorithmic tools may not involve unlawful disability-related inquiries or medical examinations. The ADA bars employers from making disability-related inquiries or requiring medical examinations of applicants prior to a job offer. Once an offer is made, the employer may only make such inquiries or require such exams if they are “job-related and consistent with business necessity.” The guidance reminds employers that an assessment or algorithmic decisionmaking tool that explicitly requests medical information from applicants or can be used to identify an applicant’s medical condition could violate the ADA. Tools that assess broad personal traits (such as personality tests), however, will usually not violate this prohibition if they are not designed to reveal a specific diagnosis or condition.

‘PROMISING PRACTICES’ TO PREVENT DISCRIMINATION

The guidance recommends several practices that employers may use to reduce the chances of an AI or algorithmic tool violating the ADA. These include:

  • Continuously evaluating whether a tool may screen out persons with disabilities
  • Ensuring the tools are accessible for persons with visual, hearing, speech, or dexterity impairments
  • Providing robust explanations to applicants or employees regarding the traits or characteristics measured by a particular tool, the methods it uses to measure those traits or characteristics, and the disabilities, if any, that might potentially lower an assessment or screen out an individual
  • Clearly advertising the availability of reasonable accommodation, including alternative formats, waivers, and tests, for persons with disabilities as well as providing clear instructions for requesting such accommodations

These “promising practices” reinforce that the key to ADA compliance in this area will be gathering sufficient information to identify potential areas of bias and providing applicants with the necessary resources to request alternative forms of evaluation if they believe a disability may prevent fair or accurate evaluation.

Unfortunately, the EEOC does not provide much guidance regarding how employers can assess these tools for potential disability bias.
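One possible starting point, offered here purely as an illustration rather than as anything the EEOC endorses in the guidance, is a selection-rate comparison of the kind commonly used in disparate-impact analysis. The sketch below assumes the employer can identify a comparison group (for example, applicants who requested accommodations), and the 0.8 threshold is the familiar “four-fifths” rule of thumb, not a safe harbor:

```python
# Illustrative adverse-impact check, not an EEOC-prescribed test.
# Flags a tool for further review if one group's selection rate falls
# below a chosen fraction of a reference group's rate.

def selection_rate(selected: int, total: int) -> float:
    return selected / total if total else 0.0

def flag_for_review(group_selected: int, group_total: int,
                    reference_selected: int, reference_total: int,
                    threshold: float = 0.8) -> bool:
    """Return True if the group's rate is below threshold * reference rate."""
    group_rate = selection_rate(group_selected, group_total)
    reference_rate = selection_rate(reference_selected, reference_total)
    if reference_rate == 0.0:
        return False  # no basis for comparison
    return group_rate / reference_rate < threshold

# Example: 4 of 20 accommodation requesters selected (20%) vs. 50 of 100
# other applicants (50%); the ratio 0.4 is below 0.8, so flag for review.
print(flag_for_review(4, 20, 50, 100))  # True
```

As the guidance itself suggests, group-level statistics cannot capture the individualized nature of disability, so any such check would supplement, not replace, case-by-case review of accommodation requests and screen-out concerns.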

LOOKING FORWARD

The EEOC is focused on the use of AI in employment, particularly in hiring, and additional guidance on this topic is expected as a result of its AI initiative. Increased EEOC attention to discrimination claims arising from the use of these tools is also anticipated, consistent with the agency’s renewed focus on systemic matters in this area.

The EEOC is just one of several regulatory bodies interested in the application of these tools. California’s Fair Employment and Housing Council released draft regulations concerning employer use of “automated-decision systems” in March 2022. In November 2021, New York City passed a law requiring annual “bias audits” of automated decision tools used in hiring, and several other jurisdictions are actively considering measures on this topic.

Employers will need to monitor developments in this area closely given this increased regulatory activity. They should also closely evaluate the ways that existing or proposed tools may create the risk of an ADA violation based on the EEOC’s guidance. This will only grow in importance as employers increase their reliance on these methods to find and select the best applicants for positions in a tight labor market.

CONTACTS

If you have any questions or would like more information on the issues discussed in this LawFlash, please contact any of the following Morgan Lewis lawyers:

Washington, DC
Jocelyn R. Cuttino
Sharon Perley Masling

Philadelphia
Michael S. Burkhardt
W. John Lee
Larry L. Turner

Silicon Valley
Kannan Narayanan

New York
Ashley J. Hale
Douglas T. Schwarz
Samuel S. Shaulson
Kenneth J. Turnbull

Chicago
Jonathan D. Lotsoff