Top Issues Employers Should Consider Before Using AI Recruiting Tools

Category: Federal & State Compliance

Written by Reed L. Russell and Raquel Ramirez Jefferson from Phelps Dunbar LLP on March 6, 2024

AI presents opportunities for employers to streamline the hiring process, but it can also lead to added risks. Businesses considering using AI tools in recruiting should start by addressing data privacy and bias monitoring concerns.

Data Privacy

Many states have passed or are proposing legislation to protect user data as AI applications expand. At least 10 states directly mention AI in their data privacy regulations. These laws generally require employers to:

  • Notify applicants that AI may be used.
  • Provide information explaining how the AI works and what general characteristics it will use to evaluate applicants.
  • Obtain the applicant’s consent to be evaluated by the AI.

Some jurisdictions, including Illinois, Maryland and New York City, also directly regulate AI in the hiring process, with more likely to follow. Illinois’ law applies to video-recorded interviews. Maryland’s law applies to facial recognition templates. New York City requires bias audits and requires that the results of those audits be posted on, or linked from, the employer’s website. The federal government is also evaluating AI regulations that address data privacy concerns.

Employers with operations in multiple states should make sure their AI recruiting tools meet each state’s requirements. It’s also important to document the implementation of these requirements by requesting copies of explanation, consent and opt-out language from the software developers providing the tools.

Bias Monitoring

Employers should also consider developing a methodology to monitor bias. EEOC’s latest strategic enforcement plan established AI-related employment discrimination as a top priority, which could bring added scrutiny to companies using these tools in recruiting.

Certain AI recruiting systems, such as those that are test-based, face greater bias risks than others. Employers can use EEOC’s 4/5ths rule as a back-of-the-envelope method to make an initial determination of whether a tool creates an adverse impact on a protected group. Under that rule of thumb, adverse impact is indicated when one group’s selection rate is less than four-fifths (80%) of the selection rate of the group with the highest rate. But EEOC cautions that this rule will not protect an employer if a proper statistical analysis demonstrates adverse impact, so employers should conduct such an analysis before rolling out an AI tool for regular use.
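The 4/5ths back-of-the-envelope check described above can be sketched in a few lines. This is an illustration only, with hypothetical group labels and applicant counts; it is not a substitute for the proper statistical analysis EEOC describes.

```python
def selection_rate(selected, applicants):
    """Fraction of applicants in a group who were selected."""
    return selected / applicants

def four_fifths_check(rates):
    """EEOC 4/5ths rule of thumb: flag any group whose selection
    rate is less than 80% of the highest group's rate.
    `rates` maps a group label to its selection rate."""
    highest = max(rates.values())
    return {group: rate / highest < 0.8 for group, rate in rates.items()}

# Hypothetical screening results from an AI recruiting tool:
# group_a: 48 of 80 applicants advanced; group_b: 12 of 40 advanced.
rates = {
    "group_a": selection_rate(48, 80),  # 0.60
    "group_b": selection_rate(12, 40),  # 0.30
}
flags = four_fifths_check(rates)
# group_b's rate (0.30) is only 50% of group_a's (0.60), so the
# 4/5ths rule flags a potential adverse impact on group_b.
```

A flagged group under this quick check signals the need for a fuller statistical review, not a final legal conclusion.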

EEOC recently issued guidance on auditing AI programs applied to hiring processes to prevent adverse impacts under Title VII and comply with the Americans with Disabilities Act. It advises employers to:

  • Ask AI tool vendors, at a minimum, if steps have been taken to evaluate whether use of the tool causes a substantially lower selection rate for individuals with a characteristic protected by Title VII.
  • Assess the hiring program to determine whether the selection rate for one group is “substantially” different than the selection rate of another group.
  • Ensure the AI tool does not unintentionally screen out individuals with disabilities.
  • Monitor AI outputs for adverse impacts on individuals of a particular race, color, religion, sex or national origin, or on individuals with a particular combination of those characteristics.
  • Make sure the AI tool does not violate rules on disability-related inquiries and medical examinations.
  • Provide reasonable accommodations for job applicants.

Once an AI recruiting tool is implemented, employers should regularly review the tool’s evaluation processes and output to stay on top of quickly evolving data privacy and discrimination protections.