Will AI Create More Problems Than It Solves in Recruiting?

February 04, 2021

Many businesses will need to make rapid hiring decisions as they reopen during the pandemic. With limited time, applicants who may not be a good fit, and an increased focus on finding qualified diverse candidates, some companies are turning to artificial intelligence (AI) algorithms to help field applications. While AI can help speed up the review process and reduce personal bias, it’s not without its own set of weaknesses. Here is a quick overview of the benefits and pitfalls of incorporating AI into recruiting.

  • Human resources departments can use AI at several stages of an employee’s career, including recruiting, selection, onboarding, training, performance management, promotion, retention, and benefits. AI can find candidates, including passive candidates, and review applications and resumes faster than people can. It may also be able to predict which candidates are most likely to succeed in a job and remain employed by comparing data about an applicant to a model of a successful employee. AI can interview candidates and (sometimes using facial and voice recognition) rank them, test candidates and compare their answers to those given by high-performing employees, and communicate with candidates throughout the interview process. But while an algorithm can determine how closely a candidate’s resume matches a job description, it may not be able to assess more subjective characteristics, such as a candidate’s grit or integrity.
  • An employee who would be good for one job at one company might not be good for another job at another company under a different manager working with a different team. So companies that look to this type of algorithm will need to customize it for their own needs. This means feeding the algorithm a lot of data, such as applicant tracking data from the hiring process and performance reviews and compensation data for those who are hired, so it can continue to learn and refine its search parameters. The lack of data on those who were not hired, however, makes it impossible to determine which screened-out applicants might have made excellent employees.
  • The use of algorithms to mine a candidate’s social media information can raise privacy concerns and increase the potential for discrimination. The use of AI in hiring may also allow applicants to game the system with their applications and resumes, to the detriment of those who do not.
  • Some AI companies claim that AI will reduce any implicit bias of interviewers, resulting in increased diversity. However, there is also a concern that AI can reproduce systemic patterns of discrimination. Since algorithms are backward-looking and learn from past data, the decisions made by AI programs may reflect and repeat past biases. If men or Caucasian employees had higher performance review scores in the past, when there were fewer women and minorities in the workplace, does that mean the algorithm thinks a company should hire men over women, and white people over minorities?
  • It’s important to make sure AI programs are audited and that corrections are made if unintended biases are observed.
  • This concern also raises the potential to turn what is typically an individual failure-to-hire claim into a class action, with the algorithm serving as a common issue to unite the class. Vendors and companies that use AI need to be prepared to defend their use of algorithms in hiring and to ensure that there is no implicit or unintended bias. Companies that hire AI vendors should carefully negotiate their contracts, seeking representations as to the product’s fairness, as well as indemnification and cooperation provisions in the event of a lawsuit or government investigation.
  • The public continues to distrust the use of AI in hiring. Employees increasingly seek transparency, particularly in promotion and pay decisions, and if AI is used, they will want to understand how it factors into those decisions.
  • Under the Biden administration, the Equal Employment Opportunity Commission (EEOC) will likely step up its enforcement efforts in the area of AI and machine-learning–driven hiring tools. Companies are now watching to see whether the EEOC will compel disclosure of both the proprietary algorithms and the underlying data sets.
  • Since the use of AI appears to be on the rise, companies should adopt a few best practices when using the tool. Seek to understand an algorithm’s weaknesses to protect against its potential biases. Make sure the systems can be audited and corrections made. Consider adopting AI compliance policies with a view to bias prevention, the proper use of AI, and a plan to mitigate biases if they are uncovered. And consider arbitration agreements with class action waivers.

More information can be found in this webinar, originally part of Morgan Lewis’s AI Boot Camp.