How Employers Can Address Challenges of AI in the Workplace

June 07, 2023

As evidenced by the rapid adoption of ChatGPT, which garnered 100 million users within two months of its November 2022 release, use of generative artificial intelligence (AI) is growing. Like traditional AI, generative AI can recognize patterns and make predictions, but it can also create new content, such as text, images, music, and video, in response to prompts. Employees are quickly finding uses for generative AI and other AI technology in the workplace, and employers should be prepared to foster this use in a way that protects both the business and its employees.

Benefits and Risks

AI can make time-consuming tasks more efficient. Employees increasingly use ChatGPT for work-related tasks such as creating icebreakers and meeting agendas, making presentations, and generating reports. AI can also assist in recruiting and hiring by drafting job descriptions, reviewing resumes, running video interviews, and automating follow-up emails with candidates. What’s more, if properly designed and validated, AI can serve as a check against human bias in hiring.

However, employers should be aware that use of AI tools is not without risks, including the following:

  • Quality Control: Results produced by AI tools can vary in accuracy. 
  • Contracts: Agreements with customers or clients may restrict the ability to share information with AI tools.
  • Privacy: Information entered into prompts for tools hosted by third parties could become public and lose its confidential status.
  • Intellectual Property: Questions are already emerging about who owns content created by generative AI tools and whether proprietary data may be used to train them.
  • Potential Bias in Hiring: If poorly designed or not properly validated, AI tools may screen out job candidates based on gender, race, age, or other protected characteristics. Candidates with disabilities may be unable to use certain AI recruiting tools or may be negatively affected by them, particularly tools that track speech patterns, facial expressions, or movements.

AI Best Practices

Rather than trying to ban these tools, employers should find ways to use them wisely and take steps to protect themselves and their businesses, including the following:

  • Consider whether to inform job candidates and employees about the use of AI tools in selection processes or evaluations. Ask candidates to confirm that they did not use AI to write their resume or cover letter.
  • Prepare to accommodate job candidates who disclose a disability that either prevents them from using an AI tool or might inadvertently cast that candidate in a negative light.
  • Have a diverse applicant pool before applying AI tools, and consider hiring an industrial-organizational psychologist to perform a validation analysis. Check the results generated by the AI tool against those of human decision-makers.
  • When negotiating agreements with AI companies, seek indemnification or, at a minimum, representations that their tools have been tested for bias, and secure their cooperation in defending against claims.
  • Stay up to date on existing and proposed laws, regulations, and guidance dealing with AI, and consider adopting written company policies that address use of AI.

If you are interested in our Technology Marathon 2023 presentation, AI in the Workforce: Hiring Considerations and the Benefits and Pitfalls of Generative AI, we invite you to subscribe to Morgan Lewis publications to receive updates on trends, legal developments, and other relevant areas.