Tech & Sourcing @ Morgan Lewis


Welcome to Part 3 of our Cracking AI and Outsourcing Conundrums series. In Part 1, we discussed at a high level the challenges of requiring outsourcing providers to drive innovation through the use of generative AI (GenAI) while at the same time complying with an outsourcing customer’s AI policies. In Part 2, we dove into the conundrum of balancing a company’s need for enhanced quality checks with the desire (by the company and the outsourcing provider) to drive productivity and realize savings.

In Part 3, we consider where and how a company’s service providers are using GenAI and whether they are fully disclosing such use (intentionally or otherwise). Many companies’ AI policies state that service providers are not permitted to use GenAI tools, large language models (LLMs), or technologies (whether proprietary or licensed from or otherwise made available by a third party) in performing the services without the prior written consent of the company.

Some go further, requiring that the service provider give the company prior notice of any capabilities added to or used to provide or administer the services that include GenAI technology and reasonably cooperate with the company’s reasonable requests for information regarding such capabilities. The disclosure requirements—driven by the company’s desire to know where GenAI is being used to understand the data and intellectual property implications as well as the quality and reliability of the output—are fairly clear.

So, all good then? The requirement is simply for the service provider to tell the company where the service provider plans to use GenAI in its services so that the company can make an informed decision as to whether the use meets the company’s standards and is acceptable. A question that is starting to emerge is whether service providers are performing the necessary diligence to identify all of the potential areas where GenAI is being used to support the services.

Determining GenAI Usage

A particular area that may have been overlooked in the past but that is starting to get attention involves back-office and productivity software solutions that have deployed or are starting to deploy GenAI capabilities, including those that may process or receive company or company client data. Other examples include the use of GenAI in help desks or customer contact centers (when the service provider is providing support directly or indirectly for its services).

Some considerations to keep in mind include the following:

  • If a service provider uses a word processing tool with GenAI features to produce documents or perform due diligence, are these solutions being disclosed?
  • What other third-party tools are in use that include GenAI and may leverage LLMs?
  • Is the company’s data being used to build or train an LLM?
  • Can the service provider say that the company owns the “deliverables” if these productivity tools were used to create them?
  • Does the use of such tools violate any of the other terms of the agreement regarding confidentiality, privacy, data usage, or intellectual property?

With the race to include GenAI in service and software offerings, it is important that the service provider demonstrate the due diligence it has performed with respect to the proprietary and third-party tools it uses, including those used on a dedicated basis for the company or ones used at an enterprise level.


This blog is the third in our Cracking AI and Outsourcing Conundrums series. Look out next week for our upcoming fourth and final blog in this series, where we will discuss data governance and ownership issues arising from the use of GenAI in outsourcing arrangements.

GenAI is exciting and has massive potential for enhancing the way we work, produce, and interact in a business environment. It promises to be powerful, and with power comes responsibility (and accountability). The purpose of this series is to embrace the promise while at the same time identifying the risks and to start a discussion on the best ways to mitigate the risks arising from the particular scenarios and use cases.