Tech & Sourcing @ Morgan Lewis


While the regulatory landscape around artificial intelligence (AI) continues to evolve, navigating contractual arrangements and apportioning risk for the use of AI may seem like stepping into the unknown. In this blog post, we consider how a few familiar concepts within commercial contracts may be applied to the provision and use of AI tools as part of services.

Defining ‘Artificial Intelligence’

The use cases for AI applications are increasing exponentially—from generative AI tools to analytical and reporting AI tools such as transaction monitoring and risk management visualization. The parties, particularly customers, may prefer to avoid a narrow definition of AI; a broad contractual definition would capture any algorithmic, interpretive, machine learning, or other AI processes.

Disclosure/Due Diligence of AI Use

A first step in contracting for the use of AI is to understand how it may be used as part of services and/or products. Regulators at the US federal and state level and in the United Kingdom expect companies to understand how operational decisions and processes utilize, or are based on, the outputs of AI.

  • General disclosure obligation: A service provider may be contractually obligated to keep a customer generally informed of AI usage as part of services and/or products.
  • Information requests: A customer may seek a right to receive specific information on the use of AI as part of the services, on request. A service provider might counterbalance such a right with protecting its commercially sensitive business information and the confidential information of other customers whose data sets are used to train the AI tool or that benefit from the AI tool.
  • Issue notification: If a party detects issues with the use or output of AI, then each party will likely seek a mutual obligation of prompt notification. Key points of negotiation may include
    • the scope of “issues.” Aside from data breaches, these could include inaccurate, biased, or unrepresentative outputs;
    • time period and scope of notification; and
    • consequences of any issues. Could the parties agree to a remediation plan? Will the customer have a right to suspend the use of AI (or the services themselves)?

Performance Standard

Where services utilize AI, customers may expect a service provider to commit that its use of AI:

  • will not degrade the contractual standard for performance of the services;
  • produces accurate and representative outputs and does not take into consideration certain protected characteristics, unless the customer has provided preapproval; and
  • does not develop harmful or inappropriate behaviors.

The extent to which a service provider is able to meet these expectations will depend on various factors, including how the applicable AI tool is procured and how it is trained on data sets. For example, if such data sets are provided by the customer, then a service provider may seek to carve out errors or inaccuracies in the training data sets from its responsibility.

Compliance with Laws

Given the evolving layers of AI regulation, the contractual allocation of compliance responsibility within the AI ecosystem is becoming increasingly important. Broadly, responsibility for ensuring an AI tool does not violate applicable laws may fall on the party providing the dataset(s) that train the AI tool. A key negotiation point may be whether it is the service provider’s responsibility not to cause the customer to violate applicable laws through its use of the AI tool, or whether the customer alone is responsible for its own compliance obligations (e.g., sector-specific regulations).

Further, if an AI tool is used to collect or process personal information, then it is crucial to ensure that this data is handled in accordance with relevant privacy laws and regulations.

Ownership and Licensing of Intellectual Property Rights

The ownership of intellectual property rights (IPR) in the layers of input to the AI tool should be clearly delineated, since each party will expect the other to stand behind the IPR that it contributes, typically through indemnification against third-party claims of IPR infringement.

Contractual allocation of ownership of IPR in the outputs of the AI tool will be another key commercial consideration. Customers may consider that they should own the IPR in such outputs; however, if the underlying concern is the ability to use those outputs without restriction, then this could instead be achieved through licensing terms. An ongoing conversation in several jurisdictions is whether the AI tool itself can own IPR in its generated content.

It is also important from both a legal and regulatory perspective to consider licensing arrangements in the event of a termination of use of the AI tool, whether planned or sudden, in order to minimize service disruption. The retention and portability of data analytics from a service utilizing an AI tool is another key commercial consideration.

These are just a few contract pointers when AI tools are used as part of the provision of services. We will continue to monitor developments around contracting for AI. Stay tuned!