As AI continues to reshape technology transactions, deal lawyers have been compelled to revisit longstanding allocations of risk, update boilerplate, and develop new contracting mechanics to address novel uncertainty. While the core goals of technology deals remain the same—facilitating commercial outcomes and protecting the business—AI introduces distinctive pressure points across intellectual property, data, regulatory exposure, and liability frameworks.
- Addressing ownership and license rights: In the realm of AI, questions that once focused on software and content now extend to models and outputs, including raw model outputs, post-processed or human-refined outputs, and deliverables incorporating outputs. Prompt libraries, templates, and evaluator tools can also embody significant value. Agreements should address ownership or license rights for each category, including derivative works treatment, assignment mechanics, and usage rights.
- Training data rights remain critical: Rights in training data are increasingly central to AI contracts. Customers providing training data should consider required consents, use limitations, deletion and return rights, and data segregation. Vendors, on the other hand, may seek licenses to such data to improve their services, subject to regulatory and confidentiality limitations. Both parties will want to address lawful data collection, use and commercialization, and restrictions relating to sensitive or regulated data.
- Keep regulatory compliance top of mind: When negotiating agreements for the implementation and use of AI tools, regulatory compliance should be at the forefront of diligence and negotiations. In addition to AI-specific regulations, these transactions require review of regulations relating to privacy generally, import/export, and fraudulent practices. Allocation of monitoring and compliance responsibility is a critical deal point, and compliance programs, data privacy procedures, data governance controls, and security practices and certifications are central to diligence and solution evaluation.
- Mitigating liability friction between parties: Liability frameworks are being tailored to AI-specific risks and uncertainties. Damages caps are generally preserved, but carveouts are highly negotiated. There is friction between the customer's desire to hold the vendor responsible for the accuracy and quality of output and the vendor's desire to align liability with revenue generation. This has led parties to get creative with potential remedies, including re-work obligations and service-level credits.
- From the outset, keep termination in mind: What happens at termination is often as important as addressing transition-in issues. Provisions may include ongoing access to the tool during wind-down, measures for tool extraction where applicable, content and data access and return, and data deletion. These provisions help ensure a smooth transition and protect both parties' interests as the contract concludes.
- Distinct vs. general AI disclaimers: As AI becomes embedded in nearly every technology deal, lawyers should resist the temptation to rely on generic AI disclaimers. Instead, AI risk should be integrated throughout the contract, including in the IP, compliance, data governance and protection, and liability provisions—reflecting how the technology is actually used and the business impact it may have.
How We Can Help
Morgan Lewis’s technology transactions, outsourcing, and commercial contracts lawyers regularly advise clients on complex technology deals, global sourcing strategies, and evolving regulatory risk. If you have questions about the topics discussed above or would like to learn more, please reach out to any member of our team.