In Part 1 of this series, we discussed why artificial intelligence (AI) agents present unique challenges for technology and outsourcing contracts. As businesses move from developing these agents to deploying them in real-world operations, contracts must grapple with governance and accountability issues, such as how these tools are monitored, managed, and held accountable.
AI agents differ from traditional software because they adapt and evolve in real time. This dynamic pushes parties to explore new approaches to standard contracting mechanisms, such as the following:
- Service level obligations: Approaches include commitments to maintain audit logs and performance dashboards, along with obligations to intervene if the AI’s behavior changes over time or drifts away from expected results. Unlike traditional service level agreement (SLA) monitoring, where metrics such as uptime or response time are relatively static, AI monitoring may require vendors to detect when outputs deviate from expected parameters, such as by producing harmful, biased, or noncompliant results. This can involve obligations to pause or retrain models even if the system is technically “available” under standard SLAs (a simplified illustration of this kind of drift check appears after this list).
- Escalation and remediation: What happens if an AI agent generates harmful, biased, or noncompliant outputs? Contracts may define severity tiers and response SLAs, but AI-specific provisions are emerging that go beyond the usual service credits or escalation paths. For example:
- “Kill-switch” rights allow the customer to immediately suspend the AI agent if it behaves unpredictably or creates compliance risks, even if the vendor disagrees.
- Shadow-mode operation requires the AI to run in parallel without influencing live decisions until its performance is validated, giving the customer a safe environment to test changes (both this mechanism and the kill switch are illustrated in the second sketch after this list).
- Retraining windows commit the vendor to update or retrain models within a defined timeframe if material issues are detected, recognizing that AI behavior may drift or degrade in ways that cannot be fixed with a simple patch.
- Regulatory compliance and regulatory change: The legal framework for AI is moving quickly at both the state and federal levels in the United States, as well as in international markets such as the EU. Contracts increasingly need to address who is responsible for monitoring these developments and ensuring compliance. Customers may push vendors to (1) warrant that their AI agents comply with current law, (2) promptly update systems as regulations change, and (3) provide notice of material changes in the AI model or data sources. Vendors, in turn, may prefer a shared-responsibility approach, where obligations are divided depending on who controls the relevant process (e.g., training data, deployment environment, or end use). Some agreements also contemplate audit rights or compliance certifications to give customers greater assurance. Without clear allocation of these responsibilities, parties risk gaps in accountability if the AI later becomes subject to new rules or enforcement actions.
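To make the service-level monitoring concept concrete, the Python sketch below shows one way a periodic drift check might compare agreed metrics against contractual thresholds. All function names, metric definitions, and threshold values are hypothetical illustrations of the kind of check a contract might require, not a prescribed implementation or any vendor’s actual tooling.

```python
# Minimal sketch (illustrative only): a periodic monitoring check of the kind
# a contract might require a vendor to run. Names and thresholds are hypothetical.
from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass
class OutputMetrics:
    """Per-period metrics the parties agree to track for the AI agent."""
    flagged_output_rate: float   # share of outputs flagged as harmful/noncompliant
    drift_score: float           # distance of recent outputs from an agreed baseline


# Example thresholds that might be written into a statement of work (hypothetical).
MAX_FLAGGED_RATE = 0.02   # no more than 2% of outputs flagged in a period
MAX_DRIFT_SCORE = 0.15    # drift beyond this triggers remediation obligations


def evaluate_period(metrics: OutputMetrics) -> list[str]:
    """Return the contractual obligations triggered by this period's metrics."""
    triggered = []
    if metrics.flagged_output_rate > MAX_FLAGGED_RATE:
        triggered.append("notify customer and open severity-1 remediation ticket")
    if metrics.drift_score > MAX_DRIFT_SCORE:
        triggered.append("pause or retrain model within agreed retraining window")
    return triggered


if __name__ == "__main__":
    period = OutputMetrics(flagged_output_rate=0.035, drift_score=0.22)
    for obligation in evaluate_period(period):
        print(f"{datetime.now(timezone.utc).isoformat()} - TRIGGERED: {obligation}")
```

The contractual point the sketch tries to capture is that the trigger is tied to the quality of outputs, not to whether the system is “up,” so remediation duties can attach even when traditional availability SLAs are met.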
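Similarly, the second sketch below illustrates, again with hypothetical names and a deliberately simplified flag store, how a customer-controlled kill switch and shadow-mode routing might operate in practice: the agent’s output is recorded for validation, but live decisions continue to flow through the existing process until the customer switches the agent live.

```python
# Minimal sketch (illustrative only): a customer-controlled kill switch and
# shadow-mode routing. The flag store, agent interface, and function names are
# hypothetical, not any vendor's actual API.
from typing import Callable

# Customer-controlled flags; in practice these might live in a shared config
# store the customer can change without vendor involvement.
CUSTOMER_KILL_SWITCH = {"agent_enabled": True}
SHADOW_MODE = {"enabled": True}  # agent runs but does not drive live decisions


def decide(request: dict,
           legacy_process: Callable[[dict], str],
           ai_agent: Callable[[dict], str]) -> str:
    """Route a decision, honoring the kill switch and shadow mode."""
    if not CUSTOMER_KILL_SWITCH["agent_enabled"]:
        # Kill switch pulled: fall back to the existing process immediately.
        return legacy_process(request)

    if SHADOW_MODE["enabled"]:
        # Shadow mode: record the agent's output for later validation,
        # but the live decision still comes from the legacy process.
        _shadow_output = ai_agent(request)  # logged and compared offline
        return legacy_process(request)

    # Fully live: the agent's output drives the decision.
    return ai_agent(request)


if __name__ == "__main__":
    result = decide(
        {"applicant_id": 123},
        legacy_process=lambda r: "manual review",
        ai_agent=lambda r: "auto-approve",
    )
    print(result)  # "manual review" while shadow mode is on
```

From a contracting perspective, the relevant design choice is that suspension and validation mechanics sit under the customer’s control rather than depending on the vendor’s release cycle, which is what kill-switch and shadow-mode provisions are intended to secure.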
As businesses begin deploying AI agents into business-critical functions, contractual terms will play a central role in shaping governance and accountability. While no agreement can eliminate every risk, thoughtful provisions around oversight, remediation, and compliance can set expectations and create a framework for managing issues when they arise. As with other transformative technologies, the key is balancing innovation against the realities of evolving regulation and shared responsibility.