AI-enabled tools are moving rapidly into healthcare delivery, quality improvement, operations, revenue cycle management, and patient engagement. As the technology becomes more deeply embedded, the legal, regulatory, contractual, and data governance stakes increase substantially. The checklist below synthesizes key considerations and common risks identified in AI deployments across healthcare systems, drawing from legal, compliance, contracting, and operational perspectives.
1. Establish Lawful Authority for Data Use and Verify Regulatory Pathways
Before data is processed by an AI system, the organization must determine whether it has lawful authority to use the information. For protected health information (PHI) regulated by the Health Insurance Portability and Accountability Act of 1996 (HIPAA), this may derive from HIPAA’s treatment, payment, or healthcare operations (TPO) pathways, from a valid authorization, or, where applicable, from the use of a limited data set for research, public health, or healthcare operations purposes pursuant to a data use agreement. If none apply, the data must be de-identified to HIPAA standards.
This determination should guide early architectural and contracting decisions. Too often, AI solutions are deployed based on business objectives and technical feasibility, with compliance determinations occurring late—after datasets have already moved into environments inconsistent with regulatory constraints.
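As an illustration of the de-identification point above, a minimal pre-pipeline screen might flag records that still carry direct identifiers. The field names below are hypothetical placeholders; HIPAA’s Safe Harbor method enumerates 18 identifier categories (45 CFR § 164.514(b)(2)), and actual de-identification requires far more than a field-name check and should be validated by qualified personnel.

```python
# Minimal sketch: screen records for direct identifiers before they reach an
# AI pipeline. Field names are hypothetical; this is not a complete Safe
# Harbor implementation.

# Hypothetical set of fields treated as direct identifiers.
DIRECT_IDENTIFIER_FIELDS = {
    "name", "mrn", "ssn", "email", "phone", "street_address", "birth_date",
}

def contains_direct_identifiers(record: dict) -> bool:
    """Return True if the record carries any non-empty direct-identifier field."""
    return any(record.get(field) for field in DIRECT_IDENTIFIER_FIELDS)

def screen_batch(records: list[dict]) -> list[dict]:
    """Pass through only records with no flagged identifier fields."""
    return [r for r in records if not contains_direct_identifiers(r)]

batch = [
    {"mrn": "12345", "lab_value": 7.2},       # flagged: contains an MRN
    {"lab_value": 6.8, "age_band": "60-69"},  # passes the screen
]
print(screen_batch(batch))  # only the second record survives the screen
```

A screen like this is a backstop, not a substitute for determining lawful authority in the first place.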
2. Clarify Data Practices and Align Them with Patient and Public Expectations
Organizations should not assume their existing privacy notices or internal documentation cover AI-enabled data processing. AI introduces new data handling methods, potentially new third parties, and often new data flows that may not align with the organization’s existing publicly stated practices. Consumers expect transparency, and a lack of clarity can breed mistrust or be perceived as misconduct.
AI deployment requires not only regulatory compliance but clarity in what the organization actually does, what it promises not to do, and what rights individuals retain.
3. Treat Contracting as a Compliance Safeguard, Not a Procurement Box-Check
AI agreements are fundamentally compliance tools. Traditional technology contracting approaches tend to emphasize licensing and service availability, but AI demands detailed treatment of data rights, permitted uses, model training, derived work ownership, oversight rights, and liability allocation.
Contracts should:
- specify the limits and conditions under which the vendor can access or manipulate data;
- allocate who bears compliance responsibilities under HIPAA, consumer privacy laws, and emerging AI-specific obligations;
- require ongoing monitoring and system validation; and
- address data retention, portability, and exit planning.
Failure to negotiate these issues invites uncertainty and exposes the organization to potentially material regulatory and operational risks.
4. Execute BAAs That Are Purpose-Built for AI
When PHI is involved, organizations must determine whether the vendor meets the definition of a business associate. If so, a business associate agreement (BAA) is required. However, AI complicates the ordinary BAA structure.
Organizations should ensure BAAs:
- explicitly contemplate the AI use case;
- define allowed data flows in proprietary, hosted, or subcontracted AI environments;
- limit vendor reuse rights;
- prohibit PHI disclosure to environments beyond the intended design; and
- bind subcontractors through equivalent protections.
A generic BAA frequently will not satisfy these obligations and may leave the organization materially exposed. In the AI context, particular attention should be paid to provisions addressing data aggregation, use of PHI for the business associate’s proper management and administration (as permitted under 45 CFR § 164.504(e)), and de-identification of PHI, all of which can significantly affect how data may be used, disclosed, or retained.
5. Implement Meaningful Data Governance and Quality Controls
AI output quality depends on input quality. Undisciplined ingestion of incomplete, biased, inaccurate, inconsistent, or poorly structured data increases the risk of inaccurate or clinically unsafe outputs. Poor data quality also undermines the defensibility of outcomes if challenged.
Organizations should implement governance frameworks that include:
- validation of data prior to ingestion;
- traceability of source data used to generate recommendations;
- defined controls for error detection, escalation, and remediation; and
- rules for maintaining datasets used in model retraining.
Organizations should also exercise heightened caution when using unstructured data (such as free-text clinical notes), which may be incomplete, ambiguous, or difficult to validate. Where unstructured data is used, additional controls (such as preprocessing, normalization, and human review) should be implemented to mitigate risk.
AI should not be permitted to consume data without appropriate oversight, nor should its outputs be relied upon without meaningful validation and review.
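The pre-ingestion validation controls described above can be sketched in miniature. The field names, plausible-value range, and provenance requirement below are hypothetical assumptions for illustration; a production framework would encode organization-specific rules and route failures into the escalation and remediation pathways the governance program defines.

```python
# Sketch of pre-ingestion validation: reject records that are incomplete,
# implausible, or missing provenance. Field names and thresholds are
# hypothetical placeholders, not a recommended schema.

REQUIRED_FIELDS = {"patient_ref", "measurement", "source_system"}

def validate_record(record: dict) -> list[str]:
    """Return a list of validation errors; an empty list means the record passes."""
    errors = [f"missing field: {f}" for f in REQUIRED_FIELDS if f not in record]
    value = record.get("measurement")
    if value is not None and not (0 <= value <= 500):  # hypothetical plausible range
        errors.append(f"measurement out of range: {value}")
    if "source_system" in record and not record["source_system"]:
        errors.append("no provenance: source_system is empty")  # traceability control
    return errors

record = {"patient_ref": "abc", "measurement": 9000, "source_system": "ehr"}
print(validate_record(record))  # ['measurement out of range: 9000']
```

Records that fail validation should be quarantined and escalated rather than silently dropped, so that error patterns themselves remain traceable.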
6. Avoid Overreliance on Models with Unknown Limitations
AI systems do not guarantee accuracy. Overconfidence in algorithmic outputs, particularly when clinical or financial decisions are involved, creates persistent exposure. Organizations must understand the model’s intended use(s), limitations, and the circumstances in which its outputs should trigger human intervention.
This is not solely a technical requirement; it is a liability mitigation strategy.
7. Treat AI as a Continuous Governance Obligation, Not a One-Time Installation
AI systems are dynamic. They must be monitored, evaluated, and retrained. Contracts need to explicitly allocate post-deployment responsibilities, ensuring that:
- monitoring occurs at defined intervals;
- performance degradation is assessed and corrected;
- retraining occurs when dataset shifts materialize; and
- escalation pathways are fully defined.
Organizations that assume AI will function the same on day 300 as on day 1 misunderstand the technology and are at increased risk.
8. Monitor Regulatory Developments
Healthcare AI is increasingly in the crosshairs of government oversight. Although there is still no comprehensive federal AI statute and recent federal policy emphasizes reducing regulatory barriers to innovation (while relying on existing agencies such as the Federal Trade Commission to police unfair or deceptive practices and evaluate the impact of state-level requirements), states continue to advance their own AI frameworks. This fragmented and evolving compliance landscape complicates national deployment strategies.
In practice, organizations should expect continued growth in state-level AI regulation, including (1) laws of general applicability governing AI use, (2) more comprehensive frameworks such as those emerging in states like Colorado, and (3) healthcare-specific requirements applicable to payors, providers, and health data. In addition, guidance from state regulators (such as departments of health and healthcare authorities) will continue to shape expectations for AI governance and use.
Organizations should maintain ongoing monitoring for:
- modifications to HIPAA guidance;
- emerging state laws, including those regulating health-related data flows;
- expanded federal agency focus on algorithmic fairness and deceptive data practices; and
- Food and Drug Administration oversight for systems approaching clinical decision support.
AI compliance is not static; it is a living and evolving risk environment.
9. Address Ownership and Control Issues Up Front
Inputs, models, refinements, and outputs must be governed contractually. Parties should avoid ambiguity around:
- who owns refined model weights or parameters;
- who may leverage outputs for commercial benefit;
- what rights persist after termination; and
- how portability and transition will occur.
Clarity here prevents downstream disputes and strengthens defensibility.
This checklist is by no means comprehensive, and applicable law remains in flux. To address compliance questions, reach out to one of our skilled healthcare and privacy lawyers.
Continue exploring the series:
AI in Healthcare: Executive Summary
In this article series, our healthcare, privacy, and FDA lawyers are covering the fundamentals for what providers, physicians, hospitals, and the vendors who support them need to know about how to maximize the impact of AI in their organizations while protecting important patient data and maintaining regulatory compliance.
AI in Healthcare: Key Legal Questions to Address Before Deployment
This article outlines key questions and compliance concepts to consider based on common scenarios in which healthcare entities “feed the machine” with sensitive data.
Healthcare AI Deployment: Compliance Through Contracting, BAAs, and Data Governance
This article highlights key legal considerations for using AI systems with protected health information (PHI), with a focus on agreements, business associate obligations, and data governance.