
Healthcare AI Deployment: Compliance Through Contracting, BAAs, and Data Governance

As AI adoption accelerates, healthcare industry stakeholders must focus on how deployment is structured—particularly data use, contractual safeguards, and governance.
May 2026

This article highlights key legal considerations for using AI systems with protected health information (PHI), with a focus on agreements, business associate obligations, and data governance. Explore all articles in our AI in Healthcare series.

Key Takeaways

  • AI deployments involving protected health information often require business associate agreements, which should clearly describe permitted data access and use.
  • Organizations must map how data enters, moves through, and exits AI systems.
  • Contracts should define data ownership, use rights, retention, and restrictions on reuse.
  • Protected health information should not be used in public AI tools or to train general-purpose models.
  • State privacy laws may apply even where HIPAA does not.

Pivot to Compliance: BAAs and Key Agreements

Where the Health Insurance Portability and Accountability Act of 1996 (HIPAA) applies, healthcare organizations leveraging AI must ensure that any vendor or partner that creates, receives, maintains, or transmits protected health information (PHI) executes a business associate agreement (BAA). That BAA should explicitly permit the contemplated upstream and downstream data flows and access within the AI operating environment, a question separate from how PHI is ultimately used in these tools and models.

Business associate agreements define how vendors managing PHI may, and may not, use that PHI on behalf of a HIPAA-regulated covered entity, such as a hospital or physician group. All parties should carefully assess their needs so that their BAAs are appropriately negotiated and contain adequate terms permitting their specific uses of health data for AI purposes. Covered entities should also be mindful of the data permissions that increasingly appear in BAAs with AI and other health tech vendors (e.g., rights to de-identify or aggregate data).

Even where HIPAA does not apply, entities and users inputting health data into AI tools or systems must be cognizant of state law requirements and the ever-present risk of common law privacy claims. Individuals have a reasonable expectation of privacy in their personal information, which may include expectations about their health information and how it is used and shared. Providing notice of data practices and explaining the purpose of data collection and processing is particularly important to set expectations, including offering individuals either the ability to object to the processing of their information or the option not to share it in the first instance. Certain states also impose consumer health data restrictions that should be considered.

Where HIPAA does not apply, data use or data processing agreements can help identify and allocate each party's responsibilities and obligations when handling, transmitting, and using data. Such agreements can specify who owns and may use inputs, intermediate artifacts, and outputs; who is responsible for securing any necessary consents or authorizations to use data; and who is responsible for retaining or deleting data.

Mapping Data Rights

Because data processors cannot put PHI into publicly available AI tools and cannot make PHI accessible to third parties absent a BAA, ownership and licensing clauses must align with these constraints and prohibit vendor reuse, sharing, or commingling that would breach HIPAA obligations.

In AI-enabled healthcare environments, clearly defining who owns, controls, and may use data—as well as mapping how that data will be used—is central to compliance with HIPAA’s Security Rule. This includes understanding what data is being input, how it is processed, and what rights and obligations apply to data retention and disclosures. Covered entities and business associates are required to conduct an “accurate and thorough assessment of the potential risks and vulnerabilities to the confidentiality, integrity, and availability” of electronic protected health information (ePHI). 45 CFR § 164.308(a)(1)(ii)(A). That obligation cannot be satisfied without a clear, operational understanding of how AI systems actually interact with ePHI.

AI tools often move data in ways that are not obvious. Patient information may be entered directly, pulled in automatically from other systems, and temporarily stored, logged, or transformed into system outputs. For this reason, organizations should map where data comes from, how and when it enters an AI system, where it is stored (even briefly), who can access it, and what information is included in the system’s outputs. Different stages of this process may raise different privacy and security risks, all of which must be evaluated and addressed.
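The mapping exercise described above can be sketched as a structured inventory of data flows. The schema below is purely illustrative: the field names, categories, and the BAA-flagging heuristic are assumptions for demonstration, not a prescribed or regulatory format.

```python
from dataclasses import dataclass

# Illustrative data-flow inventory entry for an AI system handling
# patient information. Field names are hypothetical, not a standard schema.
@dataclass
class DataFlowRecord:
    source: str                  # where the data originates (e.g., an EHR export)
    entry_point: str             # how and when it enters the AI system
    storage_locations: list      # everywhere it rests, even transiently (logs, caches)
    authorized_accessors: list   # employees, vendors, subcontractors with access
    contains_phi: bool           # whether the flow carries PHI/ePHI
    output_exposure: str         # what information can appear in system outputs

def flows_needing_baa_review(records):
    """Flag flows where PHI is accessible to an external party; under HIPAA,
    each such party would generally need a BAA in place (simplified heuristic)."""
    return [r for r in records
            if r.contains_phi
            and any("vendor" in a or "subcontractor" in a
                    for a in r.authorized_accessors)]

# Example inventory with one hypothetical flow.
inventory = [
    DataFlowRecord(
        source="EHR clinical notes",
        entry_point="clinician pastes text into a summarization prompt",
        storage_locations=["vendor inference logs"],
        authorized_accessors=["internal clinicians", "vendor support staff"],
        contains_phi=True,
        output_exposure="summary may repeat patient identifiers",
    ),
]

for r in flows_needing_baa_review(inventory):
    print(f"BAA review needed: {r.source} -> {r.entry_point}")
```

Keeping the inventory in a structured form like this makes it easier to evaluate each stage of the flow for the distinct privacy and security risks the text describes.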

HIPAA also requires organizations to limit access to patient information and to monitor how it is used. See 45 CFR §§ 164.308, 164.312. As part of data mapping, organizations should identify which employees, vendors, cloud providers, or subcontractors can access patient data within the AI environment, and ensure that access is restricted, logged, and reviewed. Without this visibility, it is difficult to detect misuse, prevent unauthorized access, or respond to security incidents.
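The access-review step can likewise be sketched in code. The log format, user names, and approved-access list below are hypothetical; this is a minimal illustration of flagging PHI access by identities outside an approved set, not a complete audit-logging implementation.

```python
# Illustrative periodic access-log review: flag any access to PHI by an
# identity not on the approved-access list. All names are hypothetical.
access_log = [
    {"user": "dr.smith", "resource": "patient_summary", "phi": True},
    {"user": "vendor.batch.job", "resource": "training_corpus", "phi": True},
    {"user": "analyst.lee", "resource": "deidentified_stats", "phi": False},
]
approved_phi_access = {"dr.smith", "nurse.jones"}

def review_access(log, approved):
    """Return log entries touching PHI made by users outside the approved set."""
    return [entry for entry in log
            if entry["phi"] and entry["user"] not in approved]

for entry in review_access(access_log, approved_phi_access):
    print(f"Unapproved PHI access: {entry['user']} -> {entry['resource']}")
```

A review loop of this shape, run on real audit logs, gives the visibility the rule contemplates: restricted access that is also logged and periodically checked.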

AI systems change over time. Software updates, configuration changes, and new integrations can alter how data flows and who has access to it. HIPAA requires organizations to regularly review and update their security measures. 45 CFR § 164.308(a)(8). Data mapping should therefore be an ongoing process, updated as AI tools evolve.
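Treating the data map as a living document can be as simple as diffing the prior map against the current one so that changed or newly added flows trigger re-review. The map structure and flow names below are illustrative assumptions.

```python
# Illustrative drift check: compare a prior data map against the current one
# so that new or changed flows trigger a fresh risk review.
# Flow names and recorded details are hypothetical.
previous_map = {
    "clinical-notes-summarizer": {"stores": ["vendor inference logs"]},
}
current_map = {
    "clinical-notes-summarizer": {"stores": ["vendor inference logs",
                                             "new analytics database"]},
    "patient-intake-chatbot": {"stores": ["chat transcripts"]},
}

def flows_to_reassess(old, new):
    """Flag flows that are new or whose recorded handling has changed."""
    return [name for name, details in new.items()
            if name not in old or old[name] != details]

for name in flows_to_reassess(previous_map, current_map):
    print(f"Re-review required: {name}")
```

Running such a comparison on a regular cadence, or after each software update or integration change, keeps the mapping current as the rule on periodic evaluation requires.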

Separately, organizations must address “data rights” from a contractual perspective. While data mapping focuses on how information moves through systems, data rights govern who owns the data, who may use it, and whether vendors may retain, reuse, or disclose it. Clearly defining these rights in agreements (particularly with AI vendors) is essential to ensuring that patient information is not used in ways that would violate HIPAA, including impermissible reuse, sharing, or commingling.

In practical terms, effective data mapping, combined with clearly defined data rights, helps healthcare organizations understand how patient information is handled in AI systems, identify risks before they become problems, and demonstrate that they are taking HIPAA compliance seriously. In the AI context, knowing what is happening with patient data is the foundation of responsible and lawful use.

Using PHI with AI Systems: Practical Guardrails

  • Where PHI is used with AI systems, organizations should avoid or limit training models on PHI unless the model operates in a closed, proprietary environment developed by or for the covered entity and only after analyzing applicable HIPAA permissions, exceptions, and post-use data handling requirements.
  • Organizations should confirm that BAAs explicitly permit access to PHI by all entities involved in the AI operating environment, including subcontractors and hosting providers. PHI should not be entered into publicly available AI tools. Although it may be possible to obtain individual authorizations permitting such use, doing so raises significant legal, operational, and reputational risks, including potential exposure under state privacy laws and common law claims, that should be carefully evaluated before proceeding.
  • When proprietary or vendor models are used, PHI should not be accessible to third parties absent appropriate HIPAA-compliant agreements and safeguards.
  • Organizations should also assess heightened privacy risks associated with AI, including re-identification (the risk of which increases with large, commingled data sets) and misuse, and account for state-law protections that may impose additional obligations beyond HIPAA. Clear, plain-language notices should describe how AI is used, the level of human oversight, and the system’s limitations, recognizing that responsible AI governance requires ongoing legal and operational evaluation.
