
AI in Healthcare: Key Legal Questions to Address Before Deployment

AI is rapidly expanding across healthcare, but before deploying or testing AI tools, organizations must address key legal questions around patient data, privacy, and model training.
May 2026

This article outlines key questions and compliance concepts to consider based on common scenarios in which healthcare entities “feed the machine” with sensitive data.

Key Takeaways

  • AI tools using protected health information may trigger HIPAA obligations—even during testing. 
  • AI deployments often require updating business associate agreements to address data use and model training.
  • State privacy and AI laws may impose additional requirements beyond HIPAA.

1. Does HIPAA Apply?

The first question is simple: Are you subject to the Health Insurance Portability and Accountability Act of 1996 (HIPAA)? See 42 U.S.C. § 1320d-6; 45 C.F.R. Parts 160, 162, and 164. If so, the HIPAA Privacy, Security, and Breach Notification Rules may restrict and govern your implementation and use of AI.

2. Is Protected Health Information (PHI) Being Processed?

HIPAA does not restrict all health data—only PHI. Thus, organizations must assess whether the AI tool will process, store, transmit, or train on data that qualifies as PHI.

Generally speaking, PHI is any information in an electronic health record or designated record set that can be used to identify an individual and that was created, used, or disclosed in the course of providing a healthcare service such as diagnosis or treatment, or used for billing or payment. Essentially, ask two questions: (1) does the information identify the individual, and (2) does it tell you something about the person’s health, or about treatment or payment for care? If the answer to both questions is yes, the information is likely PHI.

This threshold assessment is crucial because processing healthcare data within an AI system may constitute a HIPAA-regulated “use,” thereby triggering various privacy, security, and other compliance obligations.
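For teams building intake or screening workflows, the sketch below illustrates the two-question test as a rough, first-pass triage filter. It is only a sketch: the field names are hypothetical, and whether data qualifies as PHI is ultimately a legal determination, not a programmatic one.

```python
# Illustrative sketch only: a rough first-pass screen applying the two-question
# heuristic above. Field names and logic are hypothetical; whether data is PHI
# is ultimately a legal determination, not a programmatic one.

# A subset of direct identifiers (HIPAA lists 18 identifier categories)
IDENTIFIER_FIELDS = {"name", "mrn", "ssn", "email", "phone", "address", "dob"}

# Fields that say something about health, treatment, or payment
HEALTH_FIELDS = {"diagnosis", "procedure", "medication", "claim_amount", "payer"}

def likely_phi(record: dict) -> bool:
    """Return True if the record (1) identifies an individual and
    (2) conveys something about their health, treatment, or payment."""
    identifies_individual = any(record.get(f) for f in IDENTIFIER_FIELDS)
    describes_health = any(record.get(f) for f in HEALTH_FIELDS)
    return identifies_individual and describes_health

# Example: answers "yes" to both questions, so treat as PHI pending legal review
print(likely_phi({"name": "J. Doe", "diagnosis": "E11.9"}))  # True
```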

When PHI is involved, organizations should evaluate whether processing occurs in a closed-loop environment, whether data are segregated across ingestion and output stages, and whether any third parties can access the data. Importantly, a regulated entity may not submit PHI to publicly available AI models (though of course patients and insurance members can, and do, enter their own health and insurance information into these tools).

TIP: Entities receiving PHI should also evaluate any limits on the use of PHI, such as limits imposed by an applicable patient or member authorization, relevant contracts, and state privacy or AI laws. Likewise, regulators in many states have published their own guidance, or adopted national guidance, on expected standards for the use of AI in healthcare.

3. Does a HIPAA Treatment, Payment, or Operations (TPO) Exception Permit the Use of PHI Without an Authorization?

Where PHI is being used, organizations should determine whether a HIPAA authorization is required or whether the use is permitted under HIPAA’s TPO exceptions.

TIP: A “HIPAA authorization” is not the same as a consent form. To be valid, an authorization form must meet several technical requirements and satisfy 45 C.F.R. § 164.508. If in doubt as to whether your authorization satisfies the law, consult a lawyer.

HIPAA contains many exceptions to permit the acquisition, use, or disclosure of PHI. Among these are the TPO exceptions: 

  • Treatment is the provision, coordination, or management of healthcare and related services by one or more healthcare providers. 
  • Payment relates to activities such as determining eligibility or coverage, billing, claims and collection activities, and reviewing medical necessity and coverage, among others. 
  • Healthcare Operations includes performing quality assessments, analyzing population health data, or detecting fraud.

Common AI-enabled activities that may fall within TPO include:

  • Treatment: Conducting first-pass reviews of radiograph imaging to spot findings that warrant further investigation
  • Payment: Reviewing and processing billing information for accuracy or completeness
  • Healthcare Operations: Analyzing massive, aggregated datasets to identify trends and patterns across distinct population sets

Even if the initial use falls within a TPO exception, that exception generally does not permit using PHI to train a general-purpose model. Training with PHI requires a proprietary closed-loop system, appropriate permissions, and adequate destruction protocols. Alternatively, organizations may engage vendors that specialize in masking or obfuscating healthcare data for use in AI models and tools, or, where feasible, use de-identified data for training purposes. 
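As a purely illustrative sketch of the de-identification path, the snippet below strips direct identifier fields from records before they are used for training. Field names are hypothetical, and this is not a complete Safe Harbor implementation: that method requires removing all 18 identifier categories under 45 C.F.R. § 164.514(b), and expert determination or specialist vendors are common alternatives.

```python
# Illustrative sketch only: dropping direct identifier fields before records are
# used for training. Field names are hypothetical; HIPAA's Safe Harbor method
# requires removing all 18 identifier categories (45 C.F.R. § 164.514(b)), and
# expert determination or specialist vendors are common alternatives.

DIRECT_IDENTIFIERS = {"name", "mrn", "ssn", "email", "phone", "address", "dob"}

def strip_direct_identifiers(record: dict) -> dict:
    """Return a copy of the record with direct identifier fields removed."""
    return {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}

raw = {"name": "J. Doe", "mrn": "12345", "diagnosis": "E11.9", "age": 47}
print(strip_direct_identifiers(raw))  # {'diagnosis': 'E11.9', 'age': 47}
```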

4. Are Required Business Associate Agreements in Place?

AI implementations frequently involve multiple technology vendors. Organizations subject to HIPAA or working with HIPAA-regulated entities should evaluate whether a business associate agreement (BAA) is required to share PHI, and whether the BAA permits you to use the health data in your intended manner. The BAA should expressly address AI processing, aggregation, de-identification, and data transmission, within the boundaries of the permissions for business associates in 45 C.F.R. § 164.504.

5. Is the AI Tool a Public or Proprietary Resource?

AI deployment decisions increasingly depend on whether a system is publicly accessible or proprietary. Key considerations include the following (see the illustrative checklist after this list):

  • Whether the model retains or reuses data
  • Whether any upstream or downstream third parties can access PHI
  • Whether data is segregated between ingestion, training, and output
  • Whether data retention and destruction protocols are in place
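For teams that track vendor diligence programmatically, one way to capture the answers to these questions is a small structured checklist like the sketch below; the class and field names are hypothetical and simply restate the diligence points above rather than any required format.

```python
# Illustrative sketch only: recording the diligence questions above as a
# structured checklist. Class and field names are hypothetical.
from dataclasses import dataclass

@dataclass
class AiToolAssessment:
    retains_or_reuses_data: bool           # does the model retain or reuse data?
    third_parties_can_access_phi: bool     # upstream/downstream access to PHI?
    data_segregated_by_stage: bool         # ingestion vs. training vs. output
    retention_destruction_protocols: bool  # retention and destruction protocols?

    def open_issues(self) -> list[str]:
        """Flag answers that warrant follow-up before PHI is processed."""
        issues = []
        if self.retains_or_reuses_data:
            issues.append("model retains or reuses submitted data")
        if self.third_parties_can_access_phi:
            issues.append("third parties can access PHI")
        if not self.data_segregated_by_stage:
            issues.append("no segregation between ingestion, training, and output")
        if not self.retention_destruction_protocols:
            issues.append("no data retention/destruction protocols")
        return issues

# Example: a public tool that retains prompts and lacks destruction protocols
print(AiToolAssessment(True, False, False, False).open_issues())
```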

6. What Other Privacy Laws Might Apply (Beyond HIPAA)?

AI use in healthcare is increasingly regulated outside HIPAA. Multiple state privacy laws, including those governing consumer health data, may apply even when HIPAA does not. Examples include the California Consumer Privacy Act and Washington’s My Health My Data Act. Several states, such as Colorado, have more restrictive AI-specific laws whose restrictions on use must also be considered. While the White House has issued an executive order directing the Federal Trade Commission (FTC) and others to review (and potentially unwind) state-level regulation, whether such efforts will ultimately be implemented remains uncertain. 

Additionally, some state privacy regimes impose heightened protections for genetic, behavioral health, or other particularly sensitive categories of health information, which also need to be considered.

Organizations operating in multiple states must evaluate varying compliance requirements and cannot assume HIPAA preempts state law.  HIPAA provides a regulatory floor, such that more restrictive state laws can impose additional or greater compliance obligations.

7. Are FDA or Other Federal Regulations Triggered?

AI tools that provide clinical decision support may be subject to US Food and Drug Administration (FDA) regulation. We will explore this in a future article.

So far, the federal government has embraced the use of AI in government programs. For instance, the Department of Health and Human Services (HHS) recently issued a publication on its forthcoming AI strategic initiative to expand AI use in the healthcare industry. We believe this pattern is likely to hold, at least for the remainder of this administration.

8. Who Owns the Data and Outputs?

Ownership and control of AI training data and model outputs should be clearly allocated by contract. Many organizations have developed a standardized AI rider addressing the permitted uses of their data, ownership of that data, and what happens to the data once the contract is terminated. 

Takeaways

Organizations need to stay informed regarding applicable statutes and emerging requirements affecting healthcare AI. By proactively addressing these considerations, organizations can responsibly leverage AI while safeguarding patient and member trust and meeting their compliance obligations.
