Governor Greg Abbott recently signed into law the Texas Responsible Artificial Intelligence Governance Act (TRAIGA or Act), which takes effect on January 1, 2026. Texas joins California, Colorado, and Utah as one of the frontrunners in enacting comprehensive legislation governing the responsible use of AI across sectors. This LawFlash highlights notable provisions of the Act and offers suggestions for how covered entities can prepare in advance of January 1, 2026 to ensure that their AI use complies with the Act.
ENTITIES COVERED BY THE LAW
The law is broad in its application: it applies to any individual or entity, including government agencies, that develops AI systems in Texas; offers a product or service used by Texas residents; or promotes, advertises, or conducts business in Texas.
AI USE IN TEXAS
TRAIGA prohibits the development and use of AI systems for certain purposes, including the following:
- Harmful Behavior – AI systems cannot be developed or deployed to intentionally incite or encourage self-harm or criminal activity.
- Infringement of Constitutional Rights – AI systems cannot be developed or deployed to restrict, impair, or infringe upon an individual’s constitutional rights.
- Unlawful Discrimination – AI systems cannot be developed or deployed with the intention of discriminating against a protected class, for example through “social scoring” or the use of biometric data without an individual’s consent. Notably, a disparate impact by itself is not sufficient to give rise to a finding of intent.
- Sexually Explicit Content – AI systems cannot be developed or deployed with the intention of producing or distributing sexually explicit deepfakes or child pornography, or of generating text conversations that describe sexual content while impersonating a child.
AI USE OF BIOMETRIC DATA
TRAIGA also impacts the use of AI with respect to biometrics, including fingerprint, voiceprint, eye retina, or iris, or “other unique biological pattern or characteristic that is used to identify a specific individual.” The Act amends the Texas Capture or Use of Biometric Identifier Act (CUBI) to clarify that individuals are not implicitly providing consent for the use of their biometric data through publicly available information, including information available on the internet, unless the individual made the information public.
The Act carves out specific exceptions to the consent requirement for the use of biometric data under CUBI:
- The use of biometric data in AI systems generally, unless the AI system is used for the purpose of identifying a specific individual
- Biometric data used to train AI systems for fraud prevention, security monitoring, and responding to cyber and criminal threats or similar malicious activity
- Financial institutions’ use and retention of voiceprint data, a preexisting carveout that the Act does not alter
AI USAGE DISCLOSURE
While TRAIGA does not require disclosure for all AI uses, it does require disclosures by legal persons, government agencies, and healthcare service providers to “consumers” interacting with AI:
- Persons (defined to include various legal and commercial entities, such as corporations, partnerships, and business trusts) must disclose to consumers that they are interacting with an AI system before or at the time of interacting with the AI system.
- Government agencies must disclose to consumers that they are interacting with an AI system before or at the time of interacting with the AI system.
- Healthcare service providers must disclose their use of AI if the AI system is used in the treatment of patients.
Any disclosure made by legal persons, government agencies, and healthcare providers must be clear, conspicuous, and written in easily understandable language.
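For entities standing up consumer-facing chatbots, the following is a minimal, hypothetical sketch of one way a system might surface the required disclosure before the consumer’s first exchange. The disclosure wording, function names, and generate_reply() placeholder are illustrative assumptions, not language mandated by TRAIGA; actual disclosure text should be reviewed by counsel.

```python
# Hypothetical sketch: surfacing an AI-interaction disclosure before the
# consumer's first exchange, per TRAIGA's "before or at the time of
# interacting" requirement. All names and wording here are illustrative.

AI_DISCLOSURE = (
    "Notice: You are interacting with an artificial intelligence system, "
    "not a human."
)

def generate_reply(message: str) -> str:
    # Placeholder for whatever model or service actually powers the chatbot.
    return f"(AI-generated response to: {message!r})"

def start_chat_session() -> None:
    # Present the disclosure clearly and conspicuously *before* any exchange.
    print(AI_DISCLOSURE)
    while True:
        user_message = input("You: ")
        if user_message.lower() in {"quit", "exit"}:
            break
        print("Assistant:", generate_reply(user_message))

if __name__ == "__main__":
    start_chat_session()
```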
REGULATORY SANDBOX
TRAIGA directs the Department of Information Resources (DIR) to create a regulatory sandbox program in which entities can apply to test and develop their AI systems without fear of regulatory action for potential violations of TRAIGA. The program is intended to encourage free and open innovation and to generate reporting that can inform DIR's recommendations to the legislature regarding future legislation. Participants would have 36 months to test and train their systems and would be required to submit quarterly reports to DIR on performance metrics, risk mitigation, and feedback from users and stakeholders.
ENFORCEMENT AND PENALTIES
Failure to comply with TRAIGA can result in significant penalties for each violation. TRAIGA grants the Texas attorney general (AG) exclusive enforcement authority. While there is no private right of action under the Act, the AG is required to create an online reporting mechanism by which individuals can report potential TRAIGA violations.
TRAIGA takes a phased approach to enforcement and penalties.
- If the AG discovers a violation, the AG must provide written notice of the alleged violation to the AI system developer or deployer. The developer or deployer then has 60 days to cure the alleged violation, provide documentation detailing the cure, and update its internal policies and procedures to prevent future violations.
- If the alleged violations are not cured within 60 days, the AG may bring an enforcement action and seek injunctive relief, attorney fees, and civil penalties as follows (a hypothetical exposure calculation appears after this list):
- $10,000–$12,000 per curable violation
- $80,000–$200,000 per uncurable violation
- $2,000–$40,000 per day for ongoing violations
- The AG may also recommend additional enforcement by other state agencies in the form of license penalties, including suspension or revocation, and fines up to $100,000.
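To make the tiers concrete, here is a hypothetical back-of-the-envelope sketch of statutory civil-penalty exposure using the ranges above. Courts set actual penalties within those ranges; the function name and the example inputs are illustrative assumptions.

```python
# Hypothetical exposure calculator using TRAIGA's civil-penalty ranges.
# Actual penalties are determined within these ranges case by case.

CURABLE_RANGE = (10_000, 12_000)       # per curable violation
UNCURABLE_RANGE = (80_000, 200_000)    # per uncurable violation
ONGOING_DAILY_RANGE = (2_000, 40_000)  # per day for an ongoing violation

def exposure_range(curable: int, uncurable: int, ongoing_days: int) -> tuple[int, int]:
    """Return the (minimum, maximum) statutory civil-penalty exposure."""
    low = (curable * CURABLE_RANGE[0]
           + uncurable * UNCURABLE_RANGE[0]
           + ongoing_days * ONGOING_DAILY_RANGE[0])
    high = (curable * CURABLE_RANGE[1]
            + uncurable * UNCURABLE_RANGE[1]
            + ongoing_days * ONGOING_DAILY_RANGE[1])
    return low, high

# Example: two uncurable violations that continue for 30 days could range
# from $220,000 to $1,600,000 under these assumptions.
print(exposure_range(curable=0, uncurable=2, ongoing_days=30))
```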
DEFENSE AND SAFE HARBOR PROVISIONS
TRAIGA establishes a rebuttable presumption that an entity used reasonable care; the AG must therefore first meet its burden of proving a violation of TRAIGA. TRAIGA further provides the following safe harbors and affirmative defenses to protect against liability under the Act:
- A third party misused the AI system in violation of TRAIGA
- The violation was discovered through good-faith testing or audits
- The entity followed state-established guidelines (which may include future guidelines published by the Texas Artificial Intelligence Council)
- The AI system substantially complies with the National Institute of Standards and Technology’s (NIST) AI Risk Management Framework or another similar framework
PRACTICAL CONSIDERATIONS AND TAKEAWAYS
In the roughly six months before TRAIGA goes into effect, individuals, companies, and state agencies can take several steps to evaluate whether they are covered under the Act and set forth a plan to comply:
- Consider How You Are Covered: Individuals and organizations should consider whether they are covered under the Act as an entity that develops or deploys AI systems in Texas, offers a product or service used by Texas residents, or promotes, advertises, or conducts business in Texas. Even businesses that are not technology companies must assess third-party AI tools, such as chatbots, to ensure compliance with TRAIGA.
- Assess the Need for Disclosure: Covered healthcare service providers and government agencies should assess whether their AI systems interact with consumers or patients in a way that requires conspicuous disclosure and, if so, develop and implement the appropriate disclosure language. Healthcare providers should further consider processes to ensure compliance with newly enacted provisions of Chapter 183 of the Texas Health and Safety Code, which become effective even sooner, on September 1, 2025, such as:
- Ensuring that health records are stored in the United States or its territories
- Ensuring that practitioners using AI review all AI-generated records “in a manner that is consistent with medical records standards developed by the Texas Medical Board”
- Ensuring that any AI algorithm or decision assistance tool includes an individual’s biological sex
- Test AI Systems Developed or Deployed for Texas: Entities should develop robust testing to assess whether any of their AI systems have features, or produce content, that would violate TRAIGA by encouraging any of the following (an illustrative testing sketch appears at the end of this section):
- Self-harm or criminal activity
- The infringement of Constitutional rights
- Unlawful discrimination of protected classes
- The creation or distribution of prohibited sexually explicit content
- Consider AI Compliance with NIST: Entities should evaluate their AI systems for compliance with NIST’s AI Risk Management Framework, as doing so could help entities spot trouble areas in their tools. Complying with NIST’s framework could protect against liability under the Act and support a showing of good-faith efforts to use AI tools in a way that does not violate TRAIGA.
- Consider Entering the Sandbox: Companies should consider whether to apply to the DIR sandbox program, which would allow free and full testing of new AI products without the risk that training or development efforts violate TRAIGA.
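As a companion to the testing step above, the following is a minimal, hypothetical sketch of a red-team harness that probes an AI system with prompts mapped to TRAIGA’s prohibited categories and collects flagged responses for human review. The probe prompts, the query_system() hook, and the flags_category() review function are all illustrative assumptions; real programs would use far richer prompt sets and review criteria.

```python
# Hypothetical red-team harness mapping probe prompts to TRAIGA's
# prohibited-purpose categories and collecting flagged outputs for review.
from typing import Callable

# Categories track the prohibited purposes listed above; the probe prompts
# are abbreviated placeholders an internal red team would flesh out.
PROBES: dict[str, list[str]] = {
    "self_harm_or_criminal_activity": ["<probe prompt>"],
    "constitutional_rights_infringement": ["<probe prompt>"],
    "unlawful_discrimination": ["<probe prompt>"],
    "prohibited_sexual_content": ["<probe prompt>"],
}

def run_compliance_probes(
    query_system: Callable[[str], str],
    flags_category: Callable[[str, str], bool],
) -> dict[str, list[str]]:
    """Return a map of category -> responses flagged for human review."""
    flagged: dict[str, list[str]] = {}
    for category, prompts in PROBES.items():
        for prompt in prompts:
            response = query_system(prompt)
            # flags_category() stands in for whatever classifier or review
            # rubric the entity uses to judge a response against TRAIGA.
            if flags_category(category, response):
                flagged.setdefault(category, []).append(response)
    return flagged
```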