White House Issues Executive Order to Establish Uniform National AI Standards
December 12, 2025
The White House’s December 2025 executive order marks a major shift, seeking to preempt state artificial intelligence (AI) laws and move toward a unified national AI policy framework, with broad implications for technology companies, state governments, and regulated industries. This LawFlash examines the key provisions, background, and practical impact of the order for businesses and in-house counsel navigating the rapidly evolving AI regulatory landscape.
SUMMARY
- The December 2025 executive order aims to establish a national standard for AI policy, limiting state-level regulatory efforts.
- Federal agencies are directed to evaluate state laws that conflict with national AI policy, restrict certain federal funding to states that maintain them, and pursue preemption, with certain exceptions.
- The order creates a new AI Litigation Task Force and requires federal reporting and disclosure standards within 90 days.
- The asserted federal preemption authority is grounded in the Federal Trade Commission’s (FTC’s) prohibition on unfair or deceptive acts or practices under 15 USC § 45.
- The order instructs the Federal Communications Commission (FCC) to determine whether it will adopt a “reporting and disclosure standard for AI models” that would preempt conflicting state laws.
- Businesses developing, deploying, or using AI should anticipate evolving federal oversight and potential changes to state law compliance obligations.
BACKGROUND
On December 11, 2025, President Donald Trump issued Ensuring a National Policy Framework for Artificial Intelligence, an order seeking to position the United States as the global leader in AI innovation and competitiveness. The US administration emphasized that “United States AI companies must be free to innovate without cumbersome regulation,” identifying state-level regulation as a barrier to national economic and security interests.
The order builds on prior federal efforts to coordinate AI policy and reflects growing concern about regulatory fragmentation across states, which could impede the development and deployment of AI technologies.
The order also dovetails with the release of the US Department of Health and Human Services’ (HHS) artificial intelligence strategy memorandum issued on December 4, 2025, which identified five pillars for implementation of AI across HHS.[1]
CORE PROVISIONS AND REGULATORY SHIFTS
The executive order sets forth several significant directives:
- National AI Policy Priority: The order establishes that “it is the policy of the United States to sustain and enhance the United States’ global AI dominance through” supportive federal action. The administration’s stated goal is to “win” by promoting innovation and minimizing regulatory burden on AI companies.
- AI Litigation Task Force: Within 30 days, the US Attorney General must create an AI Litigation Task Force to oversee legal challenges and coordinate federal responses to state-level AI laws and regulations that may conflict with national policy priorities.
- Evaluation of State AI Laws: Within 90 days, the US Secretary of Commerce must identify state laws that “obstruct, hinder, or otherwise contravene” the national AI policy. This evaluation serves as a foundation for further federal action to limit the impact of state regulation on AI development and deployment.
- Restrictions on Federal Funding to States: The order directs the Secretary of Commerce, acting through the Assistant Secretary for Technology Policy, to restrict certain federal funding to states with laws that obstruct national AI policy. Federal agencies must also assess discretionary grant programs to ensure alignment with the order’s objectives.
- Federal Reporting and Disclosure Standards: Within 90 days of identifying obstructive state laws, the Chairman of the FTC, in consultation with the Attorney General and Secretary of Commerce, must develop new federal standards for AI reporting and disclosure.
- Preemption of State Laws Mandating Deceptive Conduct in AI Models: The FTC, in consultation with the Attorney General and Secretary of Commerce, is instructed to preempt state laws that require “deceptive conduct” in AI models, reinforcing the federal government’s intent to prevent inconsistent or conflicting regulatory requirements.
- Legislative Recommendations: The order calls for legislative proposals to further national AI standards but explicitly carves out exceptions where state law will not be preempted, including “child safety protections; AI compute and data center infrastructure, other than generally applicable permitting reforms; state government procurement and use of AI; and other topics as shall be determined.”
- General Provisions and Limitations: The order clarifies that nothing in its text impairs existing legal authorities of federal agencies or officials, nor does it create any right or benefit enforceable by law.
IMPLICATIONS AND RECOMMENDATIONS
The executive order’s emphasis on a unified, minimally burdensome national standard for AI regulation presents both opportunities and uncertainties for businesses, in-house counsel, and state governments.
- Federal Preemption and Compliance Uncertainty: Businesses operating in multiple states may see reduced compliance burdens as federal agencies move to preempt conflicting or onerous state laws. However, the timeline for evaluation and preemption, as well as the scope of federal standards, remains subject to further administrative and legislative action. At present, there are more than 1,200 state-level AI bills or laws, as well as interpretive guidance from state agencies, all of which will complicate any comprehensive federal review.
- State Law Carve-Outs: The order expressly preserves state authority in areas such as child safety, data center infrastructure permitting, and procurement, meaning companies in those sectors will need to continue monitoring both federal and state requirements.
- Federal Funding Risks: States with laws deemed obstructive to national AI policy face potential restrictions on federal grants and discretionary funding, which may incentivize legislative changes at the state level.
- AI Litigation and Enforcement: The creation of an AI Litigation Task Force signals increased federal coordination and potential intervention in legal disputes over state-level regulation. Companies facing litigation or compliance challenges should monitor guidance and enforcement actions from the task force and FTC.
- Reporting and Disclosure Changes: The forthcoming federal standards for AI reporting and disclosure may require updates to internal compliance programs, particularly for companies subject to FTC jurisdiction or involved in the deployment of consumer-facing AI models.
- Strategic Policy Engagement: Businesses should track ongoing legislative recommendations and participate in policy discussions to ensure that emerging federal standards reflect industry realities and promote innovation.
- Likelihood of Legal Challenges:
  - The order is likely to face legal challenges from states asserting that it exceeds executive authority, improperly conditions federal funding, or unlawfully preempts areas traditionally regulated by the states.
  - Until the US Congress passes a national standard or federal law, the Tenth Amendment reserves such powers to the states. The order seeks to circumvent this limitation by looking to expand the power of agencies such as the FTC and FCC under existing statutes, but it remains to be seen how effective that push will be, particularly in light of the recent elimination of Chevron deference in Loper Bright.
  - Legal challenges will likely delay implementation, result in injunctions affecting particular provisions, or produce divergent outcomes across federal courts. As a result, the durability and timing of federal preemption under the order remain uncertain.
WHAT THIS MEANS FOR YOU
For businesses, including AI frontier model companies, hyperscalers, and enterprise users of AI, the executive order adds a new layer of complexity to an already evolving AI regulatory landscape. While it signals a broader shift by the US administration toward a more unified and predictable regulatory environment, the prospect of litigation and the continued absence of comprehensive federal regulation will, in the short term, increase uncertainty. That uncertainty may limit any immediate relief from the compliance burdens that stem from divergent state AI laws.
Companies deploying or developing advanced AI systems should actively monitor the federal government’s identification of purportedly obstructive state laws and the rollout of new reporting and disclosure standards, as these may require updates to compliance programs and risk management frameworks. They should also monitor actions by state attorneys general challenging any preemption effort, as the resulting litigation may leave the legal landscape unsettled for an extended period.
Hyperscalers and infrastructure providers should note the exceptions for data center infrastructure permitting and remain attentive to state-level requirements in those areas, as federal preemption will not apply universally. Users of AI, including enterprise legal departments, should anticipate increased federal oversight and be prepared to respond to new standards impacting procurement, deployment, and risk disclosure for AI systems.
Finally, the executive order’s attempt to preempt state law, including through conditions placed on congressionally authorized grants, is likely to prompt independent legal challenges from state attorneys general as well as affected advocacy groups. Clients will need to decide whether to participate directly in such litigation. At minimum, they should be prepared for a prolonged period in which federal policy, state enforcement, and judicial interpretation evolve in parallel, requiring ongoing monitoring and agile compliance strategies.
In-house legal departments should consider taking the following steps:
- Closely track federal agency actions and guidance related to the executive order, especially the identification of obstructive state laws and the development of new reporting and disclosure standards.
- Review and update internal compliance programs to ensure alignment with emerging federal requirements and prepare for changes in state law applicability.
- Assess risk exposure for existing and planned AI deployments, including litigation risks that may arise under evolving federal and state regimes.
- Engage with public policy teams and participate in industry consultations to advocate for practical standards that reflect business realities.
- Communicate with state-level stakeholders to remain informed of exceptions and carve-outs that may require continued compliance with specific state regulations.
- Monitor federal and state litigation challenges to the order, including requests for injunctive relief, as court decisions may affect the enforceability, timing, or scope of federal preemption and funding conditions.
CONCLUSION
The December 2025 order represents a deliberate step by the administration toward establishing a more coherent and harmonized federal approach to AI oversight, even though comprehensive federal legislation is still absent. At the same time, the order may raise more questions than it resolves. While many details remain to be determined through administrative rulemaking and legislative action, the order sets priorities for innovation, competitive leadership, and regulatory efficiency in AI. Companies and counsel should closely monitor federal developments, track legal challenges, and evaluate existing compliance strategies in light of the evolving policy landscape.
HOW WE CAN HELP
Our AI team stands ready to assist organizations in developing policies and protocols, understanding how federal and state laws apply to their businesses, protecting data, and conducting internal inquiries into the use of AI within their organizations.
Stay Informed
Visit our US Administration Policies and Priorities resource center and subscribe to our mailing list for the latest on programming, guidance, and current legal and business developments.
Morgan Lewis Manager of Government Affairs David B. Mendelsohn contributed to this LawFlash.
Contacts
If you have any questions or would like more information on the issues discussed in this LawFlash, please contact any of the following:
[1] Artificial Intelligence (AI) Strategy, US Department of Health and Human Services (Dec. 4, 2025).