New York state lawmakers on June 12, 2025 passed the Responsible AI Safety and Education Act (the RAISE Act), which aims to safeguard against artificial intelligence (AI)-driven disaster scenarios by focusing on the largest AI model developers; the bill now heads to the governor’s desk for final approval. The RAISE Act is the latest state-level legislative effort to regulate AI, a movement that may continue to gain momentum after a proposed 10-year moratorium on state AI regulation was removed from the One Big Beautiful Bill before its passage.
Background and Core Provisions
Inspired by California’s SB 1047, which California Governor Gavin Newsom vetoed in September 2024 over concerns that it could stifle innovation, the RAISE Act aims to prevent so-called “frontier AI models” from contributing to “critical harm.” For purposes of the RAISE Act, “critical harm” means the death or injury of more than 100 people, or more than $1 billion in damages to rights in money or property, caused or materially enabled by a large developer’s creation, use, storage, or release of a frontier model through either (1) the creation or use of a chemical, biological, radiological, or nuclear weapon or (2) an AI model engaging in conduct that is both (a) done with limited human intervention and (b) such that, if committed by a human, it would constitute a crime specified in the penal law requiring intent, recklessness, or gross negligence, or the solicitation or aiding and abetting of such a crime.
Unlike SB 1047, which faced criticism for casting too wide a net over general AI systems, the RAISE Act targets only “frontier” models developed by companies that meet both of the following criteria: (1) the model was trained using more than $100 million in computing resources, or using more than $5 million in computing resources where a smaller AI model was trained on a larger AI model and has capabilities similar to that larger model; and (2) the model is made available to New York residents. To the extent the RAISE Act aligns with similar state-level regulations and restrictions, this approach would theoretically leave room for innovation by entities, such as startup companies and research organizations, that are less likely to cause such critical harm.
If a company meets both criteria and is therefore subject to the RAISE Act, it must comply with all of the following before deploying any frontier AI model:
- Implement a written safety and security protocol
- Retain an unredacted version of such safety and security protocol for as long as the frontier model is deployed, plus five years
- Conspicuously publish a copy of the safety and security protocol and transmit such protocol to the division of homeland security and emergency services
- Record information on specific tests and test results used in any assessment of the frontier AI model
From a practical perspective, requirements such as recording information on frontier AI model testing may push smaller startups and research organizations out of the market to the extent that maintaining such records imposes additional, costly overhead.
Enforcement and Exceptions
The RAISE Act empowers the New York attorney general to levy civil penalties of up to $10 million for initial violations and up to $30 million for subsequent violations by noncompliant covered companies. This includes penalties for violations of a developer’s transparency obligations as specified above or as required elsewhere in the RAISE Act, such as the requirement that covered companies retain an independent auditor annually to review compliance with the law. However, covered companies may make “appropriate redactions” to their safety protocols when necessary to protect public safety, safeguard trade secrets, maintain confidential information as required by law, or protect employee or customer privacy.
Looking Ahead
The bill’s fate remains uncertain until the governor signs or vetoes it. Our team is monitoring developments closely, including potential impacts on commercial contracting, compliance obligations, and technology adoption.