Policy

Hochul signs watered-down AI regs, but lawmakers still got some wins

After Gov. Kathy Hochul proposed completely rewriting the bill with exact wording from a weaker California law, legislators negotiated back in measures that went beyond the West Coast version.


Gov. Kathy Hochul speaks at a press conference in Albany on Dec. 4, 2025. Susan Watts/Office of Governor Kathy Hochul

New regulations for AI developers are on their way to New York after Gov. Kathy Hochul agreed to sign the Responsible AI Safety and Education Act, or RAISE Act. But she succeeded in getting legislators to agree to a number of substantial changes that essentially rewrite the measure using a California law as the base, with just a handful of changes to strengthen it.

Hochul opened negotiations on the bill with a proposal nearly identical to California's SB 53, which became law in that state earlier this year after significant changes made at the behest of wealthy and powerful AI developers. The governor's proposed chapter amendments represented a complete rewrite of the RAISE Act sponsored by state Sen. Andrew Gounardes and Assembly Member Alex Bores and passed by lawmakers earlier this year. Initially, the two sides appeared to be at a stalemate, with the governor treating her proposal as a best offer and legislators unwilling to accept such significant weakening.

Ultimately, legislative leaders were able to strengthen what Hochul proposed in the version signed on Friday, even though the law will be substantially rewritten next year, largely to align with California's language. “The federal government doesn’t care about protecting you from the harms of AI. I do,” Hochul wrote on X. “New York is setting the national standard for strong, sensible AI regulation.”

One significant reversal means that AI companies will have 72 hours to report a critical safety incident to the government, including when they reasonably believe one has occurred. The California law and Hochul’s proposal gave companies 15 days and required reporting only when an incident had definitively taken place. The legislative version of the RAISE Act shared the 72-hour timeframe but went a little further, requiring companies to report when they believe a threat is imminent.

The agreed-upon changes also strengthen requirements for developers when creating their AI safety plans. Those plans must explain how the developer will handle – rather than merely approach – various risks, and the developer will need to describe those measures in “detail” (an added word). And Gounardes pointed to the creation of a brand-new office to oversee AI in the Department of Finance that has broader authority than its California counterpart. “I think it's credible to say that, in the face of tech and venture capital trying to literally write their own AI safety laws for us, that we fought back, and we have passed now the strongest law in the country,” he told City & State.

But not everything the sponsors wanted made it into the final compromise; some major components ended up on the chopping block. One would have used computational cost, rather than revenue, to determine which companies are subject to the law. Many high-tech AI developers attract millions or billions in venture capital to build advanced machine learning models but have little or no revenue. The threshold also would have covered AI companies in other countries, like China, that generally operate at lower revenue levels.

Another component excluded from the deal would have barred companies from releasing unsafe models; the agreed-upon language only requires that developers provide a warning as part of their safety plan. But despite the exclusion of key aspects of the original RAISE Act, the sponsors still treated the governor’s signature as a victory. “In effect, we moved it beyond SB 53 and proved that SB 53 is not the ceiling on AI safety, as some in industry were trying to claim it was, but merely a first step,” Bores said.