According to TechCrunch, New York Governor Kathy Hochul has signed the RAISE Act, making New York the second U.S. state after California to enact major AI safety legislation. State lawmakers passed the bill in June, but after tech industry lobbying, Hochul initially pushed to scale it back. She ultimately signed the original version, with lawmakers agreeing to make her requested changes next year. The law will require large AI developers to publish safety protocols and report safety incidents to the state within 72 hours, with fines of up to $1 million for violations. It also creates a new office within the Department of Financial Services to monitor AI. The move comes just after President Donald Trump signed an executive order directing federal agencies to challenge such state AI laws.
The political tug-of-war
Here’s the thing: this bill’s journey is a perfect snapshot of the current AI regulation fight. The legislature passed a strong version. Then Big Tech lobbying kicked in, and the governor tried to walk it back. But in the end, the original bill got signed anyway. State Senator Andrew Gounardes, a sponsor, basically took a victory lap on X, saying, “Big Tech thought they could weasel their way into killing our bill. We shut them down.”
But the backlash is already organized and personal. A super PAC backed by Andreessen Horowitz and OpenAI’s Greg Brockman is now targeting Assemblyman Alex Bores, who co-sponsored the bill. Bores’s response? A wonderfully dry, “I appreciate how straightforward they’re being about it.” So you’ve got this very direct, state-level political warfare starting up. And it’s happening while the companies themselves, like OpenAI and Anthropic, publicly express support for the bill while calling for federal rules. It’s a classic move: sound reasonable in public, but fund the opposition behind the scenes.
The federal wildcard
All of this state action is now slamming into a new federal reality. President Trump’s executive order, reportedly shaped by his AI czar David Sacks (an a16z partner), is a direct attempt to kneecap states like New York and California. The order tells federal agencies to challenge these laws. It’s a preemption play, and it’s going to end up in court. Probably for years.
So what’s the strategy? From the states’ perspective, they’re building a “unified benchmark,” as Hochul said, because Washington is stuck. They’re creating facts on the ground. For the tech industry and its allies, the goal seems to be creating enough legal uncertainty and political cost to scare other states from following suit, while pushing everything to a theoretically more industry-friendly federal arena. But let’s be honest, a gridlocked Congress isn’t passing anything comprehensive soon. When regulations get fragmented and litigious, it creates compliance nightmares for anyone integrating these systems.
What the law actually does
Setting aside the politics, what does the RAISE Act actually *do*? The core is transparency and a new bureaucracy. Big AI developers have to disclose their safety policies. If something goes wrong—a major security breach, a model causing demonstrable harm—they have 72 hours to tell New York. Fail to report or lie about it, and you’re on the hook for those million-dollar fines. The new office within the Department of Financial Services is key, too. That’s not some general-purpose agency; it’s a financial regulator with teeth. It signals they’re looking at AI through a risk-management lens, similar to how they’d look at a bank.
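That 72-hour window is the one piece of the law concrete enough to sketch in code. Here’s a minimal illustration of the deadline math, assuming a developer timestamps when an incident is discovered; the function and variable names are hypothetical, and the actual statute defines what counts as a reportable incident:

```python
from datetime import datetime, timedelta, timezone

# Hypothetical helper for the RAISE Act's 72-hour reporting window.
# Structure is illustrative only, not drawn from the statute's text.
REPORTING_WINDOW = timedelta(hours=72)

def reporting_deadline(discovered_at: datetime) -> datetime:
    """Latest moment a report can be filed with the state."""
    return discovered_at + REPORTING_WINDOW

def is_report_timely(discovered_at: datetime, filed_at: datetime) -> bool:
    """True if the report lands inside the 72-hour window."""
    return filed_at <= reporting_deadline(discovered_at)

# Example: incident discovered Friday 17:00 UTC, report filed Monday 16:00 UTC
# (71 hours later), so it squeaks in under the deadline.
discovered = datetime(2026, 1, 2, 17, 0, tzinfo=timezone.utc)
filed = datetime(2026, 1, 5, 16, 0, tzinfo=timezone.utc)
print(is_report_timely(discovered, filed))  # True
```

The point of the sketch: 72 hours is wall-clock time, weekends included, which is exactly why a Friday-evening incident is the compliance nightmare scenario.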
Is this “the strongest AI safety law in the country,” as Gounardes claims? It’s certainly up there with California’s. But its real strength will be tested in enforcement and in whether it survives the coming federal legal challenges. For now, New York and California are building a de facto regulatory floor, one angry press release and million-dollar fine threat at a time. The fight over who gets to set the rules for AI is officially, messily, on.
