Artificial intelligence is evolving faster than any technology in human history. It’s driving groundbreaking scientific advances, developing life-changing medicines, unlocking new creative pathways and automating mundane tasks.
In the wrong hands, it also poses existential risks to humanity.
This isn’t hyperbole or the stuff of science fiction. AI developers, leading scientists and international bodies have all warned of an imminent future where advanced AI could be used to conduct devastating cyberattacks, aid in the production of bioweapons, or inflict severe financial harm on consumers and companies.
American AI models have been used for citizen surveillance in China, in scams originating in Cambodia and as part of a “global cybercrime network.” OpenAI found that its latest model “can help experts with the operational planning of reproducing a known biological threat” and is “on the cusp” of being able to help novices. A recent International AI Safety Report identified an AI model capable of producing plans for biological weapons that were “rated superior to plans generated by experts with a PhD 72% of the time” and that included “details that expert evaluators could not find online.”
We’re only a few years away from a time when AI models will code themselves; already, over 25% of Google’s new code is written by AI. In a lab experiment, the firm Apollo Research found that AI models told to pursue a goal at all costs would try to make copies of themselves on new servers and lie to humans about their actions if they thought they would be shut down.
Increasingly, calls for regulation are coming from within the tech industry itself. In March 2023, over 1,000 tech leaders from across the political spectrum signed a letter calling for a temporary pause in AI advancement and warned that developers are “locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict or reliably control.”
That was two years ago. More recently, leading AI company Anthropic warned that “the window for proactive risk prevention is closing fast” and called on governments to implement AI regulation by April 2026 at the latest. The company also warned that the federal legislative process might not be “fast enough to address risks on the timescale about which we're concerned” and that “urgency may demand it is instead developed by individual states.”
Our laws haven’t kept up with this rapidly developing technology. In the absence of federal action, it’s up to states like New York to urgently implement smart, responsible safeguards to keep our communities safe and ensure the burgeoning AI industry amplifies the best of humanity, rather than its worst.
That’s why we’ve introduced the Responsible AI Safety and Education Act, or RAISE Act, which puts four simple responsibilities on the companies developing advanced AI models:
- Have a safety plan.
- Have that plan audited by a third party.
- Disclose critical safety incidents.
- Protect employees or contractors who flag risks.
These safeguards are clear, simple and commonsense. In fact, the RAISE Act codifies what some responsible AI companies have already promised to do. By writing these protections into law, we ensure no company has an economic incentive to cut corners or put profits over safety, as some are already starting to do. Our bill only applies to the largest AI companies that spend hundreds of millions of dollars annually developing the most advanced systems. It imposes no burden on any academic or startup. It also doesn’t attempt to be a catch-all for every potential issue raised by AI. Instead, it focuses on the most urgent, severe risks that could cause over $1 billion in damage or hundreds of deaths or injuries.
Smart AI legislation should be designed to safeguard us from those risks while allowing beneficial uses of AI to flourish. That’s why the RAISE Act takes a flexible approach to governing a rapidly changing industry. Our bill doesn’t create hyper-specific rules for research or establish a new regulatory entity. Instead, it holds companies to their own commitments, creates transparency around how AI companies are managing severe risks and protects whistleblowers who sound the alarm about dangerous development. Our bill also ensures smaller AI startups can continue to compete in the marketplace by requiring the biggest companies to play by the rules.
With commonsense safeguards, we can ensure a thriving, competitive AI industry that meets New Yorkers’ needs instead of putting our safety at risk. The RAISE Act is a key step into the future we all want and deserve.
Alex Bores is an Assembly member representing Assembly District 73 in Manhattan. Andrew Gounardes is a state senator representing the 26th Senate District in Brooklyn.