Technology
New York City will try (again) to regulate AI
The council passed a package of bills creating a new ‘office of algorithmic accountability.’ It’s not the first attempt to rein in AI use.

New York City Council Technology Committee Chair Jennifer Gutiérrez carried two of the bills in the chamber’s artificial intelligence regulation package. John McCarten/NYC Council Media Unit
Maybe this time they’ll be lucky.
A package of bills requiring reporting and some regulation of artificial intelligence passed unanimously in the New York City Council Tuesday, marking at least the third swing in the past six years to create a regulatory framework for how city government uses AI and other algorithmic tools.
The new bills, collectively called the Guaranteeing Unbiased AI Regulation and Disclosure, or GUARD, Act, would establish a new office of algorithmic accountability. The office would assess AI and other algorithmic tools used by city agencies, create rules for how agencies procure and deploy AI tools, and publish a list of the tools it reviews. It would also work with agencies to deploy new systems, analyze potential risks, investigate any relevant discrimination claims and engage the public in the process.
Two of the three bills are sponsored by Council Member Jennifer Gutiérrez, who chairs the council’s Committee on Technology. The third is sponsored by Council Member Julie Menin.
Privacy advocates have long had concerns about the use of AI and algorithmic tools in government. The city’s Administration for Children’s Services, for example, has come under scrutiny for its use of an algorithmic tool – which incorporates variables correlated with socioeconomic status, like neighborhood and family size – to determine which children could be at higher risk of violence.
This is far from the city’s first attempt at creating guardrails around AI. But the outcomes so far have fallen far short of a coherent or enforceable framework.
Under Mayor Bill de Blasio, the city launched a task force in 2018 to study automated decision systems. It ended up being a slow-moving body, taking almost a year to hold its first public meeting and 18 months to release a report. Ultimately, at the task force’s suggestion, de Blasio created an algorithms management and policy officer position – a role that was cut in 2022 under Mayor Eric Adams, with its responsibilities absorbed into the newly established Office of Technology and Innovation, or OTI. The City Council later passed a law requiring each agency to submit an annual report listing every automated decision system it used at least once in the prior year.
Adams has sought to make his own mark in creating an AI framework for the city. A year into his administration, Adams and OTI hired a director of artificial intelligence and machine learning. Later in 2023, the administration published an “AI Action Plan” that set out 37 goals around creating guidelines and reporting for how the city uses AI. But progress on the action plan – which came with target deadlines – has been muddy. A spokesperson for OTI said the agency has “addressed” 35 of the plan’s 37 goals, but it’s unclear which have merely been started and which are fully complete. The city last published an annual report on its progress in October 2024 and said that the next report will be published in the coming weeks. (Adams has just over a month left in office.)
Gutiérrez has suggested the new bills will bring a stronger enforcement mechanism to what the AI Action Plan does only through guidance. In a statement, Gutiérrez called the Adams administration’s plan “words on a page.”
“You cannot govern technology with wishful thinking. You need people,” the lawmaker said in a separate statement. “A fully staffed AI accountability team is what will make these standards real – reviewing systems before they are deployed, supporting agencies that are struggling to keep up, and stepping in when tools harm New Yorkers. Without real staff, oversight becomes a slogan. With them, it becomes a system.”
But in an October City Council meeting on the bills, some advocates called for the legislation to go even further. “Despite the strong foundation, they would benefit from targeted amendments to become generally effective,” said Darío Maestro, a senior legal fellow at the Surveillance Technology Oversight Project. For example, Maestro said, the legislation doesn’t specify what the standards for use should be or lay out specific audit criteria.
John Paul Farmer, the former chief technology officer under de Blasio, told City & State Tuesday that he’ll be watching how efficiently the new office of algorithmic accountability can work. “Is the Office of Algorithmic Accountability empowered by City Hall, is it integrated into key decision making and approval processes, is the review process speedy enough to meet agencies’ operational needs?” Farmer said in an emailed statement. “Is the usage approval process flexible enough to allow for sandboxing and testing – especially for low-risk use cases?”
Gutiérrez’s office is drafting a transition memo for Mayor-elect Zohran Mamdani, hoping to prevent any stalling in the buildout of the algorithmic accountability office.
A spokesperson for Adams did not respond to a request for comment on whether the mayor would sign the bills into law. (His veto pen has been active lately.) Gutiérrez’s office did not indicate that they expected a veto.
– Additional reporting by Annie McDonough
