
State comptroller: New York City agencies are using AI without guardrails

An audit by state Comptroller Tom DiNapoli found that New York City lacks “an effective AI governance framework.”

AI tools are often used to help sort through massive amounts of data, but they can have biases and flaws in accuracy – leading to calls for meticulous oversight. (Photo: Andriy Onufriyenko)

New York City agencies are using artificial intelligence tools, but without standard rules in place for how to use them responsibly, according to a new audit by state Comptroller Tom DiNapoli.

“The use of algorithmic tools, such as AI, to support agency decision making comes with significant risks, and the lack of a governance structure over these systems amplifies these risks,” the audit said.

AI tools are often used to help sort through massive amounts of data – something government agencies like police departments and school systems need help with. But the tools can have biases and flaws in accuracy – leading to calls for meticulous oversight.

“NYC does not have an effective AI governance framework,” the audit said, concluding that each city agency has ended up with its own methods for using AI. “These ad hoc and incomplete approaches to AI governance do not ensure that the City’s use of AI is transparent, accurate, and unbiased and avoids disparate impacts.”

Albert Fox Cahn, executive director of the Surveillance Technology Oversight Project, agreed that the city has yet to address the threats posed by AI tools. “New York City’s AI has been discriminating against communities of color for years, putting BIPOC New Yorkers at heightened risk of wrongful arrest and the loss of parental rights,” Cahn wrote in a text.

Incomplete public reporting

City agencies are required to publicly report some of the AI tools they use, but tools that are not yet “in production” – such as those in pilot phases – can evade disclosure, the audit found.

It also identified a risk that some tools could simply go unreported. The city Department of Education, for example, did not report its use of Teach to One 360, a tool that uses artificial intelligence to create personalized lesson plans for students. In another instance, the Department of Probation reported algorithmic tools to the city in 2020 that the city did not disclose in its public report. According to the audit, the city advised that those tools didn’t need to be disclosed. In total, agencies reported 23 tools to the city in 2020, but only 16 were included in the final public report.

The audit from DiNapoli’s office is the most in-depth look at the city’s attempts to regulate the use of artificial intelligence – and other algorithmic tools – since former Mayor Bill de Blasio’s Executive Order 50 in 2019. The executive order created an “algorithms management and policy officer” and required the development of policies for the ethical use of algorithmic tools. In the 25 months that Executive Order 50 was in effect, the city created a reporting framework for the use of algorithmic tools, but it did not develop policies governing the “fair and responsible use” of those tools or a mechanism to handle complaints.

Mayor Eric Adams discontinued the algorithms management and policy officer position last year when he created a new centralized technology office.

“While much of this audit focused on the work of the prior administration and a different government structure, this administration's recent consolidation of technology agencies and entities under the (new Office of Technology and Innovation) umbrella puts the city in a strong position to approach AI in a more centralized, coordinated way,” a City Hall spokesperson wrote in an email on Thursday.

No roadmap for oversight

DiNapoli’s audit surveyed the governance and use of artificial intelligence at four city agencies – the Administration for Children’s Services, the New York City Police Department, the Department of Education and the Department of Buildings. The audit found that none of those agencies have formal policies for tracking the intended use or outcomes of their tools. Only the Administration for Children’s Services maintained detailed records of its AI tools.

Use of AI by city contractors is a gray area. For example, while the Department of Buildings said that the agency itself doesn’t use AI tools, it allows their use by approved facade inspectors. “In response to our findings, DOB indicated it did not believe it is responsible for overseeing the use of AI by facade inspectors,” the audit read. “As a result, DOB itself does not know or maintain documentation for AI tools that facade inspectors may use.”

The NYPD, whose use of facial recognition technology has been the subject of debate and opposition from police reform advocates, reported that its facial recognition tool was evaluated by the National Institute of Standards and Technology. But the audit said there was no evidence the NYPD had reviewed NIST’s evaluation or determined what level of accuracy it would find acceptable. The NYPD has long said that facial recognition is used to assist in investigations as a complement to human analysis.

Despite the lack of formal oversight found in the audit, the comptroller’s office said the Administration for Children’s Services showed evidence that it takes steps to address possible bias in its AI tools. Those tools include a predictive model that decides which of the agency’s investigations get an extra review meant to catch and address shortcomings in its practices; the model prioritizes cases involving children most likely to experience severe harm in the future.

Though Adams discontinued the position of the algorithms management and policy officer, the City Council passed a law last year that largely continues the reporting requirements for algorithmic tools. DiNapoli’s report comes as the Adams administration talks about creating an AI strategy. City & State reported last month that the city is hiring for the newly created role of director of artificial intelligence and machine learning. The position, under the Office of Technology and Innovation, will spearhead a comprehensive AI strategy, which officials said would include the creation of governing principles. As of Thursday afternoon, that job notice was still posted on the city’s careers website.