Technology

New York City embarks on creating an AI strategy

The Office of Technology and Innovation is hiring a director of artificial intelligence and machine learning.

The director of artificial intelligence and machine learning will be responsible for supporting agencies’ “productive and responsible” use of artificial intelligence and machine learning tools. KATERYNA KON/SCIENCE PHOTO LIBRARY, GETTY

New York City is getting an artificial intelligence czar.

Well, it may not be technically billed as such, but the city Office of Technology and Innovation is hiring for the newly created role of director of artificial intelligence and machine learning, whose occupant will spearhead what the office calls a new “comprehensive AI strategy.”

The role, which carries a listed salary range of $75,000 to $140,000, will involve supporting agencies’ “productive and responsible” use of artificial intelligence and machine learning tools.

If that sounds familiar, it’s because the role’s responsibilities bear a striking resemblance to those of the algorithms management and policy officer, a position created under then-Mayor Bill de Blasio in 2019 to guide the city and its agencies in the development, use and assessment of algorithmic tools, which often include AI and machine learning.

The algorithms officer role was discontinued last year when Mayor Eric Adams created the Office of Technology and Innovation, which consolidated the city’s many technology departments into one office. With that, the work of auditing the city’s use of algorithmic tools and guiding their responsible use fell to the new centralized office, and will now fall to this new director.

But an executive at the city’s tech office said the new position will expand beyond that, and “think about issues related to AI and machine learning more broadly,” including new use cases. Alex Foard, who was the algorithms management and policy officer under de Blasio and is now the technology office’s executive director for research and collaboration, suggested that the new role will be more empowered to take action than the position he previously held. Foard attributed that to the organization of the Office of Technology and Innovation, which he said has greater authority for oversight of how city agencies are using technology.

Asked for evidence that the Office of Technology and Innovation is a more empowered authority because of its centralized position, a spokesperson pointed to several initiatives the office launched in its first year, including a “Cyber Academy” to train and certify the city’s workforce to manage cybersecurity incidents in their own agencies, and Big Apple Connect, a program that commits to subsidizing free internet and basic cable subscriptions for 300,000 public housing residents by the end of this year. Big Apple Connect has become the administration’s main focus for expanding internet access since it scrapped the de Blasio-era Internet Master Plan.

Though details on the city’s AI strategy are still scarce, the Adams administration isn’t shying away from using these tools. “We certainly want to encourage agencies to consider places where AI could help be a solution and could help them do their work and serve the public better,” Foard said, when asked whether this strategy would involve an expanded use of AI tools by the city. But Foard said the other side of that coin was acknowledging the challenges and risks that sometimes come with AI solutions and ensuring that guardrails are in place.

Automated tools and algorithms can be biased in their design or in the way they’re used, and end up producing flawed results. New York City passed a law in 2021 requiring any employers who want to use algorithm- and AI-based decision-making tools in their hiring process to submit the tool to a bias audit before doing so. But the rule-making process is ongoing, and implementation of the law has been delayed until April.

Concerns about biases and privacy protections in tools that agencies rely on to make decisions prompted the city to create an automated decision systems task force that preceded the creation of the algorithms officer role. City agencies are still required by law to report on the algorithmic tools they use.

Previous audits showed that it isn’t just the more widely publicized police use of facial recognition or gunshot detection technology that falls under that umbrella. It also covers the DNA analysis program the chief medical examiner’s office uses on crime scene evidence and the Department of Education algorithm that matches applicants to schools.

While the algorithms management and policy officer was tasked with establishing governing principles for how agencies can benefit from innovative tools while still using them ethically, Foard said that most of the early work while he was in the position was focused on creating transparency around the tools already in use. Creating governing principles for the responsible use of AI will be part of the new director’s role too, but the city’s technology office isn’t talking about what those standards would be yet.