How governments are approaching the regulation and use of AI

While the new tech may seem like science fiction, it’s already being employed by local and state governments in New York and elsewhere.

Governments can use AI to improve efficiency and solve problems, but there are many risks in deploying the technology.

Since OpenAI launched its chatbot in November 2022, artificial intelligence has seemed to shift from a far-fetched sci-fi premise – think HAL 9000 in “2001: A Space Odyssey” or the childlike android in “A.I. Artificial Intelligence” – to an inescapable, everyday reality. ChatGPT and other AI-powered tools have sprouted up overnight, spreading to search engines, social media platforms, virtual assistants and beyond.

The advent of the AI age holds plenty of promise, even as it threatens to disrupt entire professions, undermine and exploit the creative work of artists and writers, and unleash a flood of deceptive audio, videos and other imagery. Government officials are scrambling to draft legislation and regulations to rein in the risks, balanced against AI’s transformative potential to make work more efficient and improve our way of life.

Of course, some AI tools have been in use for years – and not just by tech startups, but just as often in the public sector. State and local governments have already been employing AI to complete a range of tasks more efficiently and effectively: sifting through vast amounts of data, automating bureaucratic processes, and identifying and responding more quickly to public safety threats.

Now, as the underlying technology continues to improve and its applications multiply, the question isn’t just how exactly governments will regulate artificial intelligence – it’s also how exactly governments will deploy it.

Governmental bodies use AI to help prevent cyberattacks and detect public health threats. AI is deployed by law enforcement to identify potential offenders and by local and state agencies as a screening and predictive tool – although there’s sharp debate about whether such efforts do more harm than good. And even as they work out their kinks, chatbots are being used to serve constituents, companies and other stakeholders that interact directly with government offices.

“New York City government is using AI in a variety of ways to benefit New Yorkers today,” Matthew Fraser, New York City’s chief technology officer, told City & State ahead of the rollout of the city’s new Artificial Intelligence Action Plan. “These efforts range from leveraging AI to prevent future public health outbreaks to empowering students in the classroom to maintaining our city’s cyber resilience. As an example, New York City Cyber Command tracks more than 90 billion cybersecurity incidents every week, and utilizes AI-based technology, among other tools, to filter the number down to 50 incidents for humans to investigate.”
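
Fraser’s figures don’t come with technical detail, but the pattern he describes – scoring a torrent of security events and surfacing only the most suspicious handful for analysts – is standard anomaly-detection triage. A minimal sketch, with invented features and an off-the-shelf detector standing in for whatever Cyber Command actually runs:

```python
# Illustrative triage sketch: score a large stream of security events
# with an unsupervised anomaly detector and keep only the worst N for
# human review. Features and model choice are assumptions, not NYC's.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(seed=0)

# Stand-in features per event (e.g., bytes moved, failed logins,
# requests per minute); real pipelines use far richer signals.
normal_events = rng.normal(0.0, 1.0, size=(100_000, 3))
odd_events = rng.normal(6.0, 1.0, size=(20, 3))
events = np.vstack([normal_events, odd_events])

detector = IsolationForest(n_estimators=100, random_state=0).fit(events)
scores = detector.score_samples(events)  # lower score = more anomalous

N = 50  # the size of the human review queue
review_queue = np.argsort(scores)[:N]
print(f"{len(events):,} events filtered to {len(review_queue)} for analysts")
```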

When New York City Mayor Eric Adams released his AI action plan, one of the buzziest details was a new chatbot geared toward assisting small-business owners as they navigate the city’s codes, permitting process and other regulations. “We want to think more innovatively about challenges and using the right technology, the right way, and be responsible when we do it,” Adams said at the October press conference announcing the plan. “You can use or abuse anything, and if we stay away from moving forward because we’re afraid someone’s going to abuse it, you won’t get anything done – and this administration is not about sitting around, we’re about getting things done.”

While Adams billed the plan as the first of its kind for a major U.S. city, many other municipalities are ahead of the curve. In Florida, the city of Altamonte Springs has an entire team devoted to working on AI solutions. First up is expediting site plan reviews for development and landscaping plans, in which a developer’s submission is checked for compliance with code requirements. By using the software, the city has streamlined a process that typically took weeks – but the AI-powered, automated site plan review doesn’t replace city employees, City Manager Frank Martz said.

“The system will produce comments rather than our staff having to sit down and do the rudimentary site plan review,” Martz said. “The beauty of artificial intelligence is that the machine learning is iterative. Every time we run the software, it becomes smarter – it learns to review things based on our code and applies our code.” The software is the first adaptive learning program used by Altamonte Springs, but Martz said the city is open to similar tools and is aiming to implement AI for pedestrian safety and intersection operations. Additionally, the city is exploring the possibility of using AI for first-responder dispatching and embracing related technologies on a trial-and-error basis.
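
The article doesn’t name the Altamonte Springs software or describe its internals, but the rule-checking half of an automated plan review is easy to picture. The sketch below is purely illustrative – the class, thresholds and comment text are all hypothetical, not the city’s actual system:

```python
# Hypothetical sketch of automated site plan review: run a submission
# through code-requirement checks and emit the comments a reviewer
# would otherwise write by hand. All rules and numbers are invented.
from dataclasses import dataclass

@dataclass
class SitePlan:
    lot_area_sqft: float
    building_footprint_sqft: float
    parking_spaces: int
    required_parking: int

def review(plan: SitePlan) -> list[str]:
    """Return reviewer-style comments for every requirement the plan misses."""
    comments = []
    # Invented requirement: a building may cover at most 40% of its lot.
    coverage = plan.building_footprint_sqft / plan.lot_area_sqft
    if coverage > 0.40:
        comments.append(f"Lot coverage {coverage:.0%} exceeds the 40% maximum.")
    if plan.parking_spaces < plan.required_parking:
        comments.append(
            f"Plan provides {plan.parking_spaces} parking spaces; "
            f"{plan.required_parking} are required."
        )
    return comments

# A submission that fails both checks:
print(review(SitePlan(10_000, 4_500, 18, 20)))
```

Martz’s point about iterative learning is the other half: a production system would refine checks like these from reviewers’ corrections over time.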

“Local governments need to be brave enough to try something new,” Martz said. “I’m asked often what key performance indicators we look to before we start innovation pilots with partners, and I always say that the first thing I ask myself and our leaders here is: Do we have the courage to see this through?”

One place where AI has been a more fraught topic is schools, where students could use it to plagiarize. Yet over the past year, New York City’s public schools have come around to AI’s benefits. In January, the New York City Department of Education banned ChatGPT on department devices. Four months later, the department reversed course. At the time, New York City schools Chancellor David Banks wrote in a Chalkbeat New York op-ed that the decision had been too hasty and overlooked the potential of generative AI to benefit students and teachers.

At a New York City Council hearing in September, city Department of Education officials and some elected officials argued that excluding AI entirely would close professional and educational doors for students in the future.

“New York City Public Schools is aspiring to be a global leader in embracing AI and expanding existing programs that build computer science and digital fluency skills as essential concepts layered across our core curricula and subject areas,” testified Melanie Mac, senior executive director of the Department of Education’s Office of Student Pathways. “To advance digital equity for all learners, we’re embracing AI as an important lever for us to continue dismantling inequitable systems, cultures, policies, mindsets and behaviors.”

As of August, 1 in 10 teachers reported using AI in the classroom in the past school year. New York City schools partnered with Microsoft to pilot an AI-powered teaching assistant that can answer students’ questions. According to Department of Education officials, local school officials are allowed to purchase and deploy the AI tools they feel are needed, so long as those tools clear the same approval process that other technology is subject to.

During the City Council hearing, Mac explained that New York City schools had begun aligning their AI objectives around positive impact on students. The city plans to rely on experts and wants to create an AI policy lab to chart the best path forward on the school district’s use of AI.

“Our collective AI capacity-building efforts will engage both our internal and our external partners alike, including our district and school leaders, our educators, our school staff, students, families and community members,” Mac said.

There is still some trepidation from constituents and lawmakers alike that revolves around the technology’s potential for plagiarism and bias. Some council members raised concerns that AI integration may be happening too fast to allow for effective safeguards.

“I think that the reality is if we aren’t teaching our students and our educators about AI and generative AI, they’re going to learn it and be exposed to it elsewhere,” New York City Schools Director of Digital Learning Initiatives Tara Carrozza said. “Or they’ll be locked out of opportunities because they’ve not built the digital fluency … or they haven’t learned the ethical considerations about AI. So there is an urgency.”

The domain where government use of artificial intelligence has perhaps generated the most alarm is law enforcement, amid fears of ever-expanding surveillance capabilities. The maker of the New York City Police Department’s “Digidog” robots touts its AI capabilities, and the Metropolitan Transportation Authority recently began using AI-powered surveillance to track fare evasion in the city’s subways. Florida recently contracted with a California tech company to transcribe inmates’ phone conversations in the state’s prison system.

Police use of facial recognition has also prompted an outcry in many jurisdictions. In Pittsburgh, police used facial recognition technology during the 2020 Black Lives Matter protests without the knowledge or permission of police command staff, PublicSource reported in 2021. The episode was controversial for a variety of reasons. In May 2020, the Pittsburgh City Council voted to regulate the use of facial recognition and predictive policing technologies by city entities, mandating that council members sign off on any use of such tools. The software itself was criticized on privacy grounds: it builds its database from images collected without subjects’ consent, and it can surface photos of people that they never posted online themselves.

Today, facial recognition technology remains restricted in Pittsburgh. In July 2022, Carnegie Mellon University circulated a draft video surveillance policy that would have allowed the university to use facial recognition technology during investigations, but it spurred a backlash from students and alumni alike, who argued it would invade students’ privacy and normalize surveillance. Ultimately, the university nixed the policy in response to community concerns and said that the Carnegie Mellon University Police Department had never used facial recognition technology in the past.

“Based on feedback received from the community to the first draft of a video security policy that would have allowed for the potential use of facial recognition tools in criminal investigations, we have decided not to move forward with further consideration of this policy document,” the school said in a statement at the time.

One application that remains accessible to law enforcement throughout Pennsylvania is JNET, a database with facial recognition capabilities. In Allegheny County, the Office of the District Attorney announced in August 2022 that it would use NICE Justice, an AI-powered tool that facilitates the management, sharing and use of digital evidence, with photos, videos, PDFs and documents uploaded to a single portal.

According to the district attorney’s office, its caseload of 35,000 cases a year exceeds the capacity of the 127 attorneys it has on staff, and the software can improve efficiency and reduce delays. “We have a sizable number of cases that were postponed due to delays in processing digital evidence,” Rebecca D. Spangler, first assistant district attorney for the Allegheny County District Attorney’s Office, said in a statement. “With NICE Justice, we’ll be able to streamline the entire process of managing digital evidence, from intake to discovery. When we’re able to eliminate postponements by making the system more efficient as a whole, that’s good for everyone.”

NICE General Manager for Public Safety John Rennie said one consequence of the coronavirus pandemic was heightened demand for expediency. “COVID helped a lot of people realize that doing things slowly in an office was also not always the best way of doing things,” Rennie said. “The more technology-oriented DAs and the more progressive DAs in particular very much recognize that their staff needs these tools to be able to self-serve instead of having to send off to other people or do it in different programs.”

Meanwhile, elected officials have been advancing measures to regulate artificial intelligence. New York City’s AI bias law, which requires employers using AI tools for hiring to perform annual audits of those tools, went into effect this summer, and the Adams administration’s new AI action plan envisions the creation of a “framework for AI governance” and the formation of an advisory group.
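
The audits that law requires revolve around comparing how a tool treats different demographic groups. A minimal sketch of the core impact-ratio arithmetic – with made-up counts, since real audits use an employer’s historical screening data – looks like this:

```python
# Impact-ratio sketch for a bias audit of an AI hiring tool.
# All counts are invented for illustration.
selections = {  # group -> (candidates screened, candidates advanced)
    "group_a": (400, 120),
    "group_b": (300, 60),
    "group_c": (250, 70),
}

# Selection rate: the share of each group's candidates the tool advanced.
rates = {g: advanced / screened for g, (screened, advanced) in selections.items()}
best = max(rates.values())

# Impact ratio: each group's rate relative to the most-selected group.
# Ratios well below 1.0 are the red flag auditors look for.
for group, rate in rates.items():
    print(f"{group}: selection rate {rate:.2f}, impact ratio {rate / best:.2f}")
```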

In Pennsylvania, Gov. Josh Shapiro issued an executive order in September to establish standards and a governance framework for generative artificial intelligence use by state agencies and employees. The order established an AI governing board and set out core values for generative AI use, among them accuracy, employee empowerment and equity.

“We can’t ignore new technology – we have to educate ourselves and be proactive to minimize the risks and maximize the benefits of innovation, and that’s the approach my administration is taking here in Pennsylvania,” Shapiro said.

In July, U.S. Sen. Bob Casey of Pennsylvania introduced a bill that would restrict employers’ use of AI in hiring. Pennsylvania state Reps. Robert Merski and Chris Pielli have introduced legislation to make it a misdemeanor to use AI to impersonate a loved one without consent. In May, U.S. Rep. Joe Morelle of New York introduced legislation banning the nonconsensual sharing of intimate deepfake images, something domestic violence advocates have voiced concerns about.

State Sen. Jay Costa of Pennsylvania, who has been a vocal advocate of AI and the technology’s integration into society, has sponsored legislation to spur more study of the technology. He has teamed up with state Sen. Jimmy Dillon and other lawmakers on legislation establishing an advisory committee to conduct a study on AI, including how to distinguish between AI- and human-generated content.

“AI is not just about the technology; it’s going to impact aspects of our daily lives that we can’t even imagine right now,” Dillon said. “We need to really concentrate on and address the ethics and the regulations and the workforce concerns and take a comprehensive approach toward it.”

Yet advocates for responsible AI use said transparency is often easier said than done. In the Pittsburgh area, the Allegheny County Department of Human Services has used the Allegheny Family Screening Tool since 2016 to help social workers predict when children may face harm, ideally as a check on staffers’ individual biases. But the practice has drawn scrutiny from the U.S. Department of Justice following complaints that the technology could unfairly target parents with disabilities or mental health disorders. The Associated Press reported that the tool draws on Supplemental Security Income data as well as records of diagnoses such as schizophrenia or mood disorders. Disability rights advocates argued that this unfairly targets parents with disabilities and mental illness and punishes them for accessing county resources.

The parents at the center of a case that resulted in a Justice Department investigation weren’t allowed to see their risk score after their daughter was removed from their care. An Allegheny County spokesperson said via email that the county doesn’t consider the tool to be AI – but not everyone sees it that way.

Experts said being able to access information about what kind of data goes into training an AI model is crucial. But Julia Stoyanovich, director of the Center for Responsible AI at New York University, said regulating AI can be tricky because entities – including local and state governments – can be cagey about disclosing when they’re using it.

“We do need to think about how we can create an environment where people can have a positive conversation where it’s not all just sticks, but also carrots,” Stoyanovich said, where “government representatives are able to come to the table and actually use the expertise of external folks to try and help them do things better rather than being worried about negative publicity.”

U.S. Senate Majority Leader Chuck Schumer has made headlines for exploring increased oversight of AI and met recently with tech leaders to discuss how to regulate it. However, Stoyanovich lamented that the first meetings were with industry leaders, arguing that input should be taken from a wide range of constituents and stakeholder groups.

“Essentially, we’re going to be relying on industry to say that they’re not going to be evil,” Stoyanovich said. “I think that’s really a mistake … because industry incentives do not align very often with societal incentives. And we do absolutely need legal and regulatory instruments here that are going to protect citizens in terms of both what data about them is being used and how it’s collected, and also the decisions they’re subjected to.”