News & Politics

2025 is the year New York political campaigns embrace AI

From uncanny music videos to policies researched by ChatGPT, the AI revolution has come for the political campaign industry, for better or worse.

Former mayoral candidate Whitney Tilson’s campaign made an AI video criticizing Zohran Mamdani for his thin resume. (Photo: Whitney Tilson campaign)

In the thick of the Democratic mayoral primary this spring, then-leading candidate Andrew Cuomo’s release of a housing policy was overshadowed by a footnote in the 29-page document. A citation of a Gothamist article included “ChatGPT” in its URL, revealing that his campaign had relied on the generative artificial intelligence chatbot to, at the least, get background on parts of the plan.

The Cuomo campaign later copped to using ChatGPT as a “research tool” for the policy, but denied that it had been used to write the plan in any way. By that time, the incident, first reported by Hell Gate, had prompted a news cycle’s worth of attacks from his opponents, who argued that his campaign wasn’t just sloppily run, but peddling AI slop. (It also elicited at least one defense of using this kind of technology on campaigns.)

But the Cuomo ChatGPT debacle was hardly the only instance of campaigns this cycle using artificial intelligence tools – not only behind the scenes, but in some newly public-facing ways too. Among them: in the final days of the campaign, a trailing candidate for mayor threw a Hail Mary against eventual Democratic nominee Zohran Mamdani – in the form of an AI-generated music video. In another instance, a City Council candidate appeared to use AI-generated photos to populate background images on his campaign website.

Public-facing uses of AI-generated content – as opposed to somewhat less detectable applications like using chatbots to help write fundraising emails – are continuing into the general election. Many people posting on X lately about the remaining mayoral candidates will likely have noticed replies from an automated anti-Mamdani chatbot called @CityDeskNYC. (That tool is run by an individual who told Courthouse News he’s hoping to sell it to one of Mamdani’s opponents.) Candidates elsewhere in the state are using AI too – like Democratic congressional candidate Blake Gendebien, who made a music video featuring an AI-generated version of his would-be opponent, Republican Rep. Elise Stefanik. (Though President Donald Trump and his administration are some of the more prolific high-profile posters of AI fakes, campaigns on both sides of the aisle are using AI tools to reach voters.)

But as campaigns increasingly embrace the quickly evolving technology, there’s a long way to go in figuring out how to use it well. One major question to answer is the extent to which AI can – or should – replace tasks that would otherwise be done by a human. “There’s a huge difference between using AI for smart things and using AI for dumb things. It all comes down to how much of this could a human have done better,” said political consultant Ryan Adams, who supports uses of AI that augment human decision-making rather than outright replacing it. 

“The best use of AI is actually a very basic use of AI. It’s actually to write press releases and advisories,” said Democratic strategist Trip Yang, adding that a human should still be copy editing that content.

While neither Yang nor Adams expressed major concerns about AI replacing political consulting jobs en masse – campaigns are still relationship-based, both noted – they acknowledged a risk that roles like graphic designers could see their work replaced by AI. Adams said one candidate for office, whom he declined to name, told him he didn’t need to hire him to make a logo because he could use ChatGPT to do so.

If campaigns’ uses of AI are still in their infancy, so is regulation of the technology. The state budget passed last year included a requirement that political communications disclose any “materially deceptive media” – AI deepfakes, for example. But that doesn’t cover all uses of AI.

Some campaigns disclose when AI has been used. Others don’t. That can leave close watchers guessing – and makes our above list of recent uses very likely non-exhaustive.

A series of cartoons that Assembly Member Jenifer Rajkumar posted on X attacking Public Advocate Jumaane Williams in her primary challenge against him this spring led some users on the platform to assume the images were AI generated. Rajkumar and her campaign insisted that they were designed by a cartoonist who volunteered for the campaign. When asked to be connected to the cartoonist to verify that was the case, Rajkumar declined, writing in a text, “He is not public.”

Mayor Eric Adams’ campaign account on X has also posted a cartoon of one of his opponents this November, Democratic nominee Zohran Mamdani. A spokesperson for Mayor Adams’ campaign did not respond when asked whether the cartoon was AI generated. When City & State ran both Rajkumar’s and Mayor Adams’ cartoons through several generative AI detection tools, the results varied. The tools more often suggested that Mayor Adams’ was likely AI-generated, while they concluded that Rajkumar’s were likely not AI-generated. But two academics consulted about the images suggested that there are currently no AI detection tools that can reliably analyze cartoon images – as opposed to photorealistic images, on which most of those tools are trained.

There are other downsides for campaigns to consider in deploying AI-generated content. As the Cuomo campaign’s experience with ChatGPT illustrated, the messenger can overshadow the message. And even if the content succeeds in capturing voters’ attention, it risks alienating voters who are wary of AI for ethical reasons, like the large electricity demands of AI data centers.

If campaigns use the technology to replace human jobs, that could leave a candidate looking like a hypocrite. “You can’t be like, ‘I’m a friend of the working man,’ but when it came time to pay an artist, I said, ‘Fuck that artist. I’m gonna have a robot do a shitty job,’” consultant Adams said.

And then there’s voters’ keen eye for authenticity. “Things that look inauthentic or that look fake turn voters off, in part because voters are already predisposed to be turned off by politicians because they think politicians are all fake,” said Peter Loge, an associate professor and director of the Project on Ethics in Political Communication at George Washington University. That doesn’t mean that some AI-generated images can’t also be compelling, he said. 

Loge has studied uses of AI in campaigns across the country: an AI campaign “volunteer” in a Pennsylvania congressional race, a mayoral candidate in Cheyenne, Wyoming, who filed alongside an AI chatbot and promised to let it co-run the government. “It’s so prominent that campaign trade associations give out awards for use of AI in campaigns,” he said. “If you can get a plaque for it, it’s pretty much gone mainstream.”