NEW YORK — A Manhattan Democratic Party boss let loose in a profanity-laced tirade against a sitting elected official.
“I dug her grave and she rolled into it,” Keith Wright, a fixture in New York politics, could be heard saying. Punctuating the remarks with profanities, he described a rival as “lazy, incompetent — if it wasn’t for her, I’d be in Congress.”
The 10-second clip spread quickly among Harlem political players, seemingly a stunning hot mic moment for the influential leader. But there was a problem: It was fake.
The audio was generated by artificial intelligence to sound like Wright, and shared anonymously to cause political chaos. Wright quickly denounced it.
The episode, first reported by POLITICO, marks one of the most egregious uses of deepfake audio yet in the 2024 election year, and it exposes the growing use of AI as a nefarious tool in American politics.
The incident also alarmed law enforcement officials and AI experts, who warned that it foreshadows the spread of misinformation in elections across the country in the coming months and years. And there’s little the nation can do about it.
“The regulatory landscape is wholly insufficient,” Ilya Mouzykantskii, a political consultant who’s using AI-generated audio to phonebank voters, told POLITICO. “This will be the dominant tech story of this election year.”
The faked Wright audio marked the first instance POLITICO could identify of AI-generated content being used against a political opponent in New York, coming on the heels of a manipulated Joe Biden robocall ahead of the New Hampshire primary.
Persuasion and trickery are nothing new in politics. From the famed Watergate scandal of the Nixon White House to the “Swift Boat” attacks on John Kerry’s 2004 presidential bid to Russian interference in the 2016 election of Donald Trump, incendiary tactics have long been part of American elections. And they are not reserved for White House occupants and hopefuls; hyper-local races, too, have been marked by misinformation.
Now those forms of manipulation — from the customary to the epic — are being eclipsed by the availability of AI technology that’s credible enough to easily mislead or misinform the public. And it’s happening at a time when disinformation is prevalent and trust in traditional media is dwindling.
While big states including California and Texas have passed bills addressing malicious uses of deepfakes in politics, New York and most other states are just starting to confront the issue.
“There’s a scalability to it that is terrifying,” said Mike Nellis, a political consultant whose firm, Authentic Campaigns, uses generative AI to write candidates’ fundraising emails. Faked audio in New York politics may not have made headlines yet, Nellis said, but “I’m certain that in smaller circles, things like this have been happening.”
A robocall impersonating Biden in January told people not to vote in the New Hampshire primary, and Democratic challenger Dean Phillips’ campaign was blocked by the AI company OpenAI for using its technology to create an audio chatbot with the candidate’s voice.
AI wasn’t being used to hurt an opponent in that case, but to aid a politician. Mayor Eric Adams did something similar in 2023, creating AI-generated audio of his own voice to deliver public service announcements in languages he doesn’t speak, such as Spanish and Yiddish.
Regulation is limited across the country.
A House bill introduced by Rep. Yvette Clarke (D-N.Y.) has no momentum. Three states enacted laws on political deepfakes in 2023, NBC News reported, and more than a dozen states have relevant bills introduced.
In New York, the Political Artificial Intelligence Disclaimer, or PAID Act, would require campaigns to say when they use AI in communications like radio ads or mailers.
The Wright audio was “yet another example of why we need to regulate deepfakes in campaigns,” Democratic state Assemblymember Alex Bores, the lead sponsor of the bill, posted on X. “It's (past) time to take this threat seriously.”
The issue is popular among voters and has bipartisan support; Republican state Sen. Jake Ashby carries nearly identical legislation in the other chamber. But the bills cover only a small portion of the potential uses of AI.
The Wright voice clone was created anonymously and wasn’t tied to a specific campaign, so the PAID Act wouldn’t apply.
“This is a first step,” Bores said in an interview. “I don’t think this is the last thing we need to do about this, but we need to start with disclosure, and the already most-regulated entities, which are campaigns.”
At least a dozen more bills introduced in the New York state Legislature deal with regulating the use of AI, but most address commercial uses of the technology rather than politics. One would block films from getting a tax credit if the production used AI to displace human jobs.
Gov. Kathy Hochul has said AI is a priority of hers this year, but she is focused on cultivating its economic benefits.
New York does have at least one law dealing with deepfakes on the books, though. Legislation criminalizing the sharing of sexually explicit images without consent was updated in 2023 to make sure AI-generated images were covered too.
And in the New York City Council, a nonbinding resolution has been introduced urging the Federal Election Commission to take action against deceptive deepfakes in political communications ahead of the 2024 election.
The FEC has been reviewing the issue and has promised to issue rules “by early summer,” The Washington Post reported.
That timeline would fall in the middle of the 2024 presidential election year. And in New York City, the police department is already thinking a lot about AI and its public safety implications, NYPD Deputy Commissioner of Intelligence and Counterterrorism Rebecca Weiner said.
“The specter of the election is galvanizing all sorts of threats. And the technology overlay just complicates everything,” Weiner said in an interview.
And of course, the NYPD’s actions are limited by the right to free speech.
“It’s not inherently illegal to create disinformation,” she said. Whether the NYPD could arrest anyone over a deepfake “would really depend on what the content is and how it’s being used.” That could range from AI-generated content in propaganda for terrorist organizations to conduct that merely violates tech companies’ terms of service for AI.
As AI-generated audio becomes more commonplace, it will call all clips into question, even real ones. “This whole issue of plausible deniability is actually one of the biggest problems with this technology,” Nitin Verma, a postdoctoral fellow researching AI with the New York Academy of Sciences, told POLITICO. “Anybody who wants to shed any charges, they have a target to point to: this is not me, this is AI.”
That could be the case with recently reported audio of former Trump campaign adviser Roger Stone allegedly saying he’d like to see Reps. Eric Swalwell (D-Calif.) or Jerry Nadler (D-N.Y.) dead. Stone, a notorious political trickster, has said the clip was faked and AI-generated.
Some political players have been warning about deepfakes for years. But as the technology becomes mainstream, its quality is rapidly improving.
“They’re 90 percent of the way there to ultra-realistic. … If you asked me a year ago, I would say we’re 50 percent of the way there,” said Mouzykantskii, the political consultant.
Experts and online tools can usually tell when audio is generated, but there’s no way to be entirely sure, Mouzykantskii said, “unless you sat there and watched his voice exit his mouth. That’s the way to verify.”