Utah County Commissioner Amelia Powers Gardner was surprised when she saw a video of Governor Spencer Cox circulating on X. The video, published by the account @PsyReport in June, shows Cox admitting to fraudulently gathering signatures.
Powers Gardner knew at a glance the video was fake: Cox’s eyes twitch, his blinks look forced, and his skin coloration appears warped. Looking further, she found a whole web of deepfakes beneath it. @PsyReport had posted images of Taylor Swift waving a Trump flag, fabricated conversations between Donald Trump’s would-be assassin Thomas Crooks and the FBI, and more artificially generated clips of Cox. These videos are all still up, and @PsyReport remains a verified X user.
While the deepfake wasn’t of the highest quality, Powers Gardner specifically worried about the voice. “That was the most disturbing part to me,” she says. “Knowing that somebody else could hear it, not necessarily see it on the screen, and believe that it was true.”
In fact, humans aren’t great at spotting audio fakes. A University College London study found that participants could identify artificially generated audio only 73% of the time. In Cox’s case, that’s compounded by the fact that only one in three Americans knows who their governor is, according to a Johns Hopkins report. Recognition drops even further for state senators, mayors, and local officials. As the wave of AI deepfakes crashes down on the 2024 election cycle, local and down-ballot races may prove the most vulnerable.
‘There’s an asymmetry in the landscape’
Deepfakers are also finding more creative ways to sow misinformation. In Governor Cox’s case, the video was edited to look like a live news segment, which, Powers Gardner says, makes it especially frightening.
“People are starting to try to give credibility to AI content [by] reporting the content as news,” Powers Gardner says. “It actually isn’t a real news report.”
Changing how deepfakes are spread may also heighten their influence. In his 2024 primary, Texas House Speaker Dade Phelan faced an ad portraying him hugging Nancy Pelosi. The ad, paid for by the Club for Growth Action PAC, wasn’t spread via online channels, as deepfakes usually are; it was printed in a mailer.
The deepfake appears to superimpose Phelan’s body onto an image of Representative Hakeem Jeffries embracing Pelosi; whether it was done with AI or Photoshop is still unknown. The incident spurred the Texas state government to action: In an April hearing, attorney Andrew Cates recommended updating Senate Bill 751, which makes spreading a political deepfake video a Class A misdemeanor.
“Confidence in our elections is the foundation of a free and democratic society,” Phelan writes in an email to Fast Company. “While emerging technologies like AI have the potential to offer voters benefits like easily accessible information, the nature of this technology also makes it a plausible tool for election interference and falsifications intended to deceive voters.”
Adrian Perkins’s 2022 deepfake wasn’t meant to evoke the same sense of realism. It was a parody: The ad showed Perkins’s face transposed onto the body of a student called into the principal’s office. Paid for by a rival PAC, People Over Politics, the ad even carried a disclaimer that it was made with “deep learning computer technology.” But in a tight reelection campaign for the mayorship of Shreveport, Louisiana, Perkins believes the deepfake materially impacted his race.
“Because it was a parody, at the time I was like, ‘Oh, it’s not too big of a deal,’” Perkins says. “But that’s a very powerful image for people to see. I do think that it 100% had an impact and influence on the race.”
Perkins was alerted to the ad by his friend Stephen Benjamin, the former mayor of Columbia, South Carolina, and a senior advisor to Joe Biden. With the technology so nascent, Perkins didn’t know how to counteract the ad, and he ultimately lost his race. This, he thinks, is an especially big problem for local candidates like him.
“There’s an asymmetry in the landscape right now because of the technical component,” Perkins says. “You run an ad like that in rural areas and you still have populations and demographics that are going to be more susceptible.”
‘They’re most concerned about the ability to mimic voices’
Voice mimicry is an especially hot-button issue in political deepfakes. Some see it as a net positive: New York mayor Eric Adams is using the tool to send out rallying calls in different languages. But it has also drawn the most coverage at the federal level, after robocalls impersonating President Joe Biden urged New Hampshire Democrats not to vote in the state primary.
AI audio can have particularly damaging effects, though. Experts believe audio may be the deepfake format people are most susceptible to, and the one detection tools are least effective at catching. The effect is magnified in local elections, where listeners may never have heard their elected officials’ voices at all.
Zelly Martin, senior research fellow at the University of Texas at Austin’s Propaganda Research Lab, interviewed over 20 political insiders to survey the state of deepfakes. Audio, she says, was their primary concern. “The biggest thing that our interviewees told us over and over again is that they’re most concerned about the ability to mimic voices,” Martin says. “We all know that there are scams that do that, but it’s still hard to turn that off in a way that’s different from visual disinformation.”
Artificially generated audio is already disrupting municipal operations. A 2023 city council meeting in College Place, Washington, was halted by a barrage of calls spewing racial slurs and antisemitic language; a similar disruption occurred in Beaverton, Oregon. Both cities point to AI bot-calling as the culprit.
‘For months, I’ve been raising the alarm’
Nineteen states have now passed laws regulating deepfakes in elections, and the list continues to grow. How these laws crack down on deepfakes varies widely, from outright criminalization to mandatory advisories.
Craig Holman, a government affairs lobbyist for the progressive advocacy group Public Citizen, has tried to instigate change from the top. In 2023, he filed a petition with the Federal Election Commission (FEC) asking the body to mandate a fix for pervasive political deepfakes. Progress at the FEC has stalled, though the Federal Communications Commission (FCC), he points out, has filled some of the void. With federal regulation moving slowly, Holman is turning his attention to the states. “They’ve got different approaches; Minnesota and Texas just banned deepfakes,” he says, adding that he prefers a “disclosure regime,” in which artificially generated content would have to carry a watermark or callout. (Powers Gardner is also a fan of disclosure: She’s working with SureMark Digital and Utah Valley University to run a watermark pilot.)
Meanwhile, a bipartisan group of senators introduced the NO FAKES Act, which would hold deepfake producers liable for damages. While not directly involved with the bill, Virginia Senator Mark Warner, chairman of the Senate Intelligence Committee, has been particularly vocal about the impact of deepfakes on local and down-ballot races.
“For months, I’ve been raising the alarm that our adversaries could use that capability to directly undermine the security and integrity of our elections,” Senator Warner wrote to Fast Company in a statement. “That risk is further magnified in local elections, where there’s even less capacity to track, debunk, and expose disinformation.”
Without a federal bill making its way into law, many are left with a more libertarian approach to deepfakes: It’s up to viewers to figure out the truth for themselves. Some tech companies are popping up to help, verifying the authenticity of digital content. TrueMedia.org, a startup fighting AI-generated political disinformation, gives users a red, yellow, or green light indicating how likely it is that a piece of content is a deepfake.
“The beauty of it is that it’s highly democratic,” says Oren Etzioni, the company’s founder. “If there’s information about a candidate that looks to be false, their staff or their supporters can . . . get an assessment that, yes, this is highly suspicious content.”
The deepfakes of Cox, Phelan, and Perkins don’t represent the existential threat we’ve come to expect from the nascent technology. Still, they mark a major step into our misinformation-filled reality. And as the technology grows and federal legislators dawdle, local candidates are bracing for the wave to crest.