Why AI disinformation hasn’t moved the needle in the 2024 election

Welcome to AI Decoded, Fast Company’s weekly newsletter that breaks down the most important news in the world of AI. You can sign up to receive this newsletter every week here.

AI disinformation remains a “future” threat to elections 

As we hurtle toward election day, it appears, at least so far, that the threat of AI disinformation hasn’t materialized in ways that could influence large numbers of votes.

NBC reported that during a late-September conference call on foreign election interference, organized by the Office of the Director of National Intelligence, officials said that while AI has made it easier for foreign actors from China, Russia, and Iran to create disinformation, it has not fundamentally changed how those disinformation operations work.

AI can indeed be used to make deepfakes, but foreign provocateurs and propagandists have so far struggled to access the advanced models needed to create them. Many of the generative AI tools they would require are controlled by U.S. companies, and those companies have been successful in detecting malign use. OpenAI, for example, said in August that it had shut down the accounts of Iranian users who had used ChatGPT to generate fake long-form articles and short social media responses, some of which concerned the U.S. election.

Generative AI tools are good enough to create truly deceptive audio and text. But images usually bear the telltale marks of AI generation—they often look cartoony or airbrushed (see the image of Trump as a Pittsburgh Steeler), or they contain errors like extra fingers or misplaced shadows. Generated video has made great strides over the past year, but in most cases it remains easily distinguishable from camera-captured video.

Many in the MAGA crowd circulated AI-generated images of North Carolina hurricane victims as part of a narrative that the Biden administration botched its response. But, as 404 Media’s Jason Koebler points out, people don’t believe those images are authentic so much as they think the images speak to an underlying truth (as with any work of art). “To them, the image captures a vibe that is useful to them politically,” he writes.

One of the most effective uses of AI in this election has been claiming that a legitimate image posted by an opponent is AI-generated. That’s what Donald Trump did when the Harris campaign posted an image of a large crowd gathered to see the candidate in August. This folds neatly into Trump’s overall strategy of diluting or denying reality to such an extent that facts begin to lose currency.

Across the U.S., many campaigns have shied away from using generative AI at scale for content creation. Some are concerned about the technology’s accuracy and its propensity for hallucination; others lack the time and resources to work through the complexity of building an AI content-generation operation. Many states have already passed laws limiting how campaigns can use generated content, and social platforms have begun enacting transparency rules around it.

But as AI tools continue to improve and become more available, managing AI disinformation will likely require the cooperation of a number of stakeholders across AI companies, social media platforms, the security community, and government. AI deepfake detection tools are important, but establishing the “provenance” of all AI content at the time of generation could be even more effective. AI companies should develop tools (like Google’s SynthID) that insert an encrypted code, a timestamp, and other provenance information into any piece of generated content, so that social media companies can more easily detect and label AI-generated content. If the AI companies don’t do this voluntarily, it’s possible lawmakers will require it.
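To make the provenance idea concrete, here is a minimal sketch of a generator signing a manifest (content hash, model name, timestamp) that a platform later verifies before labeling a post. It is an illustration only, not how SynthID actually works: SynthID embeds an imperceptible watermark in the content itself, and a real provenance scheme (such as C2PA) would use public-key signatures rather than the shared secret assumed here. All names below are hypothetical.

```python
# Toy provenance manifest: sign (content hash, model, timestamp) at generation
# time, verify before labeling. Hypothetical sketch, not SynthID's mechanism.
import base64, hashlib, hmac, json, time

GENERATOR_KEY = b"secret-key-held-by-the-ai-company"  # hypothetical signing key

def attach_provenance(content_bytes: bytes, model_name: str) -> dict:
    """Build a signed provenance record for freshly generated content."""
    manifest = {
        "content_sha256": hashlib.sha256(content_bytes).hexdigest(),
        "model": model_name,
        "generated_at": int(time.time()),  # the timestamp mentioned above
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    signature = hmac.new(GENERATOR_KEY, payload, hashlib.sha256).digest()
    manifest["signature"] = base64.b64encode(signature).decode()
    return manifest

def verify_provenance(content_bytes: bytes, manifest: dict) -> bool:
    """What a platform might run before labeling content as AI-generated."""
    claimed = dict(manifest)
    signature = base64.b64decode(claimed.pop("signature"))
    payload = json.dumps(claimed, sort_keys=True).encode()
    ok_sig = hmac.compare_digest(
        signature, hmac.new(GENERATOR_KEY, payload, hashlib.sha256).digest()
    )
    ok_hash = claimed["content_sha256"] == hashlib.sha256(content_bytes).hexdigest()
    return ok_sig and ok_hash

image = b"...generated image bytes..."
record = attach_provenance(image, "example-image-model")
print(verify_provenance(image, record))          # True
print(verify_provenance(image + b"x", record))   # False: content was altered
```

A detached manifest like this breaks as soon as the content is re-encoded or cropped, which is part of why watermarks baked into the pixels or tokens themselves are considered the sturdier approach.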

Here come the AI agents

The tech industry has for a year now been talking about making generative AI more “agentic”—that is, capable of reasoning through multistep tasks with a level of autonomy. Some agents can perceive the real-world environment around them by pulling in data from sensors. Others can perceive and process data from digital environments such as a personal computer or even the internet. This week, we got a glimpse of the first of these digital-environment agents.

On Tuesday, Anthropic released a new Claude model that can operate a computer to complete tasks such as building a personal website or sorting out the logistics of a day trip. Importantly, the Claude 3.5 Sonnet model (still in beta) can perceive what’s happening on the user’s screen (it takes screenshots and sends them to the Claude Vision API) as well as content from the internet, in order to work toward an end goal.
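Under the hood, that description amounts to a simple perceive-and-act loop: capture the screen, send the image plus the goal to the model, and get a suggested next step back. The sketch below is a simplification using the Anthropic Python SDK’s standard image input, not the dedicated computer-use tool schema Anthropic actually ships; take_screenshot is a hypothetical placeholder, and the model name may differ from what a given account can access.

```python
# Minimal sketch of a screenshot-driven agent loop; treat names as illustrative,
# not as Anthropic's computer-use beta itself.
import base64
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

def take_screenshot() -> bytes:
    """Hypothetical helper: capture the current screen as PNG bytes."""
    raise NotImplementedError("wire this to your OS's screenshot facility")

def ask_for_next_step(goal: str, screenshot_png: bytes) -> str:
    """Send the current screen plus the goal; get back a suggested action."""
    response = client.messages.create(
        model="claude-3-5-sonnet-20241022",
        max_tokens=512,
        messages=[{
            "role": "user",
            "content": [
                {"type": "image", "source": {
                    "type": "base64",
                    "media_type": "image/png",
                    "data": base64.b64encode(screenshot_png).decode(),
                }},
                {"type": "text",
                 "text": f"Goal: {goal}\nWhat should I do next on this screen?"},
            ],
        }],
    )
    return response.content[0].text

# One turn of the loop: perceive, ask, then act and repeat with a new screenshot.
# next_step = ask_for_next_step("Plan a day trip to the coast", take_screenshot())
```

In Anthropic’s actual beta, as the company describes it, the model returns structured actions (clicks, keystrokes, scrolls) rather than free text, and the calling code executes them before looping back with a fresh screenshot.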

Also this week, Microsoft said that starting next month, enterprises will be able to create their own agents, or “copilot” assistants, and that it’s launching 10 new ready-made agents within its resource management and customer relationship apps. Salesforce announced “Agentforce,” its own framework for homemade agents, last month.

The upstart AI search tool Perplexity also joined the fray this week when it announced that its premium service (Pro) is transitioning to a “reasoning-powered search agent” for harder queries that require several minutes of browsing and multistep workflows.

We’re at the very front of a new AI hype cycle, and the AI behemoths are seeking to turn the page from chatbots to agents. All of the above agents are brand-new, many still in beta, and have had little or no battle testing in the real world. I expect to be reading about both user “aha” moments and plenty of frustrations on X as these new agents make their way into the world.

Are chatbots the tech industry’s next dangerous toy?

Kevin Roose at the New York Times published a frightening story on Wednesday about an unhappy 14-year-old Florida boy, Sewell Setzer, who became addicted to a chatbot named “Dany” on Character.AI, a role-playing app that lets users create their own AI characters. Setzer gradually withdrew from school, friends, and family as his life tunneled down to his phone and the chatbot. He talked to it more and more as his isolation and unhappiness increased. On February 28, he died by suicide.

From the story: 

On the night of Feb. 28, in the bathroom of his mother’s house, Sewell told Dany that he loved her, and that he would soon come home to her.

“Please come home to me as soon as possible, my love,” Dany replied.

“What if I told you I could come home right now?” Sewell asked.

“… please do, my sweet king,” Dany replied.

He put down his phone, picked up his stepfather’s .45-caliber handgun and pulled the trigger.

We’re already familiar with social media companies such as Meta and TikTok treating user addiction as something to strive for, regardless of the psychic damage it can cause. Roose’s article raises the question of whether the new wave of tech titans, the AI companies, will be more conscientious.

