Security experts lay out the most serious election threats ahead of the midterms

Remember when the Mueller investigation documented in detail how Russian agents ran divisive or Trump-supporting ads all across Facebook in an effort to sow discord and ultimately get Trump elected? How about when Russian hackers published thousands of stolen DNC emails through WikiLeaks, many of them damaging to the Hillary Clinton campaign? While it’s hard to know to what extent these actions contributed to Trump’s victory in 2016, it’s also hard to believe they didn’t play a sizable role.

In 2020 and 2021, right-wing operatives used Facebook and other platforms to spread the falsehood that the 2020 presidential election was fraudulent and its winner illegitimate. Now two-thirds of Republicans believe the 2020 election (possibly the most secure and accurate in U.S. history) was in fact “stolen.” Worse yet, right-wing operatives used Facebook to plan and promote the January 6, 2021, attack on the Capitol.

All of which is to say it’s perfectly reasonable to be feeling more than a little anxious about the midterm elections next month. Control of the House, and possibly the Senate, will be at stake, as will hundreds of state and local seats. In this hyper-partisan environment, where every election is seen as “do-or-die” for both sides, chances are high that bad actors, foreign or domestic, will attempt to use high-tech methods to mislead voters or even tamper with ballots. Election security experts say these are the most likely threats.

Hybrid cyber/disinformation attacks

In the month before the 2020 election, hackers with connections to the Iranian government sent emails to tens of thousands of registered Democrats warning them that they must change their party affiliations and vote for Donald Trump, or else face bodily harm. The hackers, who posed as members of the Proud Boys militia group, stole some of the email addresses during a successful attack on at least one state election website, the Department of Justice revealed.

The group also sent emails and Facebook messages to members of Congress, Trump’s campaign staff, White House advisers, and members of the media, claiming that the Democratic Party was planning to exploit “serious security vulnerabilities” in state voter registration websites to “edit mail-in ballots or even register non-existent voters.” The messages included a video showing stolen voter information, as well as people appearing to manipulate ballots.

“They used images of voter information . . . to make it look as if they had engaged in some kind of really successful, really damaging cyber attacks,” says Gowri Ramachandran, senior counsel at the Brennan Center’s Elections & Government Program. She says even though the hackers didn’t come close to altering votes, just the appearance of an attack is enough to “play on people’s fears and on the divisiveness that we have in our country.” In a political environment where trust in elections has been seriously eroded in the last few years, Ramachandran says, voters are vulnerable to that kind of campaign.

In recent years, information warfare and the weaponization of digital platforms have become as big a concern for election security experts as cyberattacks. The “hybrid” nature of the Iranian attack could represent the future of malicious election interference, and U.S. institutions may not be ready for it.

“A lot of government and industry players tend to respond to attacks in silos: the cyber people respond to cyberattacks, and then there’s the influence operations folks who respond to that,” says Ginny Badanes, senior director of Microsoft’s Democracy Forward security group. “But a lot of the adversaries are not approaching it that way; they’re looking at it holistically and then they see all the different tools in their toolkit and they take a strategy and apply it.”

The solution, then, is collaboration. “We all need to be thinking about these things as more closely connected and sharing indicators on both the cyber side and the influence side,” Badanes says.

Repurposed images

But given the way large social networks work, effective influence campaigns needn’t be so complicated, says Wael Abd-Almageed, research director at the USC Information Sciences Institute. Malicious actors can now dredge up old images and misrepresent them to make a political point. Someone might take an image, or tweet, or video clip, Abd-Almageed says, and present it on social media as if it occurred recently, “in a different geolocation and in different political circumstances.”

While such misinformation often is debunked fairly quickly, the content is usually edgy, controversial, and highly partisan. It can therefore go viral, landing millions of impressions in a short period of time. Abd-Almageed believes that even when a voter sees that such an image has been debunked, the initial visceral experience of seeing it often sticks with them.

TikTok

For years Facebook, and to a lesser extent Google’s YouTube, have been the key platforms used by influence operations to reach, and mislead, mainstream U.S. voters. Foreign influence operators played a significant role in the 2016 election, but since then, most outright disinformation has come from domestic sources. And for those actors, Facebook remains the easiest, cheapest (at least for ads), and most far-reaching way to reach U.S. voters.

Now Facebook and YouTube have a new challenger. The fast rise in popularity of TikTok has made it another clearinghouse of mis- and disinformation.

TikTok, which is headquartered in Los Angeles but owned by the Chinese company ByteDance, is now facing the same kinds of troubles Facebook and YouTube faced in past elections: trying to remove mis- and disinformation while preserving freedom of expression.

“Ahead of the midterm elections this fall, TikTok is shaping up to be a primary incubator of baseless and misleading information, in many ways as problematic as Facebook and Twitter, say researchers who track online falsehoods,” Tiffany Hsu wrote in the New York Times in August. “The same qualities that allow TikTok to fuel viral dance fads (the platform’s enormous reach, the short length of its videos, its powerful but poorly understood recommendation algorithm) can also make inaccurate claims difficult to contain.”

TikTok’s recommendation engine regularly serves up political content, including false or half-true information on subjects like COVID-19, Joe Biden, the January 6 attack, and the upcoming elections.

The company has been making changes to purge misinformation from the platform. On September 21, it announced that it is “trialing” a requirement that accounts belonging to governments, politicians, and political parties be verified. It’s also prohibiting accounts from soliciting donations for campaign fundraising. (TikTok already prohibits political content in ads, but it’ll now shut off its advertising and e-commerce features for political accounts.)

AI-generated deepfake images and video

One of the fastest-growing areas in AI research is the development of models that can generate all kinds of images from both natural language and image prompts. Because these models are capable of generating photorealistic images, they could be weaponized to create deceptive and politically damaging visual content, which could be posted on social media at a crucial moment in an election.

“What if three or four hours before polls close on the West Coast somebody creates a deepfake showing Biden is sick and he’s not going to make it, something like that, and it goes viral on social networks,” USC’s Abd-Almageed says. “Before the White House debunks this, the polls may have already closed and maybe Biden loses the election.”
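To make that concrete, here is a minimal sketch of how such an image is generated today with the open-source Stable Diffusion model, via Hugging Face’s diffusers library. The model ID, prompt, and file name are illustrative assumptions, and the snippet assumes a machine with a CUDA-capable GPU.

```python
# A minimal sketch: text-to-image generation with the open-source
# Stable Diffusion model through Hugging Face's diffusers library.
# pip install diffusers transformers torch
import torch
from diffusers import StableDiffusionPipeline

# Download the model weights. The model ID below is one public release;
# any compatible checkpoint works the same way.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,  # half precision to fit consumer GPUs
).to("cuda")

# A single natural-language prompt yields a photorealistic image in seconds.
# The prompt here is deliberately innocuous and purely illustrative.
image = pipe("a golden retriever playing chess in a park, photo").images[0]
image.save("generated.png")
```

The point is not any particular output but the workflow: a few lines of freely available code, run on commodity hardware, with no gatekeeper between the prompt and the finished image.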

The truth is, the companies developing this cutting-edge technology are small and young, and may not yet have had time to think through and prepare for all of the unintended consequences, including the political ones. The obvious safeguard for AI-generated images is a watermark or signature telling anyone seeing an image that it was made by a computer, not captured by a camera.

“They have an ethical responsibility to watermark these images so that if someone tries to remove the watermark it will destroy the image,” Abd-Almageed says. “So, in downstream applications of these images on Facebook or Twitter, people can easily identify them as AI-generated images.”

But creators making images with these models have reasons for removing the watermarks and signatures, so the companies haven’t taken the strict approach to attribution Abd-Almageed suggests.

OpenAI’s DALL-E 2 does apply a signature to every image it generates, but there’s nothing in the tool itself that prevents users from removing it. The company also says in its terms of service that users can’t create images of public figures (or anyone for that matter) without their express consent, which, on paper at least, seems to prohibit someone from making a politically sensitive image of a candidate.

A newer, open-source AI image generator, Stable Diffusion, lets users download the model from GitHub and simply remove the code that applies its watermark.
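To see how thin that protection is, here is a minimal sketch of the kind of watermarking step Stable Diffusion’s reference scripts perform, assuming the open-source invisible-watermark library the project relies on; the file names are illustrative.

```python
# A minimal sketch of the optional watermarking step in Stable Diffusion's
# reference scripts, using the open-source invisible-watermark library.
# pip install invisible-watermark opencv-python
import cv2
from imwatermark import WatermarkEncoder, WatermarkDecoder

WM_TEXT = "StableDiffusionV1"  # the marker the reference code embeds

# Embed: hide the marker in the image's frequency domain (DWT + DCT).
# It is invisible to the eye but machine-readable afterward.
bgr = cv2.imread("generated.png")
encoder = WatermarkEncoder()
encoder.set_watermark("bytes", WM_TEXT.encode("utf-8"))
cv2.imwrite("generated_wm.png", encoder.encode(bgr, "dwtDct"))

# Verify: a platform could decode the marker to flag AI-generated images.
decoder = WatermarkDecoder("bytes", len(WM_TEXT) * 8)  # length in bits
recovered = decoder.decode(cv2.imread("generated_wm.png"), "dwtDct")
print(recovered.decode("utf-8"))  # "StableDiffusionV1"
```

Because the model and this script run entirely on the user’s machine, deleting the encoder lines removes the watermark with no loss of image quality. That is exactly the gap Abd-Almageed points to: this scheme is a convention, not the tamper-destroying watermark he argues the companies are ethically obligated to build.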

“For the most part, the people who are creating this technology don’t care much about the ethical aspects, and they keep saying if we don’t do it, somebody else will do it,” Abd-Almageed says.

Doxxing, threats to election workers

State and local election officials were rattled, and frightened into action, by Russian cyberattacks on election systems in all 50 states before the 2016 election. Working with the Department of Homeland Security as well as private security contractors, local and state election commissions bent over backwards to lock down systems and practices by the 2020 election, possibly the most secure in the nation’s history.

Donald Trump’s “Stop the Steal” disinformation campaign was hugely successful and its purveyors have gone largely unpunished. This may have normalized the Trumpian idea that if one’s candidate loses, then the election must be rigged. The Election Assistance Commission (EAC) seems to think so. “The public is less likely to trust the outcome of an election if their preferred candidate(s) loses,” reads an EAC online safety guide for election workers. “Additionally, individual members of the public may blame the system for political losses . . . Unfortunately, what the public views as a faceless system is in fact run by real people.” That means precinct workers and election officials, the public face of the electoral system, can become targets of both online harassment and real-world violence.

Among the best-known victims is Shaye Moss, an election official from Georgia who testified during the January 6 hearings that her life became hell after Trump campaign lawyer Rudy Giuliani claimed that she and her mother (also a poll worker in Fulton County) had rigged the outcome at the polls. Moss, a Black woman, discovered after Giuliani’s baseless claim that her Facebook Messenger account had been bombarded with personal attacks and threats. “A lot of threats wishing death upon me, telling me I’ll be in jail with my mother and saying things like, ‘Be glad it’s 2020 and not 1920,’ ” she testified.

The top election official in New Mexico, secretary of state Maggie Toulouse Oliver, said during a recent House Homeland Security Committee hearing that she was doxxed during the 2020 election cycle (her home address was revealed online) and had to flee her home under police protection for weeks. She added that since 2020, her office has increasingly been the target of social media trolling.

Consequently, election workers and officials have quit in high numbers. This could raise the chances of snafus or delays during the midterms, which may only fuel more doubt in election systems. It could have longer-term ill effects, too, points out the Brennan Center’s Ramachandran. “If we don’t ensure that election workers and election officials stay, then in the long run, they’re going to have problems recruiting and staffing those positions,” she says. “Over time, then, you’re going to lose a lot of this great experience that we have from a lot of public servants.”

The California legislature recently passed a law that would allow election workers to enroll in a program in which the state redacts their names and other personal information from public records. “Hopefully other states will see that bill as a model for giving people some measure of security and confidence before the fact, instead of having to rely on law enforcement work after the fact,” says Ramachandran.
