This small-town Wyoming election could give us a preview of the future of AI in politics

Welcome to AI Decoded, Fast Company’s weekly newsletter that breaks down the most important news in the world of AI. You can sign up to receive this newsletter every week here.

The Wyoming mayoral candidate who wants AI to run the city may be a pioneer, not a joke 

In Cheyenne, Wyoming, a mayoral candidate named Victor Miller is basing his candidacy on the idea of letting an AI bot run the local government. He says the bot, called Vic (Virtual Integrated Citizen), can digest mountains of data and make unbiased decisions, and may be capable of proposing novel solutions to previously intractable resource-distribution problems. Miller says he would make sure the bot’s actions are “legally and practically executed.” Experts say that Miller’s AI-fueled candidacy is a first of its kind in the U.S.

It sounds crazy, until you consider the context. According to a 2023 Gallup poll, Americans’ confidence in government has hit historic lows and continues to fall. Some of that erosion of trust stems from government waste and administrative failure (see: the Veterans Administration’s woes). Maybe AI could make operations run more smoothly and restore some of that trust.

To be sure, AI is already helping the government. Washington Post AI reporter Josh Tyrangiel pointed out in May that when the government really needed to quickly create and distribute a COVID vaccine, it turned to the AI platform Palantir to speed along the process by organizing and analyzing the data. The intelligence community relies heavily on Palantir to make sense of the myriad streams of intelligence data coming in from all over the world. The U.S. Agency for International Development (USAID) recently said it will use OpenAI’s ChatGPT Enterprise to help new and local aid organizations it partners with. Tyrangiel argues that the government could use AI bots to answer the public’s questions about taxes, healthcare benefits, and student loans.

Much of the public’s loss of faith in government is, per that Gallup poll, caused by a belief that the government’s priorities have been skewed by big money and partisan interests. Nowhere is that unscrupulousness more apparent than in our congressional districts, which have routinely been redrawn by state parties for partisan advantage. AI could be used to create fair district maps that closely reflect the demographic realities of a state, based on data from the Census and other sources. Neutral AI-generated maps could instantly create districts that are competitive, not just gimmes for whatever candidate the dominant party picks in the primary. 

Yet involving AI in political decisions carries all kinds of pitfalls. For example, when states sought to remove partisanship from districting via independent district-drawing boards, the fight simply shifted to the political leanings of the board members and who got to pick them. In the future, similar debates could arise over which AI bot to use and what training data to give it. But Miller is unlikely to be the last local politician to tout AI on the campaign trail. As AI tools improve, candidates at the state and federal levels may make AI a bigger part of the story they tell voters.

Authors say Anthropic used pirated books to train Claude

San Francisco-based Anthropic, a close competitor to OpenAI and Google, is being sued by three authors—Andrea Bartz, Charles Graeber, and Kirk Wallace Johnson—who say the company trained its Claude AI models using their books, and “hundreds of thousands” of others, without permission. The complaint, filed in federal district court in San Francisco, may become a class action suit if other authors who believe their work has been used to train Claude sign on.

“It is no exaggeration to say that Anthropic’s model seeks to profit from strip-mining the human expression and ingenuity behind each one of those works,” the authors’ attorneys write in the complaint. They point out that Anthropic presents itself as a public benefit company—one that expects to bring in $850 million in revenue by the end of this year. “It is not consistent with core human values or the public benefit to download hundreds of thousands of books from a known illegal source,” the attorneys write.

“We are aware of the suit and are assessing the complaint,” an Anthropic spokesperson said in an email to Fast Company. “We cannot comment further on pending litigation.”

The suit, which follows similar ones brought against AI rivals such as OpenAI and Meta, states that Anthropic has admitted to using a third-party dataset called “the Pile,” a large open-source training dataset hosted and made available online by the nonprofit EleutherAI. The Pile’s creators say the dataset contains 196,640 books, the entire contents of “Bibliotik,” one of several sites the complaint calls an “infamous shadow library” hosting “massive collections of pirated books.”

The AI disinformation landscape is evolving rapidly during the election 

AI deepfakery is indeed playing a role in the 2024 election, but not, at least so far, in the way many had feared. AI images and video simply aren’t good enough yet to cross the uncanny valley and pass as a truly deceptive facsimile of legitimate content. Most current AI-generated images look glossy and cartoony, including one of Democratic nominee Kamala Harris speaking in front of a giant hammer-and-sickle banner and a poorly rendered Taylor Swift dressed as Uncle Sam endorsing Republican nominee Donald Trump. (Trump himself reposted both images on his Truth Social account, alongside a fake clip of himself dancing next to billionaire Elon Musk.)

Perhaps the most disturbing one we’ve seen so far is the deepfake of a Kamala Harris ad shared by Elon Musk that replaced the real audio to make Harris call herself “incompetent.” No reasonable person would believe that Harris would call herself incompetent, but what if it had been something more subtle, like Harris mentioning a sudden reversal of a key policy idea, such as raising the corporate income tax rate? Something like that could negatively impact the race, especially in the final weeks of the campaign.

More AI coverage from Fast Company: 

Want exclusive reporting and trend analysis on technology, business innovation, future of work, and design? Sign up for Fast Company Premium.
