Senator Mark Warner says Congress is already losing the plot on AI regulation

Senator Mark Warner is one of the most vocal and persuasive members of Congress where regulating the tech industry is concerned. This year, he's been deeply engaged in helping the government find its way toward common-sense regulation of burgeoning AI technology, notably in the context of election disinformation. There are already signs that AI has been used to influence foreign elections, and concern is growing that new generative AI tools could be used to inject disinformation into next year's U.S. election at an unprecedented scale. Congress, however, has so far failed to pass new election rules to address the threat.

Fast Company spoke to Warner last week, not long before the one-year anniversary of ChatGPT, and 13 months before the sequel to the 2020 presidential contest, the most fraught and disinformation-fueled election in U.S. history—even without much help from AI. This interview has been edited for clarity and brevity.

You said earlier this month that the 2016 elections would look like child’s play compared to what 2024 could be in terms of AI-generated deepfakes and disinformation, and that the scale and speed of such disinformation would be much higher. Where does your concern originate?

[I'm] somebody who I think got a graduate degree in misinformation after having done the Russia [election interference] investigation from 2017 through 2019. A lot of that activity required an actual individual to misrepresent themselves, often Russian trolls trying to pass themselves off as Americans. You had rudimentary bots and fake [content] placements. Frankly, we don't have great rules on misinformation and disinformation. Social media companies have evaded responsibility. They use Section 230 [the landmark 1996 law that exempts social media companies from liability for user content]. We still haven't passed a basic campaign disclosure bill for advertising on social media platforms. Most [social media] companies have gone ahead and put something in place, but Congress's record is a big fat zero.

One of the things AI does is collapse both the time it takes to enact a series of bad actions and the scale at which you can act; one of these tools could literally create millions of deepfakes. I'm surprised there's not been greater utilization of deepfakes already. Just recently in Virginia, where we've got the legislative races, there was some shadowy group sending out information on a targeted basis saying that if you don't vote, you could be charged extra taxes, which is totally false. With these tools, you could do this in minutes and send misinformation to the whole voter base.

Looking at Congress’s record, I’m not sure we’re going to have some grandiose AI regulatory scheme, no matter how many of the tech [companies] raise their hand and say they want regulation. I find that a little rich; they’ll raise their hand until you actually try to put words on paper.

What I've been thinking about is: What are the areas where AI tools right now, with the current versions, not some future generative model, could cause huge disruption? One is undermining public trust in elections; another is undermining it in public markets.

I've noticed that you've begun speaking about the threat of AI disinformation to public markets, often in the same breath as election disinformation. Why is that?

I've been surprised there's not more [stock] manipulation. I've thought about the ways you could manipulate a public stock, from deepfakes to issuing false product claims to issuing fake SEC filings. I've been thinking: Could we build some collaboration between those who want to maintain election integrity and those who want to maintain market integrity? I think there's a lot of interest. What that looks like in a legislative format, we've got some initial ideas. They're not quite ready for prime time. I'm interested in what Google said about watermarking, but unless there's some standard watermarking process across all the large models, or even the individual tools, I'm not sure it's going to be effective. And then there's the notional idea, which makes sense, of [requiring model developers] to indicate the sourcing of all their [training] materials. Does it become the largest disclaimer form anybody's ever seen?

But I don’t underestimate the challenges we’re going to have on figuring out something that is both bipartisan and also can get through a pretty unusual House of Representatives at this point.

A couple of interesting bills have been introduced. Amy Klobuchar has the Protect Elections from Deceptive AI Act. In the House, Yvette Clarke has introduced the DEEPFAKES Accountability Act. Are you thinking about working with them on these existing bills, or creating something new that addresses a different set of issues?

You want to get something that can actually pass, and so I'm spending time with my Republican friends now to see where that sweet spot is where we could get something done. I've got eight different bills to constrain social media, on everything from data portability and interoperability to dark patterns, things that I thought were kind of no-brainers, and none of them have passed. So I bear the scars of these unsuccessful efforts.

Even if we start with something modest, I think it's important just to put a stake in the ground. And that's why I keep coming back to at least that notional starting point for the legislative process. If we could find common ground between the large number of people concerned about the manipulation of public trust in the public markets and ally them with the folks who, as good citizens, are concerned about the manipulation of public trust in elections, I think there's something there.

I think if I can get the big-C Capitalists to work together with the small-d democrats, that might be a more powerful alliance. There's a reason that big tech has been absolutely, 100% successful [at heading off legislation]. These are powerful entities, and I don't say that in a disparaging way. I count many of the leaders of these organizations as friends, but boy oh boy, it's a tough group to get a legislative proposal through.

Because it's so hard to get a piece of legislation through the current Congress, with all its discord and distractions, I wonder if there are ways of addressing the threat of AI to elections by working around Congress. For instance, doesn't the Federal Election Commission have the power to change its own rules to more explicitly prohibit AI-generated disinformation?

If you look at history, the FEC was deadlocked on a partisan basis for a long time. It’s not really had the enforcement ability that some more powerful entities have had.

There was a lot of talk at the beginning of the AI wars about whether you could even create a new AI-centric agency where you'd have the expertise, and it would work with the specific agencies of competency. I know the Europeans are thinking about that, and [I] have been following everything that's been happening in the other nation-states. I just don't see a House that's in this much chaos creating a new AI super-regulatory agency.

I've held about a half dozen of these sessions; Schumer has held sessions. I just see a declining participation rate.

You recently held a session with Yann LeCun, who is out there making the case that open source is the right way forward. Did you leave that meeting more convinced that he is correct, or are we looking at systems that we really need to put walls around?

Yann is very smart, and he makes a good case. But [OpenAI CEO] Sam Altman makes a good case on the other side as well. And then I put on my Intel [committee] chair hat, and I'm not sure that if we took all the information from the NSA, CIA, NGA, and NRO, we could do that on an open-source model. So put me in the "still figuring it out" category on the question of open versus closed.

I hear some similarity between what Mark Zuckerberg says when he comes to D.C. and what Altman says, which is: Yeah, regulate us, regulate us. But you know, if I'm being cynical, I don't know if they really mean it.

I don't know if it's being cynical or just being realistic. My first social media white paper was in 2017; I'm six years into this and batting zero. And then I think, Well, maybe I'm just not a good legislator. But then you look around, and the whole Congress is batting zero.

https://www.fastcompany.com/90970560/senator-mark-warner-says-congress-is-already-losing-the-plot-on-ai-regulation

October 24, 2023

