How the Trump administration plans to use algorithms to target protesters

As protests against the Trump administration and in favor of Palestine continue to grow across the country, the U.S. State Department is reportedly planning to use technology to tamp down on dissent. This month, Axios reported that Marco Rubio’s “Catch and Revoke” plan to strip foreign nationals of the visas that allow them to remain in the country could be powered by AI analysis of their social media accounts.

The mooted use of AI comes as former Columbia University grad student Mahmoud Khalil has become the face of the Trump administration’s tougher line on protest. Khalil is currently detained and threatened with the revocation of his green card for his participation in on-campus protests.

Using AI to analyze the contents of people’s social media posts for actions that the Trump administration deems unacceptable—notably, not actions deemed unacceptable under the law or the rights set out in the country’s constitutional amendments—is a risky move that could generate false positives on a huge scale. And it worries privacy and AI experts in equal measure.

“For so many years, we have heard this very bad argument that we don’t need to worry because we have democracy,” says Carissa Véliz, an AI ethicist at the University of Oxford. “Precisely the point of privacy is to keep democracy. When you don’t have privacy, the abuse of power is too tempting, and it’s just a matter of time before it is abused.”

The risk Véliz and others worry about is that digital privacy is being eroded in favor of a witch hunt driven by a technology whose accuracy people often trust more than it deserves.

That’s a concern too for Joanna Bryson, professor of ethics and technology at Hertie School in Berlin, Germany. “Disappearing political enemies, or indeed just random citizens, has been a means of repression for a long time, especially in the new world,” she says. “I don’t need to point out the irony of Trump choosing a mechanism so similar to the South and Central American dictators in the countries he denigrates.”

Bryson also points to parallels with how Israel used AI to identify tens of thousands of Hamas targets, many of whom were then targeted in bombing attacks in Gaza by the Israeli military. The controversial program, nicknamed Lavender, has been criticized as an unvetted military use of AI that could throw up false positives. “Unless the AI systems are transparent and audited, we have no way of knowing whether there’s any justification for which 35,000 people were targeted,” says Bryson. “Without appropriate regulation and enforcement of AI and digital systems—including military ones, which incidentally even the EU is not presently doing—we can’t tell whether there was any justification for the targets, or if they just chose enough people that any particular building they wanted to get rid of they’d have some justification for blowing it up.”

The use of AI is also something of a smokescreen, designed to deflect responsibility for serious decisions: those who must make them can claim they are guided by supposedly “impartial” algorithms. “This is the kind of thing Musk is trying to do now with DOGE, and already did with Twitter,” says Bryson. “Eliminating humans and reducing accountability. Well, obscuring accountability.”

And when it comes to AI classification of social media content, accountability matters, because it’s a case of when, not if, the technology misfires. Hallucination and bias are major problems in AI systems. Hallucinations occur when an AI system makes up answers to questions—or, if social media content is being parsed through artificial intelligence, invents what could be read as damning posts. Bias inherent in systems—a product of how they are designed and by whom—is another major source of errors. In 2018, Amazon scrapped an AI tool it had built to perform a first pass on job applicants’ résumés after the system was found to be systematically penalizing female candidates, a consequence of how the AI had been set up and trained.
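The scale of the false-positive problem can be made concrete with some back-of-the-envelope arithmetic. In the sketch below, every number is hypothetical: even if a classifier catches 95% of genuinely matching posts and wrongly flags only 1% of innocent ones, screening millions of posts for rare behavior means the overwhelming majority of flags are mistakes.

```python
# Illustrative base-rate arithmetic (all figures hypothetical): a screening
# classifier applied at scale to rare behavior yields mostly false positives.

total_posts = 10_000_000       # posts scanned (assumed)
true_rate = 0.0001             # fraction of posts that actually match (assumed)
sensitivity = 0.95             # chance a real match is flagged (assumed)
false_positive_rate = 0.01     # chance an innocent post is flagged (assumed)

actual_matches = total_posts * true_rate
true_flags = actual_matches * sensitivity
false_flags = (total_posts - actual_matches) * false_positive_rate

precision = true_flags / (true_flags + false_flags)
print(f"posts flagged:     {true_flags + false_flags:,.0f}")
print(f"false positives:   {false_flags:,.0f}")
print(f"flags that are correct: {precision:.1%}")
```

With these assumed numbers, roughly 100,000 innocent posts get flagged and fewer than 1% of all flags are correct—which is why reviewers who place too much faith in the tool's verdicts are a recurring worry.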

It’s bad enough for those errors to affect whether or not someone gets invited to a job interview. But when the stakes are detention and deportation from the United States—and the risk of not being allowed back into the country in the future—the situation is far more serious.

https://www.fastcompany.com/91295390/how-the-trump-administration-plans-to-use-algorithms-to-target-protesters?partner=rss&utm_source=rss&utm_medium=feed&utm_campaign=rss+fastcompany&utm_content=rss

Published Mar 12, 2025, 12:10:03 PM


