As protests against the Trump administration and in favor of Palestine continue to grow across the country, the U.S. State Department is reportedly planning to use technology to tamp down on dissent. This month, Axios reported that Marco Rubio’s “Catch and Revoke” plan to strip foreign nationals of the visas that allow them to remain in the country could be powered by AI analysis of their social media accounts.
The mooted use of AI comes as former Columbia University grad student Mahmoud Khalil has become the face of the Trump administration’s tougher line on protest. Khalil is currently detained and threatened with the revocation of his green card over his participation in on-campus protests.
Using AI to analyze the contents of people’s social media posts for actions that the Trump administration (though notably not the law, nor the rights set out in the country’s constitutional amendments) deems unacceptable is a move that risks generating huge numbers of false positives. And it worries privacy and AI experts in equal measure.
“For so many years, we have heard this very bad argument that we don’t need to worry because we have democracy,” says Carissa Véliz, an AI ethicist at the University of Oxford. “Precisely the point of privacy is to keep democracy. When you don’t have privacy, the abuse of power is too tempting, and it’s just a matter of time for it to be abused.”
The risk Véliz and others worry about is that digital privacy is being eroded in favor of a witch hunt driven by a technology whose accuracy people often trust more than it deserves.
That’s a concern too for Joanna Bryson, professor of ethics and technology at Hertie School in Berlin, Germany. “Disappearing political enemies, or indeed just random citizens, has been a means of repression for a long time, especially in the New World,” she says. “I don’t need to point out the irony of Trump choosing a mechanism so similar to the South and Central American dictators in the countries he denigrates.”
Bryson also points out that there are parallels with how Israel used AI to identify tens of thousands of Hamas targets, many of whom were then subjected to bombing attacks in Gaza by the Israeli military. The controversial program, nicknamed Lavender, has been criticized as an unvetted military use of AI that could throw up false positives. “Unless the AI systems are transparent and audited, we have no way of knowing whether there’s any justification for which 35,000 people were targeted,” says Bryson. “Without appropriate regulation and enforcement of AI and digital systems—including military ones, which incidentally even the EU is not presently doing—we can’t tell whether there was any justification for the targets, or if they just chose enough people that any particular building they wanted to get rid of they’d have some justification for blowing it up.”
The use of AI is also something of a smokescreen: it lets those making serious decisions deflect responsibility by claiming they were guided by supposedly “impartial” algorithms. “This is the kind of thing Musk is trying to do now with DOGE, and already did with Twitter,” says Bryson. “Eliminating humans and reducing accountability. Well, obscuring accountability.”
And when AI is classifying social media content, accountability matters because it’s a case of when, not if, the technology misfires. Hallucination and bias are both well-documented problems in AI systems. Hallucinations occur when AI systems make up answers to questions, or, when social media content is being parsed by artificial intelligence, invent what could be read as damning posts that were never written. Bias inherent in how systems are designed, and by whom, is another major source of errors. In 2018, Amazon was forced to abandon a tool meant to perform a first pass over job applicants’ résumés after the system was found to be systematically penalizing female candidates because of the way the AI had been set up and trained.
It’s bad enough for those errors to affect whether or not someone gets invited to a job interview. But when the consequences are detention, deportation from the United States, and the risk of being barred from returning in the future, the stakes are far higher.
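To make the false-positive risk concrete, consider a minimal, purely hypothetical sketch of the crudest form such screening could take: a keyword watchlist applied to posts. Everything in it (the watchlist, the posts, the function name) is invented for illustration; nothing here reflects any real system the State Department is known to use.

```python
# Hypothetical sketch of naive keyword-based screening, for illustration only.
# The watchlist and posts are invented; no real system or dataset is implied.

FLAGGED_TERMS = {"protest", "occupation", "revoke", "resistance"}

def flag_post(text: str) -> bool:
    """Flag a post if it contains any watchlist term (case-insensitive)."""
    words = {w.strip(".,!?").lower() for w in text.split()}
    return bool(words & FLAGGED_TERMS)

posts = [
    "Join the protest outside city hall tomorrow.",                    # intended target
    "My dentist appointment was an occupation of my whole morning.",   # false positive
    "They may revoke my parking permit over unpaid tickets.",          # false positive
]

for post in posts:
    print(f"{('FLAGGED' if flag_post(post) else 'ok'):8} | {post}")
```

Even this toy filter flags two innocuous posts alongside its intended target. Statistical classifiers and large language models fail in subtler but analogous ways, and hallucinating models can go further still, attributing to a person content they never posted at all.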