A new report commissioned by the International Committee of the Red Cross raises concerns about militaries’ use of artificial intelligence systems in warfare.
The report, authored by Arthur Holland Michel, an external researcher contracted by the Red Cross, argues that current AI and computer systems introduce significant risks of "unaccountable errors" due to uncertainties, hidden assumptions, and biases, and that military personnel who make decisions based on AI outputs need to be fully aware that those qualities are inherent in AI systems.
“The discourse on military AI at the moment kind of operates on this belief that computerized systems and AI systems are either right or wrong,” says Michel. For example, he says, if an AI system mischaracterizes an ambulance as a tank, causing a human to pull the trigger on a missile to destroy that vehicle, that human can currently pass the blame on to an AI system. But they shouldn’t be able to do that, reckons Michel.
The idea that AI systems are right or wrong in a binary sense is a "faulty narrative," he says. It's also a damaging one: trust in AI systems used in warfare means that AI tools are being rolled out further and more widely on the battlefield, compounding the problem of sorting AI's good advice from the bad.
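To see why the binary framing misleads, it helps to remember what a classifier actually produces. The minimal sketch below is illustrative only, not drawn from the report or any real military system; the labels and scores are hypothetical. It shows that a model emits a confidence distribution over possible answers, and that the single verdict a human sees is the result of discarding that uncertainty.

```python
# A minimal, hypothetical sketch: a classifier does not say "tank" or
# "ambulance" outright. It assigns a probability to each label, and the
# interface typically collapses that distribution into one answer.

import numpy as np

LABELS = ["tank", "ambulance", "truck"]

def softmax(logits: np.ndarray) -> np.ndarray:
    """Convert raw model scores into a probability distribution."""
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()

# Hypothetical raw scores for one ambiguous image: the model slightly
# favors "tank", but "ambulance" is close behind.
logits = np.array([2.1, 1.8, 0.4])
probs = softmax(logits)

for label, p in zip(LABELS, probs):
    print(f"{label}: {p:.2f}")   # tank: 0.52, ambulance: 0.39, truck: 0.10

# A binary interface reports only the top label, hiding the fact that
# the model gave "ambulance" substantial probability too.
print("reported verdict:", LABELS[int(probs.argmax())])
```

In this toy example, the system is neither simply "right" nor "wrong" about the vehicle; it is 52% confident in one answer and 39% confident in another, and that nuance disappears by the time a human sees a single label on a screen.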
"The fact is, anytime that you put a computerized interface between a human and the thing that they're looking at, there's this gray area in which things can go wrong and no one can really be held accountable for it," he says. "To think that these computerized systems that currently exist can be perfect and highly accountable, and that there is no such thing as a blameless error with the arrival of AI systems, is factually wrong at best, and very dangerous at worst."
The issue is particularly pressing now given reporting by +972 Magazine on the Israeli military's use of the Lavender and Gospel programs in Gaza. Both programs use AI to select targets in complicated, densely populated areas in which military personnel and civilians are alleged to intermingle, with what +972 Magazine reports are sometimes disastrous consequences. (Spokespeople for the Israel Defense Forces deny the claims of errors made in +972 Magazine.)
Michel, for his part, hopes the Red Cross report’s core findings foster greater understanding around the complexities of the AI issue. “These are uncomfortable questions about the optimization of any kind of decision in warfare,” he says. “We simply do not know [enough about current systems]. And that’s why the discourse around the use of AI in Gaza is kind of floundering.”