A new report commissioned by the International Committee of the Red Cross raises concerns about militaries’ use of artificial intelligence systems in warfare.
The report, authored by Arthur Holland Michel, an external researcher contracted by the Red Cross, argues that current AI and computer systems introduce significant risks of “unaccountable errors” due to uncertainties, hidden assumptions, and biases, and that military personnel who make decisions based on AI outputs need to be fully aware that those qualities are inherent in AI systems.
“The discourse on military AI at the moment kind of operates on this belief that computerized systems and AI systems are either right or wrong,” says Michel. For example, he says, if an AI system mischaracterizes an ambulance as a tank, causing a human to pull the trigger on a missile to destroy that vehicle, that human can currently pass the blame on to an AI system. But they shouldn’t be able to do that, reckons Michel.
The idea that AI systems are right or wrong in a binary sense is a “faulty narrative,” he says. It’s also a damaging one: trust in AI systems used in warfare means that AI tools are being rolled out further and more widely on the battlefield, compounding the problem of sorting AI’s good advice from the bad.
“The fact is, anytime that you put a computerized interface between a human and the thing that they’re looking at, there’s this gray area in which things can go wrong and no one can really be held accountable for it,” he says. “To think that these computerized systems that currently exist can be perfect and highly accountable, and that there is no such thing as a blameless error with the arrival of AI systems, is factually wrong at best, and very dangerous at worst.”
The issue is particularly pressing now given reporting by 972 Magazine on the Israeli military’s use of the Lavender and Gospel programs in Gaza. Both programs use AI to select targets in complex, densely populated areas where military personnel and civilians are alleged to intermingle, with what 972 Magazine reports are sometimes disastrous consequences. (Spokespeople for the Israel Defense Forces deny the claims of errors made in 972 Magazine.)
Michel, for his part, hopes the Red Cross report’s core findings foster greater understanding around the complexities of the AI issue. “These are uncomfortable questions about the optimization of any kind of decision in warfare,” he says. “We simply do not know [enough about current systems]. And that’s why the discourse around the use of AI in Gaza is kind of floundering.”