Military AI is here. Some experts are worried

For the first two-and-a-half years of the generative AI revolution, the AI arms race was waged among competing companies seeking to make bank from the promise and potential of the technology. But the AI world is maturing, and with that maturity comes another frontline for AI: the military.

Scale AI, the company set up by Alexandr Wang, has been awarded what CNBC reports is a multimillion-dollar deal to help develop Thunderforge, which the U.S. Department of Defense calls “an initiative designed to integrate artificial intelligence into military operational and theater-level planning, and fusing cutting-edge modeling and simulation tools.” Wang told CNBC that “our AI solutions will transform today’s military operating process and modernize American defense” and that they “will provide our nation’s military leaders with the greatest technological advantage.”

The move is unsurprising, since militaries are always keen to stay at the cutting edge of technology, hoping to eke out an advantage over rival forces. But it is also disappointing, says Margaret Mitchell, researcher and chief ethics scientist at the AI company Hugging Face. “We already know we’re moving forward to push AI systems farther and farther out from our control,” she says. “Many in the industry and in the media are treating more and more powerful systems as if they are inevitable, and therefore making it so.”

Mitchell adds that technology has always relied on military clientele to act as a crucible for, and an accelerant of, innovation. “Military use has long been a staple of technological development,” she says. “Massively destructive outcomes are fully predictable based on history and how the tech marketplace works.” (Scale AI declined to comment. The Defense Department did not respond to Fast Company’s request for comment.)

That level of destruction could be catastrophic, argues David Krueger, an assistant professor at the University of Montreal who studies AI safety and risk. “I think it’s likely to lead to the end of humanity, to human extinction,” he says of military AI in general, calling it “one of the most obvious ways in which AI poses an existential risk to humanity.”

Krueger says that in many domains, human control is being handed off and outsourced to AI systems. “I think this is a risk in every domain, and I think in the military, it’s particularly concerning, and something which will require international collaboration to avoid getting out of hand and risking human extinction.” Scale AI has said that the Thunderforge program will operate with human oversight, and Noah Sylvia, a research analyst at the Royal United Services Institute (RUSI), points out that “as AI functions go, I would say it is not as controversial as a lot of other ones, because this is what you could term an enterprise function.”

Scale AI is far from the only company to ink a deal with the U.S. military to leverage the power of AI to support such activities. A number of other companies have also agreed to terms to provide their AI technology for military purposes. “I think part of the reaction is because they started out in a very civilian-oriented company, and over the past few months, especially, we’ve seen all of these civilian companies suddenly turn towards defense more,” says Sylvia. Indeed, the Defense Innovation Unit press release announcing Scale AI’s deal for Thunderforge notes that the same program will also include Anduril’s Lattice software platform and state-of-the-art LLMs enabled by Microsoft.

“I struggle to see a way out of it,” says Hugging Face’s Mitchell. Even if individual countries or companies were to step back from using AI for military purposes, or to decline to support countries seeking military AI, as Hugging Face has done in the past, others would likely step into the breach. “We need some ability to coordinate to prevent actors from building AI systems,” says the University of Montreal’s Krueger. “I think that should be—in fact—the number-one priority in foreign policy for every country at this point because it’s an incredibly important issue, and it’s going to be difficult to address it.”

Developing cross-country guidelines for how to consider the use of AI in military environments will be vital, says Mitchell. She suggests a multipoint plan that includes keeping AI systems within strict operational boundaries, making it impossible for systems to autonomously deploy weapons, introducing safety mechanisms, and advancing what’s deemed state of the art in input data analysis and output evaluation to gain a deeper understanding of what systems can and cannot do.

She also has two simpler suggestions. “Do not deploy technology whose actions you cannot reasonably foresee,” she says. And secondly: “Do not fully cede human control.”

https://www.fastcompany.com/91295381/military-ai-is-here-some-experts-are-worried?partner=rss&utm_source=rss&utm_medium=feed&utm_campaign=rss+fastcompany&utm_content=rss
