The AI military-industrial complex is here

The Pentagon desperately wants technological superiority over its military rivals. And in 2024, that means it’s in hot pursuit of artificial intelligence.

That should come as no surprise; after all, the military has been a major funder, driver, and early adopter of cutting-edge technology throughout the last century. Military spending on AI-related federal contracts has been booming since 2022, according to a Brookings Institution analysis, which found yearly spending on AI increased from $355 million in the year leading up to August 2022 to a whopping $4.6 billion a year later.

In response to this demand, AI companies of all sizes are getting in on the action. Last Wednesday, on Dec. 4, OpenAI announced a new partnership with the military technology company Anduril Industries, known for its drones and autonomous systems. OpenAI had previously banned military use of its large language models, but with this partnership it has somewhat reversed course, deciding there are, in fact, some applications it feels comfortable with — in this case, defensive systems that protect US soldiers from drone attacks. OpenAI employees have raised ethical concerns about the decision internally, the Washington Post reported, but CEO Sam Altman has stood by it. “We are proud to help keep safe the people who risk their lives to keep our families and our country safe,” he wrote in a statement.

OpenAI’s decision came mere weeks after two other big announcements: On Nov. 4, Meta decided to reverse course on its own military prohibition, permitting its language models to be used by US military and national security agencies. The company said it would provide its models directly to agencies, to established defense contractors Lockheed Martin and Booz Allen, and to defense tech companies like Anduril and Palantir. Then, on Nov. 7, OpenAI’s rival Anthropic, which makes the chatbot Claude, partnered with Peter Thiel’s firm Palantir and Amazon Web Services to provide AI capabilities to US intelligence services.

Military applications of AI go far beyond developing lethal autonomous weapons systems, or killer robots, as we’ve written before in this newsletter. AI can help with command and control, intelligence analysis, and precision targeting. That said, the potential uses of generative AI models such as OpenAI’s GPT-4 and Anthropic’s Claude are even more sprawling.

“There’s a lot of both interest and pressure on the national security community to pilot and prototype generative AI capabilities,” says Emelia Probasco, a senior fellow at Georgetown University's Center for Security and Emerging Technology and a former Pentagon official. “They’re not quite sure what they’re going to do with it, but they’re pretty sure it’s going to be powerful.”

And some of the best uses of this technology might simply be the boring stuff, Probasco added, such as writing press releases and filling out personnel paperwork. “Even though [the military] does some warfighting, it also does a lot of bureaucracy.”

For contractors of all types, AI presents a business opportunity too. “Defense contracting is a potentially lucrative business for AI startups despite some very valid concerns about AI safety and ethics,” says Gadjo Sevilla, senior technology analyst at eMarketer. He added that gaining the trust of the military could also help AI companies prove their safety. “They are more likely to gain other contracts once they are perceived as defense-grade AI solutions.”

Probasco says that the US military needs the expertise of Silicon Valley to stay on the cutting edge, but she does worry about the two worlds becoming too cozy with one another.

“The worst thing would be if we end up in another techno-utopia like we had in the early days of social media, thinking that Silicon Valley is going to 100% come in and save the day,” she said. “What we need are reasonable, smart, hardworking people who respect different perspectives.”
