The AI military-industrial complex is here
That should come as no surprise; after all, the military has been a major funder, driver, and early adopter of cutting-edge technology throughout the last century. Military spending on AI-related federal contracts has been booming since 2022, according to a Brookings Institution analysis, which found yearly spending on AI increased from $355 million in the year leading up to August 2022 to a whopping $4.6 billion a year later.
In response to this demand, AI companies of all sizes are getting in on the action. Last Wednesday, Dec. 4, OpenAI announced a new partnership with the military technology company Anduril Industries, known for its drones and autonomous systems. OpenAI had previously banned military use of its large language models, but with this partnership it has somewhat reversed course, deciding there are, in fact, some applications it feels comfortable with: in this case, defensive systems that protect US soldiers from drone attacks. OpenAI employees have raised ethical concerns internally, the Washington Post reported, but CEO Sam Altman has stood by the decision. “We are proud to help keep safe the people who risk their lives to keep our families and our country safe,” he wrote in a statement.
OpenAI’s decision came mere weeks after two other big announcements: On Nov. 4, Meta decided to reverse course on its own military prohibition, permitting its language models to be used by US military and national security agencies. The company said it would provide its models directly to agencies, to established defense contractors Lockheed Martin and Booz Allen, and to defense tech companies like Anduril and Palantir. Then, on Nov. 7, OpenAI’s rival Anthropic, which makes the chatbot Claude, partnered with Peter Thiel’s firm Palantir and Amazon Web Services to provide AI capabilities to US intelligence services.
Military applications of AI go far beyond developing lethal autonomous weapons systems, or killer robots, as we’ve written before in this newsletter. AI can help with command and control, intelligence analysis, and precision targeting. That said, the uses of generative AI models such as OpenAI’s GPT-4 and Anthropic’s Claude are more sprawling in nature.
“There’s a lot of both interest and pressure on the national security community to pilot and prototype generative AI capabilities,” says Emelia Probasco, a senior fellow at Georgetown University's Center for Security and Emerging Technology and a former Pentagon official. “They’re not quite sure what they’re going to do with it, but they’re pretty sure it’s going to be powerful.”
And some of the best uses of this technology might simply be the boring stuff, Probasco added, such as writing press releases and filling out personnel paperwork. “Even though [the military] does some warfighting, it also does a lot of bureaucracy.”
For contractors of all types, AI presents a business opportunity too. “Defense contracting is a potentially lucrative business for AI startups despite some very valid concerns about AI safety and ethics,” says Gadjo Sevilla, senior technology analyst at eMarketer. He added that gaining the trust of the military could also help AI companies prove their safety. “They are more likely to gain other contracts once they are perceived as defense-grade AI solutions.”
Probasco says that the US military needs the expertise of Silicon Valley to stay on the cutting edge, but she does worry about the two worlds becoming too cozy with one another.
“The worst thing would be if we end up in another techno-utopia like we had in the early days of social media, thinking that Silicon Valley is going to 100% come in and save the day,” she said. “What we need are reasonable, smart, hardworking people who respect different perspectives.”
AI and war: Governments must widen safety dialogue to include military use
Not a week goes by without the announcement of a new AI office, AI safety institute, or AI advisory body initiated by a government, usually one of the world's democratic governments. They are all wrestling with how to regulate AI, and they seem to settle, with little variation, on a focus on safety.
Last week, the US Department of Homeland Security joined this line of efforts with its own advisory body, made up largely of industry representatives, along with some from academia and civil society, to look at the safety of AI in its own context. And what's remarkable amid all this focus on safety is how little emphasis, or even attention, there is on restricting or putting guardrails around the use of AI by militaries.
And that is remarkable, because we can already see the harms of overreliance on AI, even as industry pushes it as its latest opportunity. Just look at the venture capital being poured into defense tech, or “DefTech,” as it's popularly called. So I think we should push to widen the lens when we talk about AI safety to include binding rules on military uses of AI. The harms are real. These are life-and-death situations. Just imagine someone being misidentified as a legitimate target for a drone strike, or consider the kinds of uses we see in Ukraine, where facial recognition tools and other data-crunching AI applications are used on the battlefield without many rules around them, because the fog of war also makes it possible for companies to jump into the void.
So it is important that the safety of AI at least includes a focus and a discussion on the proper use of AI in the context of war, combat, and conflict, of which we see too much in today's world, and that democratic countries put rules in place to make sure the rules-based order, international law, and international human rights and humanitarian law are upheld even with the latest technologies like AI.
Ukraine’s AI battlefield
Saturday marks the two-year anniversary of Russia’s invasion of Ukraine.
Over the course of this bloody war, the Ukrainian defense strategy has grown into a full embrace of cutting-edge artificial intelligence. Ukraine has been described as a “living lab for AI warfare.”
That capability comes largely from the American government but also from American industry. With the help of powerful American tech companies such as Palantir and Clearview AI, Ukraine has deployed AI throughout its military operations. The biggest tech companies have been involved, too; Amazon, Google, Microsoft, and Elon Musk’s Starlink have also provided vital tech to aid Ukraine’s war effort.
Ukraine is using AI to analyze large data sets drawn from satellite imagery, social media, and drone footage, and also to supercharge its geospatial intelligence and electronic warfare efforts. AI-powered facial recognition and other imagery technology have been instrumental in identifying Russian soldiers, collecting evidence of war crimes, and locating land mines.
And increasingly, weapons are also powered by AI. According to a new report from Bloomberg, US and UK leaders are providing AI-powered drones to Ukraine, which would fly in large fleets, coordinating with one another to identify and take out Russian targets. There is no shortage of ethical concerns about the nature of AI-powered warfare, as we have written about in the past, but that hasn’t stymied President Joe Biden’s commitment to beating back Vladimir Putin and defending a strategically crucial ally.
Reports about Russia’s own use of AI in warfare are murkier, though there is some evidence to suggest it may be using the technology to fuel disinformation campaigns and to build weaponry. But Ukraine might have an advantage: Recently, Russia’s fancy new AI-powered drone-killing system was reportedly blown up by, of all things, a Ukrainian drone.
Ukraine’s stand against Russia has been called a David and Goliath story, but it’s also a battle evened by technological prowess. It’s a view into the future of warfare, where the full strength of Silicon Valley and the US military-industrial complex meet.
QUAD supply chain strategy to consider values; new AI-powered weapons
Marietje Schaake, International Policy Director at Stanford's Cyber Policy Center, Eurasia Group senior advisor and former MEP, discusses trends in big tech, privacy protection and cyberspace:
How will the QUAD leaders address the microchip supply chain issue during their meeting this week?
Well, the idea is for the leaders of the US, Japan, India, and Australia to collaborate more intensively on building secure supply chains for semiconductors, in response to China's growing assertiveness. I think it's remarkable to see values being much more clearly articulated by world leaders when they talk about governing advanced technologies. The current draft statement ahead of the QUAD meeting says that collaboration should be based on respect for human rights.
Will AI dominate future battlefields?
Well, we've already seen new uses of AI-powered arms, but also new opportunities for cyberattacks arising from the increased use of AI, which creates a growing and vulnerable attack surface. The New York Times recently investigated how Iran's top nuclear official was assassinated with an AI-assisted, remote-controlled killing device. The gun, equipped with intelligent satellite systems, used AI to verify when and at whom to fire the lethal shots. So there are new weapons, but also new opportunities to exploit vulnerabilities in AI. It is safe to say that warfare is already changing, and that conflict and cyberattacks will change dramatically as a result of both the specific use of AI in arms and its broad uptake across society.