AI and war: Governments must widen safety dialogue to include military use

Marietje Schaake, International Policy Fellow at Stanford Human-Centered Artificial Intelligence and former European Parliamentarian, co-hosts GZERO AI, our new weekly video series intended to help you keep up with and make sense of the latest news on the AI revolution. In this episode, Marietje insists that governments must prioritize establishing guardrails for the deployment of artificial intelligence in military operations. Efforts to ensure that AI is safe to use are already underway, but, she argues, that discussion urgently needs to widen to include AI's use in warfare, an area where lives are at stake.

There's not a week without the announcement of a new AI office, AI safety institute, or AI advisory body initiated by a government, usually one of the democratic governments of this world. They're all wrestling with how to regulate AI, and, with little variation, they seem to settle on a focus on safety.

Last week, the US Department of Homeland Security joined this line of efforts with its own advisory body, made up largely of industry representatives, with some from academia and civil society, to look at the safety of AI in its own context. What's remarkable amid all this focus on safety is how little emphasis, or even attention, there is on restricting or putting guardrails around the use of AI by militaries.

And that is remarkable, because we can already see the harms of overreliance on AI, even as industry pushes it as its latest opportunity. Just look at the venture capital being poured into defense tech, or "DefTech" as it's popularly called. So I think we should push to widen the lens when we talk about AI safety to include binding rules on military uses of AI. The harms are real; these are life-and-death situations. Just imagine someone being misidentified as a legitimate target for a drone strike, or consider the kinds of uses we see in Ukraine, where facial recognition tools and other data-crunching AI applications are used on the battlefield with few rules around them, because the fog of war also makes it possible for companies to jump into the void.

So it is important that AI safety at least includes a focus on, and a discussion of, what constitutes proper use of AI in the context of war, combat, and conflict, of which we see too much in today's world. And it is important that democratic countries put rules in place to make sure that the rules-based order, international law, human rights, and humanitarian law are upheld even in the context of the latest technologies like AI.
