Israel's Lavender: What could go wrong when AI is used in military operations?

In this episode of GZERO AI, Taylor Owen, professor at the Max Bell School of Public Policy at McGill University and director of its Centre for Media, Technology & Democracy, examines the Israeli Defence Forces' use of an AI system called Lavender to target Hamas operatives. While Lavender reportedly suffers from the same kinds of hallucination issues familiar from AI systems like ChatGPT, the cost of errors on the battlefield is incomparably severe.

So last week, six Israeli intelligence officials spoke to an investigative reporter for a magazine called +972 about what might be the most dangerous weapon in the war in Gaza right now, an AI system called Lavender.

As I discussed in an earlier video, the Israeli Army has been using AI in its military operations for some time now. This isn't the first time the IDF has used AI to identify targets, but historically, those targets had to be vetted by human intelligence officers. According to the sources in this story, however, after the Hamas attack of October 7th, the guardrails were taken off, and the Army gave its officers sweeping approval to bomb targets identified by the AI system.

I should say that the IDF denies this. In a statement to the Guardian, it said that "Lavender is simply a database whose purpose is to cross-reference intelligence sources." If the sources' account is accurate, however, it means we've crossed a dangerous Rubicon in the way these systems are being used in warfare. Let me frame these comments with the recognition that these debates are ultimately about systems that take people's lives. That makes the debate about whether we use them, how we use them, and how we regulate and oversee them both immensely difficult and urgent.

In a sense, these systems and the promises they're based on are not new. Companies like Palantir have long promised clairvoyance from more and more data. At their core, these systems all work in the same way: users feed raw data into them. In this case, the Israeli army reportedly loaded in data on known Hamas operatives, including location data, social media profiles, and cell phone information, and those data are then used to build profiles of other potential militants.
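To make that general pattern concrete, here is a minimal, purely illustrative sketch of a profile-similarity pipeline of this kind. Nothing here describes Lavender's actual internals; the feature names, numbers, and the cosine-similarity scoring are all invented for the example.

```python
# Purely illustrative sketch of a generic profile-similarity pipeline.
# This is NOT a description of Lavender; all features and values are invented.
import numpy as np

# Each known profile is reduced to a numeric feature vector
# (for example: counts of contacts, locations visited, group memberships).
known_profiles = np.array([
    [12, 3, 5],
    [10, 4, 6],
])

# New profiles to be scored against the known ones.
candidate_profiles = {
    "person_a": np.array([11, 3, 5]),
    "person_b": np.array([1, 0, 9]),
}

def similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two feature vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Score each candidate by its closest match to the known profiles.
for name, vec in candidate_profiles.items():
    score = max(similarity(vec, known) for known in known_profiles)
    print(name, round(score, 3))
```

The point of the sketch is that the output is nothing more than a resemblance score against whatever reference data was fed in, which is exactly why the quality of that reference data matters so much.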

But of course, these systems are only as good as the training data they are built on. One source who worked with the team that trained Lavender said that some of the data used "came from employees of the Hamas-run Internal Security Ministry, who aren't considered militants." The source said that "even if you believe these people are legitimate targets, by using their profiles to train the AI system, it means the system is more likely to target civilians." And this does appear to be what's happening. The sources say that Lavender is "90% accurate," but that figure raises profound questions about how accurate we expect and demand these systems to be. Like any other AI system, Lavender is clearly imperfect, but context matters. If ChatGPT hallucinates 10% of the time, maybe we're okay with that. But if an AI system is targeting innocent civilians for assassination 10% of the time, most people would likely consider that an unacceptable level of harm.
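To see why a "90% accurate" figure matters so much in this context, a simple back-of-the-envelope calculation shows how a fixed error rate scales with the number of people a system flags. The 10% error rate is the figure quoted above; the flagged-population sizes below are entirely hypothetical.

```python
# Hypothetical arithmetic: how a fixed error rate scales with volume.
# The 10% error rate reflects the "90% accurate" figure quoted above;
# the flagged-population sizes are invented for illustration.
error_rate = 0.10

for flagged in (100, 1_000, 10_000):
    misidentified = flagged * error_rate
    print(f"{flagged:>6} people flagged -> ~{misidentified:,.0f} misidentified")
```

An error rate that might be tolerable in a chatbot translates, at scale, into hundreds or thousands of people wrongly flagged.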

With the rise of AI systems in the workplace, it seems inevitable that militaries around the world will adopt technologies like Lavender. Countries including the US have already set aside billions for AI-related military spending, which means we need to update our international laws for the AI age as urgently as possible. We need to know how accurate these systems are, what data they're being trained on, and how their algorithms identify targets, and we need to oversee their use. It's not hyperbolic to say that new laws in this space will literally be the difference between life and death.

I'm Taylor Owen, and thanks for watching.
