
Israel’s lethal AI

The Israeli military is using artificial intelligence to determine bombing targets with cursory oversight from humans, according to reports from The Guardian and +972 Magazine last week.

The reports cite anonymous Israeli intelligence officials, who say an AI program called Lavender is trained to identify Hamas and Palestinian Islamic Jihad militants as potential bombing targets. The government has reportedly given Israel Defense Forces officers approval to take out anyone Lavender identifies as a target. The tool has been used to order strikes on “thousands” of Palestinian targets, even though Lavender is known to have a 10% error rate. According to The Guardian, the program identified some 37,000 potential targets.

The IDF did not deny the existence of Lavender in a statement to CNN but said AI was not being used to identify terrorists.

Militaries around the world are building up their AI capabilities, and we’ve already seen the technology play a major role on both sides of the war between Russia and Ukraine. Israel’s bloody campaign in Gaza has led to widespread allegations of indifference toward mass civilian casualties, underlined by the bombing of a World Central Kitchen convoy last week. Artificial intelligence threatens to play a greater role in determining who lives and who dies in war. Will it also make conflicts more inhumane?
