
AI and war: Governments must widen safety dialogue to include military use

Marietje Schaake, International Policy Fellow at Stanford Human-Centered Artificial Intelligence and former European Parliamentarian, co-hosts GZERO AI, our new weekly video series intended to help you keep up with and make sense of the latest news on the AI revolution. In this episode, Marietje insists that governments must prioritize establishing guardrails for the deployment of artificial intelligence in military operations. Efforts to ensure that AI is safe to use are already underway, but, she argues, there is an urgent need to widen that discussion to include its use in warfare, an area where lives are at stake.
Israel's Lavender: What could go wrong when AI is used in military operations?

In this episode of GZERO AI, Taylor Owen, professor at the Max Bell School of Public Policy at McGill University and director of its Centre for Media, Technology & Democracy, examines the Israeli Defence Forces' use of an AI system called Lavender to target Hamas operatives. While Lavender reportedly suffers from the same hallucination issues familiar from AI systems like ChatGPT, the cost of such errors on the battlefield is incomparably severe.