Digital Governance
Deepfakes and dissent: How AI makes the opposition more dangerous

Did AI make Navalny more dangerous? | Fiona Hill | Global Stage

Former US National Security Council advisor Fiona Hill has plenty of experience dealing with dangerous dictators – but 2024 is throwing even her some curveballs.
After Imran Khan upset the Pakistani establishment in February’s elections by using AI to rally his voters from behind bars, she thinks authoritarians must reconsider their strategies for suppressing dissent.
Speaking at a Global Stage panel on AI and elections hosted by GZERO and Microsoft on the sidelines of the Munich Security Conference, she said that in this new world, someone like Alexei Navalny “would've been able to use AI in some extraordinary creative way to shake up what in the case of the Russian election is something of a foregone conclusion.”
The conversation was part of the Global Stage series, produced by GZERO in partnership with Microsoft. These discussions convene heads of state, business leaders, and technology experts from around the world for critical debate about the geopolitical and technological trends shaping our world.