How are emerging technologies helping to shape democracy?

How do you know that what you are seeing, hearing, and reading is real?

It’s not an abstract question: Artificial intelligence technology allows anyone with an internet connection and a half-decent laptop to fabricate entirely fictitious video, audio, and text and spread it around the world in the blink of an eye.

The fabricated media may be ephemeral, but the threat to governments, journalists, corporations, and to you personally is here to stay. That’s the challenge Julien Pain, journalist and host at Franceinfo, explored during the GZERO Global Stage discussion he moderated live from the 2023 Paris Peace Forum.

In response to a poll showing that 77% of the GZERO audience felt democracies are weakening, Eléonore Caroit, vice president of the French Parliament’s Foreign Affairs Committee, pointed out that the more alarming part is that many people around the globe are frightened enough to trade away democratic liberties for the purported stability of unfree governments — a trend authoritarian regimes exploit using AI.

“Democracy is getting weaker, but what does that provoke in you?” she asked. “Do you feel protected in an undemocratic regime? Because that is what worries me, not just that democracy is getting weaker but that fewer people seem to care about it.”

Ian Bremmer, president and founder of the Eurasia Group and GZERO Media, said a lot of that fear stems from an inability to know what to trust or even what is real as fabricated media pervades the internet. The very openness that democratic societies hold as the keystone of their civic structures exacerbates the problem.

“Authoritarian states can tell their citizens what to believe. People know what to believe, the space is made very clear, there are penalties for not believing those things,” Bremmer explained. “In democracies, you increasingly don’t know what to believe. What you believe has become tribalized and makes you insecure.”

Rappler CEO Maria Ressa, who is risking a century-long prison sentence to fight state suppression of the free press in the Philippines, called information chaos in democracies the “core” of the threat.

“Technology has taken over as the gatekeeper to the public sphere,” she said. “They have abdicated responsibility when lies spread six times faster than the truth” on social media platforms.

Microsoft vice chair and president Brad Smith offered a poignant example from Canada, in which Russia targeted a pro-Ukraine activist with AI-generated audio of a completely fabricated statement. The attackers spliced it into a real TV broadcast and spread the clip across social media, discrediting years of the activist’s work within minutes.

The good news, Smith said, is that AI can also be used to help fight disinformation campaigns.

“AI is an extraordinarily powerful tool to identify patterns within data,” he said. “For example, after the fire in Lahaina, we detected the Chinese using an influence network of more than a hundred influencers — all saying the same thing at the same time in more than 30 different languages” to spread a conspiracy theory that the US government deliberately started the blaze.

All the panelists agreed on one crucial next step: aligning all the stakeholders — many with competing interests and a paucity of mutual trust — to create basic rules of the road on AI and how to punish its misuse, which will help ordinary people rebuild trust and feel safer.

The livestream was part of the Global Stage series, produced by GZERO in partnership with Microsoft. These discussions convene heads of state, business leaders, and technology experts from around the world for critical debate about the geopolitical and technological trends shaping our world.

More from Global Stage

Podcast: Can governments protect us from dangerous software bugs?

We've probably all felt a twinge of annoyance at prompts to update our devices. But these updates deliver vital patches to our software, protecting us from bad actors. Governments around the world are increasingly interested in monitoring when dangerous bugs are discovered as a way to protect citizens. But would such regulation have the intended effect?

AI at the tipping point: danger to information, promise for creativity

Artificial intelligence is on everyone's mind these days. The potential for AI to mess up democracy is scary, but the truth is that it can also make the world a better place. So, are bots good or bad for us? We asked a few experts to weigh in during the Global Stage livestream conversation "Risks and Rewards of AI," hosted by GZERO in partnership with Microsoft at this year's World Economic Forum meeting in Davos, Switzerland.

Paris 2024 Olympics chief: “We are ready”

Eight months ahead of the 2024 Summer Olympics, Tony Estanguet says Paris plans to offer “a fantastic moment of celebration.”

Podcast: Would the proposed UN Cybercrime Treaty hurt more than it helps?

As the world of cybercrime continues to expand, it is only natural that international legal standards should follow. But while many governments around the globe see a need for a cybercrime treaty to set a standard, a current proposal on the table at the United Nations is raising concerns among private companies and nonprofit organizations alike.

How cyberattacks hurt people in war zones

They may not be bombs or tanks, but hacks and cyberattacks can still make life miserable for people caught in the crosshairs of conflict, said Stéphane Duguin, CEO of the CyberPeace Institute.

Why snooping in your private life is big business

Kaja Ciglic, senior director of digital diplomacy at Microsoft, said "cybersecurity is the defining challenge of our time" amid a spike in misinformation campaigns fueled by the wars in Ukraine and Gaza, growing interest from governments in building cyberweapons, and plain old profit-motivated thieves.
