AI election safeguards aren’t great
The British nonprofit Center for Countering Digital Hate (CCDH) tested Midjourney, OpenAI's ChatGPT, Stability AI's DreamStudio, and Microsoft's Image Creator in February, simply typing in different text prompts related to the US elections. The group was able to bypass the tools’ protections a whopping 41% of the time.
Some of the images the group created showed Donald Trump being taken away in handcuffs, Trump on a plane with alleged pedophile and human trafficker Jeffrey Epstein, and Joe Biden in a hospital bed.
Generative AI is already playing a tangible role in political campaigns, especially as voters go to the polls for national elections in 64 different countries this year. AI has been used to help a former prime minister get his message out from prison in Pakistan, to turn a hardened defense minister into a cuddly character in Indonesia, and to impersonate US President Biden in New Hampshire. Protections that fail nearly half the time just won’t cut it. With regulation lagging behind the pace of technology, AI companies have made voluntary commitments to prevent the creation and spread of election-related AI media.
“All of these tools are vulnerable to people attempting to generate images that could be used to support claims of a stolen election or could be used to discourage people from going to polling places,” CCDH’s Callum Hood told the BBC. “If there is will on the part of the AI companies, they can introduce safeguards that work.”
Stop AI disinformation with laws & lawyers: Ian Bremmer & Maria Ressa
How do you keep guardrails on AI? “In the United States, historically, we don't respond with censorship. We respond with lawyers,” said Ian Bremmer, President and Founder of the Eurasia Group & GZERO Media, speaking in a GZERO Global Stage discussion live from the 2023 Paris Peace Forum.
Setting up basic legal structures around artificial intelligence is the first step toward building an infrastructure of accountability that can keep the technology from doing at least as much harm as good.
The European Union has an early lead in setting up systems, but Rappler CEO Maria Ressa said, “the EU is winning the race of the turtles” as the entire globe lags far behind the pace of technological advancement. Without legal structures and a healthy free press and civic society in place, democracies will struggle to remain resilient to the threats of AI-generated disinformation.
The livestream was part of the Global Stage series, produced by GZERO in partnership with Microsoft. These discussions convene heads of state, business leaders, and technology experts from around the world for critical debate about the geopolitical and technological trends shaping our world.
- How are emerging technologies helping to shape democracy?
- Podcast: Artificial intelligence new rules: Ian Bremmer and Mustafa Suleyman explain the AI power paradox
- The AI power paradox: Rules for AI's power
- AI and data regulation in 2023 play a key role in democracy
- Ian Bremmer: How AI may destroy democracy
- Paris Peace Forum Director General Justin Vaïsse: Finding common ground
- At the Paris Peace Forum, grassroots activists highlight urgent issues