
AI election safeguards aren’t great

A man views a computer screen displaying an AI-crafted speech by former Prime Minister Imran Khan calling for votes ahead of the general elections, in Karachi, Pakistan, in early February 2024.

REUTERS/Akhtar Soomro
The Center for Countering Digital Hate has been testing the most popular AI tools to see whether they can be manipulated into generating election disinformation, despite public promises and usage rules to the contrary.

The British nonprofit tested Midjourney, OpenAI's ChatGPT, Stability.ai's DreamStudio, and Microsoft's Image Creator in February, simply typing in different text prompts related to the US elections. The group was able to bypass the tools' protections a whopping 41% of the time.

Some of the images they created showed Donald Trump being taken away in handcuffs, Trump on a plane with alleged pedophile and human trafficker Jeffrey Epstein, and Joe Biden in a hospital bed.

Generative AI is already playing a tangible role in political campaigns, especially as voters go to the polls for national elections in 64 countries this year. AI has been used to help a former prime minister get his message out from prison in Pakistan, to turn a hardened defense minister into a cuddly character in Indonesia, and to impersonate US President Joe Biden in New Hampshire. Protections that fail nearly half the time just won't cut it. With regulation lagging behind the pace of technology, AI companies have made voluntary commitments to prevent the creation and spread of deceptive election-related AI media.

“All of these tools are vulnerable to people attempting to generate images that could be used to support claims of a stolen election or could be used to discourage people from going to polling places," CCDH’s Callum Hood told the BBC. “If there is will on the part of the AI companies, they can introduce safeguards that work.”
