Generative influence: Using AI to sow chaos online

On May 30, OpenAI announced that it had disrupted five covert foreign influence campaigns that were using its software to spread misinformation and sow chaos online. The tech startup, which has become the industry leader in generative AI on the back of its ChatGPT chatbot and GPT-4 large language model, said that over the previous three months it had detected campaigns run by groups in China, Iran, Israel, and Russia.

Most of these campaigns were very simple, using generative AI tools to write social media posts and comments and translate them into multiple languages so the actors could post them on social media sites such as Facebook and X (formerly Twitter) and blogging platforms such as Blogspot and Medium. OpenAI’s tools were also used to translate news articles, clean up grammar, and debug computer code.

Some of these operations were already well known, including the Chinese group Spamouflage, which is widely considered a government operation, and the pro-Russia group Doppelganger, whose members were recently sanctioned by the US government for spreading disinformation. There was also the Iranian group the International Union of Virtual Media, which translated anti-US and anti-Israel articles and posted them on its own website. And OpenAI disrupted the activity of an Israeli company called STOIC, which was generating pro-Israel articles and social media comments about the conflict in Gaza. STOIC also tried to influence India’s elections, posting comments critical of Narendra Modi’s BJP and praising the opposition party, the Indian National Congress.

OpenAI also said it discovered a new Russian group, which it’s calling Bad Grammar for its sloppy writing. The group used OpenAI’s tools to debug code for a Telegram bot, which it used to post political comments in Russian and English about the war in Ukraine and Western involvement in the conflict.

Social media companies constantly comb their platforms for signs of abuse by influence operations, especially since the 2016 election, when allegations abounded that Russia’s Internet Research Agency and other groups had spread fake news and conspiracy theories on Facebook and other platforms to sow confusion and discord in the US.

Now, social media companies are also monitoring how AI is being used to spread disinformation: On May 29, Meta said in its quarterly threat report that it had removed six influence campaigns from its platforms, including an anti-Sikh network based in China that used AI to generate images about floods in Punjab and the assassination of a political leader in Canada.

Still, OpenAI’s report is the first major move by an AI company — not a social media platform — to police how its tools are being used for this purpose. It’s a sign that the leading generative AI company wants to be seen as a responsible actor, or at least mitigate the harm that its platform enables on a global scale.

For governments looking to disrupt elections, influence people’s political views, or cause chaos, it makes sense to turn to generative AI. “If you’re a government that already runs online influence operations and spreads disinformation through websites, social media accounts, and other networks, of course you’re going to use GenAI to … do so faster and cheaper,” said Justin Sherman, a nonresident fellow at the Atlantic Council.

Sherman noted that it’s impossible for any tech developer to fully eliminate the risk their technology poses and it’s a good thing that OpenAI is cracking down on this issue. “But it’s simultaneously amusing for OpenAI and its CEO to hand-wave about how worried they are about the misuse of GenAI for elections when they continue to rapidly, without much pause, design, develop, and deploy the very AI technologies that facilitate the harm,” he said.

And while the influence operations OpenAI discovered were simplistic and, frankly, ineffective, that doesn’t mean future ones will be, especially in a year with so many elections around the world.

“There’s widespread concern among policymakers and the public that AI tools can be misused to wreak havoc come election time,” said Josh A. Goldstein, a research fellow on the CyberAI Project at Georgetown University’s Center for Security and Emerging Technology. “We should not over-index on OpenAI’s most recent report — we don’t know how other bad actors will misuse AI tools, or what operations have yet to be discovered, for example. It’s important that different pieces of the election security community remain vigilant.”
