AI election safeguards aren’t great
The British nonprofit Center for Countering Digital Hate (CCDH) tested Midjourney, OpenAI's ChatGPT, Stability.ai's DreamStudio, and Microsoft's Image Creator in February, simply typing in different text prompts related to the US elections. The group was able to bypass the tools’ protections a whopping 41% of the time.
Some of the images they created showed Donald Trump being taken away in handcuffs, Trump on a plane with alleged pedophile and human trafficker Jeffrey Epstein, and Joe Biden in a hospital bed.
Generative AI is already playing a tangible role in political campaigns, especially as voters go to the polls for national elections in 64 different countries this year. AI has been used to help a former prime minister get his message out from prison in Pakistan, to turn a hardened defense minister into a cuddly character in Indonesia, and to impersonate US President Biden in New Hampshire. Protections that fail nearly half the time just won’t cut it. With regulation lagging behind the pace of technology, AI companies have made voluntary commitments to prevent the creation and spread of election-related AI media.
“All of these tools are vulnerable to people attempting to generate images that could be used to support claims of a stolen election or could be used to discourage people from going to polling places," CCDH’s Callum Hood told the BBC. “If there is will on the part of the AI companies, they can introduce safeguards that work.”
Tracking anti-Navalny bot armies
In an exclusive investigation into the online reaction to Alexei Navalny's death, GZERO asks whether it is possible to track the birth of a bot army. Was Navalny's tragic death accompanied by a massive online propaganda campaign? We investigated, with the help of a company called Cyabra.
Alexei Navalny knew he was a dead man the moment he returned to Moscow in January 2021. Vladimir Putin had already tried to kill him with the nerve agent Novichok, an attack Navalny survived only after being flown to Germany for treatment. The poison is one of Putin’s signatures, like pushing opponents out of windows or shooting them in the street. Navalny knew Putin would try again.
Still, he came home.
“If your beliefs are worth something,” Navalny wrote on Facebook, “you must be willing to stand up for them. And if necessary, make some sacrifices.”
He made the ultimate sacrifice on Feb. 16, when Russian authorities announced, with Arctic banality, that he had “died” at the IK-3 penal colony more than 1,200 miles north of Moscow. A frozen gulag. “Convict Navalny A.A. felt unwell after a walk, almost immediately losing consciousness,” they announced as if quoting a passage from Koestler’s “Darkness at Noon.” Later, deploying the pitch-black doublespeak of all dictators, they decided to call it, “sudden death syndrome.”
Worth noting: Navalny was filmed the day before, looking well. There is no body for his wife and two kids to see. No autopsy.
As we wrote this morning, Putin is winning on all fronts. Sensing NATO support for the war in Ukraine is wavering – over to you, US Congress – Putin is acting with confident impunity. His army is gaining ground in Ukraine. He scored a propaganda coup when he toyed with dictator-fanboy Tucker Carlson during his two-hour PR session thinly camouflaged as an “interview.” And just days after Navalny was declared dead, the Russian pilot Maksim Kuzminov, who defected to Ukraine with his helicopter last August, was gunned down in Spain.
And then, of course, there is the disinformation war, another Putin battleground. Navalny’s death got me wondering if there would be an orchestrated disinformation campaign around the event and, if so, whether there was any way to track it. Would there be, say, an online release of shock bot troops to combat Western condemnation of Navalny’s death and blunt the blowback?
It turns out there was.
To investigate, GZERO asked the “social threat information company” Cyabra, which specializes in tracking bots, to look for disinformation surrounding the online reactions to the news about Navalny. The Israeli company says its job is to uncover “threats” on social platforms. It has built AI-driven software to track “attacks such as impersonation, data leakage, and online executive perils as they occur.”
Cyabra’s team focused on the tweets President Joe Biden and Prime Minister Justin Trudeau posted condemning Navalny’s death. Their software analyzed the number of bots that targeted these official accounts. And what they found was fascinating.
According to Cyabra, “29% of the Twitter profiles interacting with Biden’s post about Navalny on X were identified as inauthentic.” For Trudeau, the number was 25%.
Courtesy of Cyabra
So, according to Cyabra, more than a quarter of the reaction you saw on X related to Navalny’s death and these two leaders’ reactions came from bots, not humans. In other words, a bullshit campaign of misinformation.
This finding raises a lot of questions. What’s the baseline of corruption to get a good sense of comparison? For example, is 29% bot traffic on Biden’s tweet about Navalny’s death a lot, or is everything on social media flooded with the same amount of crap? How does Cyabra's team actually track bots, and how accurate is their data? Are they missing bots that are well-disguised, or, on the other side, are some humans being labeled as “inauthentic”? In short, what does this really tell us?
In the year of elections, with multiple wars festering and AI galloping ahead of regulation, the battle against disinformation and bots is more consequential than ever. The bot armies of the night are marching. We need to find a torch to see where they are and if there are any tools that can help us separate fact from fiction.
Tracking bot armies is a job that often happens in the shadows, and it comes with a lot of challenges. Can this be done without violating people’s privacy? How hard is this to combat? I spoke with the CEO of Cyabra, Dan Brahmy, to get his view.
Solomon: When Cyabra tracked the reactions to the tweets from President Joe Biden and Prime Minister Trudeau about the “death” of Navalny, you found more than 25% of the accounts were inauthentic. What does this tell us about social media and what people can actually trust is real?
Brahmy: From elections to sporting events to other significant international headline events, social media is often the destination for millions of people to follow the news and share their opinion. Consequently, it is also the venue of choice for malicious actors to manipulate the narrative.
This was also the case when Cyabra looked into President Biden’s and Prime Minister Trudeau’s X posts directly blaming Putin for Navalny’s death. These posts turned out to be the ideal playing ground for narrative-manipulating bots. Inauthentic accounts on a large scale attacked Biden and Trudeau and blamed them for their foreign and domestic policies while attempting to divert attention from Putin and the negative narrative surrounding him.
The high number of fake accounts detected by Cyabra, together with the speed at which those accounts engaged in the conversation to divert and distract following the announcement of Navalny’s death, shows the capabilities of malicious actors and their intentions to conduct sophisticated influence operations.
Solomon: Can you tell where these are from and who is doing it?
Brahmy: Cyabra monitors publicly available information on social media and does not track IP addresses or any private information. The publicly shared location of the account is collected by Cyabra. When analyzing the Navalny conversation, Cyabra saw that the majority of the accounts claimed to be located in the US.
Solomon: There is always the benchmark question: How much “bot” traffic or inauthentic traffic do you expect at any time, for any online event? Put the numbers we see here for Trudeau and Biden in perspective.
Brahmy: The average percentage of fake accounts participating in an everyday conversation online typically varies between 4 and 8%. Cyabra’s discovery of 25-29% fake accounts related to this conversation is alarming, significant, and should give us cause for concern.
Solomon: OK, then there is the accuracy question. How do you actually identify a bot, and how do you know, given the sophistication of AI and new bots, that you are not missing a lot of them? Is it easier to find “obvious bots” — i.e., something that tweets every two minutes, 24 hours a day — than, say, a series of bots that look and act very human?
Brahmy: Using advanced AI and machine learning, Cyabra analyzes a profile’s activity and interactions to determine if it demonstrates non-human behaviors. Cyabra’s proprietary algorithm consists of over 500 behavioral parameters. Some parameters are more intuitive, like the use of multiple languages, while others require in-depth expertise and advanced machine learning. Cyabra’s technology works at scale and in almost real-time.
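Cyabra’s 500 behavioral parameters are proprietary, but the general shape of behavioral scoring is easy to illustrate. Everything in the sketch below (the signal names, thresholds, and equal weighting) is invented for illustration and is not Cyabra’s actual methodology:

```python
# Hypothetical sketch of behavioral bot scoring. The signals, thresholds,
# and weights here are invented; a production system would use hundreds of
# parameters and machine learning rather than a simple checklist.

def bot_score(profile):
    """Return a 0-1 score; higher means more bot-like behavior."""
    signals = [
        profile["posts_per_day"] > 100,      # inhuman posting cadence
        profile["account_age_days"] < 30,    # freshly created account
        profile["languages_used"] >= 3,      # posts in many languages
        profile["followers"] < 10,           # no organic audience
    ]
    return sum(signals) / len(signals)

suspect = {"posts_per_day": 400, "account_age_days": 5,
           "languages_used": 4, "followers": 2}
human = {"posts_per_day": 3, "account_age_days": 2000,
         "languages_used": 1, "followers": 180}

print(bot_score(suspect))  # -> 1.0, every signal fires
print(bot_score(human))    # -> 0.0, none fire
```

Even this toy version shows why the accuracy question matters: a well-disguised bot can stay under every threshold, and an unusually prolific human can trip them.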
Solomon: There is so much disinformation anyway – actual people who lie, mislead, falsify, scam – how much does this matter?
Brahmy: The creation and activities of fake accounts on social media (whether it be a bot, sock puppet, troll, or otherwise) should be treated with the utmost seriousness. Fake accounts are almost exclusively created for nefarious purposes. By identifying inauthentic profiles and then analyzing their behaviors and the false narratives they are spreading, we can understand the intentions of malicious actors and remedy them as a society.
While we all understand that the challenge of disinformation is pervasive and a threat to society, being able to conduct the equivalent of an online CT scan reveals the areas that most urgently need our attention.
Solomon: Why does it matter in a big election year?
Brahmy: More than 4 billion people globally are eligible to vote in 2024, with over 50 countries holding elections. That’s 40% of the world’s population. Particularly during an election year, tracking disinformation is important – from protecting the democratic process, ensuring informed decision-making, preventing foreign interference, and promoting transparency, to protecting national security. By tracking and educating the public on the prevalence of inauthentic accounts, we slowly move closer to creating a digital environment that fosters informed, constructive, and authentic discourse.
You can check out part of the Cyabra report here.
- Understanding Navalny’s legacy inside Russia ›
- Navalny’s widow continues his fight for freedom ›
- “A film is a weapon on time delay” — an interview with “Navalny” director Daniel Roher ›
- Navalny's death is a huge loss for democracy - NATO's Mircea Geona ›
- Alexei Navalny's death: A deep tragedy for Russia ›
- Navalny's death is a message to the West ›
- Navalny’s death: Five things to know ›
Ukraine’s AI battlefield
Saturday marks the two-year anniversary of Russia’s invasion of Ukraine.
Over the course of this bloody war, the Ukrainian defense strategy has grown to a full embrace of cutting-edge artificial intelligence. Ukraine has been described as a “living lab for AI warfare.”
That capability comes largely from the American government but also from American industry. With the help of powerful American tech companies such as Palantir and Clearview AI, Ukraine has deployed AI throughout its military operations. The biggest tech companies have been involved, too; Amazon, Google, Microsoft, and Elon Musk’s Starlink have also provided vital tech to aid Ukraine’s war effort.
Ukraine is using AI to analyze large data sets stemming from satellite imagery, social media, and drone footage, but also supercharging its geospatial intelligence and electronic warfare efforts. AI-powered facial recognition and other imagery technology has been instrumental in identifying Russian soldiers, collecting evidence of war crimes, as well as locating land mines.
And increasingly, weapons are also powered by AI. According to a new report from Bloomberg, US and UK leaders are providing AI-powered drones to Ukraine, which would fly in large fleets, coordinating with one another to identify and take out Russian targets. There is no shortage of ethical concerns about the nature of AI-powered warfare, as we have written about in the past, but that hasn’t stymied President Joe Biden’s commitment to beating back Vladimir Putin and defending a strategically crucial ally.
Reports about Russia’s own use of AI in warfare are murkier, though there’s some evidence to suggest they may be using the technology to fuel disinformation campaigns as well as build weaponry. But Ukraine might have an advantage: Recently, Russia’s fancy new AI-powered drone-killing system was reportedly blown up by, of all things, a Ukrainian drone.
Ukraine’s stand against Russia has been called a David and Goliath story, but it’s also a battle evened by technological prowess. It’s a view into the future of warfare, where the full strength of Silicon Valley and the US military-industrial complex meet.
AI has entered the race to primary Joe Biden
For a brief moment this week, there were two Dean Phillips – the man and the bot. The human is a congressman from Minnesota who’s running for the Democratic nomination for president, hoping to rise above his measly 7% poll numbers to displace sitting President Joe Biden as the party’s nominee.
But there was also an AI chatbot version of the 55-year-old congressman.
A political action committee that has raised millions to finance Phillips’ longshot presidential bid, with donors including billionaire hedge fund manager Bill Ackman, released an AI chatbot called Dean.Bot last week. It lasted only a few days.
The bot, which disclosed it was artificial intelligence, mimicked Phillips, letting voters converse with it like it was the real congressman.
The 2024 presidential election has seen AI-generated videos and advertisements, but nothing in the way of a candidate stand-in — until now. And for good reason: OpenAI, the company with the most popular chatbot, ChatGPT, doesn’t allow developers to adapt its software for political campaigning.
OpenAI took action against Dean.Bot, which is built on ChatGPT’s platform. The company shut down the bot and suspended access for its developer on Friday, saying the bot violated its terms of use. Funnily enough, the PAC behind the bot is run by an early OpenAI employee.
There are no current federal regulations prohibiting the use of AI in political campaigning, though legislation has been introduced intended to curb the politically deceptive use of AI, and the Federal Election Commission has sought public comment on the same issue.
Phillips the man, meanwhile, has had to resort to campaigning in the flesh in New Hampshire ahead of today’s primary since his AI doppelganger is nowhere to be found.
Can watermarks stop AI deception?
Is it a real or AI-generated photo? Is it Drake’s voice or a computerized track? Was the essay written by a student or by ChatGPT? In the age of AI, provenance is paramount – a fancy way of saying we need to know where the media we consume comes from.
While generative AI promises to transform industries – from health care to entertainment to finance, just to name a few – it might also cast doubt on the origins of everything we see online. Experts have spent years warning that AI-generated media could disrupt elections and cause social unrest, so the stakes couldn’t be higher.
To counter this threat, lawmakers have proposed mandatory disclosures for political advertising using AI, and companies like Google and Meta, the parent company of Facebook and Instagram, are already requiring this. But bad actors won’t be deterred by demands for disclosures. So wouldn’t it be helpful if we had a way to instantly debunk and decipher what’s made by AI and what’s not?
Some experts say “watermarks” are the answer. A traditional watermark is a visible imprint – like what you see on a Getty image when you haven’t paid for it – or the inclusion of a corner logo. Today, these are used to deter theft rather than deception.
But most watermark proposals for AI-generated media center on invisible ones. These are patterns of data embedded in the file itself that tell third-party software that an image, video, audio clip, or even lines of text were generated with AI. Invisible watermarks let the audience see art without it being visually altered or ruined — but, if there’s any confusion, the consumer of that media can, in theory, run it through a computer program to see whether it was human-made or not.
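To make the idea concrete, here is a deliberately simplified sketch of an invisible watermark. It is a toy least-significant-bit scheme, not how SynthID or any production system actually works: a short tag is written into the lowest bit of each pixel value, changing each pixel by at most one level, which the eye cannot see but software can read back.

```python
# Toy "invisible" watermark: hide a short tag in the least significant
# bit (LSB) of successive pixel values. Real systems are far more robust;
# this only demonstrates the basic embed/detect idea.

def embed_watermark(pixels, tag):
    """Write each bit of `tag` into the LSB of successive pixel values."""
    bits = [(byte >> i) & 1 for byte in tag.encode() for i in range(8)]
    out = list(pixels)
    for idx, bit in enumerate(bits):
        out[idx] = (out[idx] & ~1) | bit  # overwrite only the lowest bit
    return out

def extract_watermark(pixels, length):
    """Detector: read `length` bytes back out of the pixel LSBs."""
    data = bytearray()
    for b in range(length):
        byte = 0
        for i in range(8):
            byte |= (pixels[b * 8 + i] & 1) << i
        data.append(byte)
    return data.decode(errors="replace")

image = [200, 13, 55, 180] * 20  # stand-in for 80 grayscale pixel values
marked = embed_watermark(image, "AI")
print(extract_watermark(marked, 2))  # -> AI
```

Every pixel in `marked` differs from the original by at most one brightness level, so the marked image looks identical to a viewer, yet the detector recovers the tag exactly.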
Joe Biden’s administration is curious about watermarks. In his October executive order, the US president told the Commerce Department to “develop guidance for content authentication and watermarking to clearly label AI-generated content.” The goal: To protect Americans from “fraud and deception.”
It’s an effort many private companies are already working on — but solving the watermark issue has involved a lot of trial and error.
In August, Google released SynthID, a new method for embedding a watermark in the pixels of an image that’s perceptible to machine detectors but not the human eye. Still, it warns that SynthID isn’t “foolproof to extreme” methods of image manipulation. And last week, Meta announced it’s adding invisible watermarks to its text-to-image generator, promising that it’s “resilient to common image manipulations like cropping, color change (brightness, contrast, etc.), screen shots and more.”
There are more creative, cross-industry solutions too. In October, Adobe developed a special icon that can be added to an image’s metadata that both indicates who made it and how. Adobe told The Verge that it wants the icon to serve as a “nutrition label” for AI-generated images. But just like nutrition labels on food, the reality is no one can punish you for ignoring them.
And there are daunting challenges to actually making watermarks work.
Adam Conner, the tech policy lead at the Center for American Progress, said that watermarks need to transcend file format changes. “Even the best plans for watermarking will need to solve for the issue … where content is distributed as a normal file type, like a JPEG or MP3,” he said. In other words, the watermarks need to carry over from where they’re generated — say, an image downloaded on DALL-E — to wherever they are copied or converted into various file formats.
Meanwhile, researchers have poked holes in the latest and greatest watermarking tech. Researchers at Carnegie Mellon, for example, published a method for destroying watermarks by adding “noise” (basically, useless data) to an image and then reconstructing it. “All invisible watermarks are vulnerable to the proposed attack,” they wrote in July.
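The fragility the researchers describe is easy to demonstrate on a toy scheme. The sketch below uses an invented least-significant-bit watermark, not the systems they actually attacked, and a one-level brightness shift stands in for their noise-and-reconstruct step (the real attack uses a diffusion model to denoise the image):

```python
# A minimal sketch, under invented assumptions, of why fragile watermarks
# fail: hide eight bits in pixel low bits, then apply an imperceptible
# perturbation that destroys them.
import random

def embed_lsb(pixels, bits):
    """Overwrite the lowest bit of the first len(bits) pixels."""
    out = list(pixels)
    for i, b in enumerate(bits):
        out[i] = (out[i] & ~1) | b
    return out

def read_lsb(pixels, n):
    """Detector: read the first n low bits back."""
    return [p & 1 for p in pixels[:n]]

random.seed(0)
image = [random.randrange(256) for _ in range(64)]
mark = [1, 0, 1, 1, 0, 0, 1, 0]        # an 8-bit watermark
marked = embed_lsb(image, mark)
print(read_lsb(marked, 8) == mark)     # -> True: detector sees the mark

# "Noise": brighten every pixel by one level. Invisible to the eye, but
# it flips the parity (and thus the low bit) of every pixel value, so
# the detector comes up empty.
noisy = [(p + 1) % 256 for p in marked]
print(read_lsb(noisy, 8) == mark)      # -> False: the watermark is gone
```

This is exactly the asymmetry the Carnegie Mellon authors exploit: the watermark must survive ordinary edits to be useful, yet the smaller its visual footprint, the less perturbation it takes to erase.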
Others think that watermarking efforts might just be a fool’s errand. “I don’t believe watermarking the output of the generative models will be a practical solution,” University of Maryland computer science professor Soheil Feizi told The Verge. “This problem is theoretically impossible to be solved reliably.”
But there is clear political will to get watermarks working. Apart from Biden’s call, the G-7 nations are reportedly planning to ask private companies to develop watermarking technology so AI media is detectable. China banned AI-generated media without watermarks a year ago. Europe has pushed for AI watermarking, too, but it’s unclear if it’ll make it into the final text of its AI Act, the scope of which lawmakers agreed to last week.
The main limitation to achieving these goals is the elephant in the room: If Feizi is right, then watermarking AI will simply … miss the mark.
Please write in and tell us what you think – are watermarks on AI-generated images a good idea? Should they be legally required? Write to us here.
- This year's Davos is different because of the AI agenda, says Charter's Kevin Delaney - GZERO Media ›
- Grown-up AI conversations are finally happening, says expert Azeem Azhar - GZERO Media ›
- Will Taylor Swift's AI deepfake problems prompt Congress to act? - GZERO Media ›
- When AI makes mistakes, who can be held responsible? - GZERO Media ›
Paris Peace Forum Director General Justin Vaïsse: Finding common ground
How do you find peace in a world so riven by rivalries and competing interests? One step, according to Director General of the Paris Peace Forum Justin Vaïsse, is to challenge simplistic notions of East-West or North-South alignment.
“We're trying to get all the different actors, East, West, North, South, to work on the same issues and to make progress where they have common interests,” he said to GZERO’s Tony Maciulis on the sidelines of the 2023 Paris Peace Forum. “We focus on competition and geopolitical rivalry while we forget the iceberg coming our way when we fight on the deck of the Titanic.”
This year, the conversation centers on cyberspace and how to protect democracies in a world where rapidly advancing AI technology makes them ever more vulnerable to disinformation campaigns. The bad news is that 2024 promises to be the worst year on record for election interference. The good news, on the other hand, is that many countries have their own interest in hammering out a workable system for regulating AI, and Vaïsse expects a common language regulating cyberattacks to emerge in the coming months.
At the 2023 Paris Peace Forum, GZERO also hosted a Global Stage event, Live from the Paris Peace Forum: Embracing technology to protect democracy.
- Should AI content be protected as free speech? ›
- How AI and deepfakes are being used for malicious reasons ›
- Stop AI disinformation with laws & lawyers: Ian Bremmer & Maria Ressa ›
- AI, election integrity, and authoritarianism: Insights from Maria Ressa ›
- How are emerging technologies helping to shape democracy? ›
- Paris 2024 Olympics chief: “We are ready” - GZERO Media ›
Russian Black Sea Fleet commander still alive despite Ukraine's claims
Ian Bremmer shares his insights on global politics this week on World In :60.
Is Russian commander Sokolov still alive?
Black Sea fleet commander. The Ukrainians said he was killed in a missile strike, but after that missile strike, he's attending a meeting with the Kremlin and looks very much alive. We should all remember that there is a lot of disinformation and a lot of misinformation in the fog of war. Remember the Snake Island strike? It turned out those guys didn't die; they were taken prisoner and later released. So the Russians are absolutely at fault for the invasion, but Ukrainian information is meant to promote Ukrainian efforts in the war, and this is one of those instances.
Will the West intervene in Nagorno-Karabakh?
Intervene in the sense that they are trying to put pressure on the Turks and the Azeris not to engage in war crimes, not to support war crimes against the Armenians, the 120,000 Armenians living in this autonomous region that is part of Azerbaijan. Thousands and thousands are streaming out, getting out. They're not forced out, but they certainly don't feel that they're going to be safe in this region for long. The war has been lost pretty decisively by the Armenians. I suspect you are going to see a level of ethnic cleansing, and the ethnic migration of the Armenians from this space is going to be problematic. Armenia itself is a small country; it will be a serious burden for them to resettle these people. And of course, this has been their home for generations. It's very sad to see, like we've seen in the Balkans, and in Iraq after the Iraq war. But it's hard to imagine anybody intervening at this point to stop that from happening. That's where I think we are. Armenia's best friend has been Russia, and that's not very useful for them.
How is China's proactive approach to trilateral cooperation impacting its relations with South Korea and Japan?
Well, it's making them harder, especially because Japan's seafood, a significant export, is currently banned by China because of the irradiated water from Fukushima that is being released into the Pacific. Certainly, I have a hard time seeing a friendly trilateral relationship given that, and I don't think it will be fixed anytime soon. But the South Koreans and the Chinese are working hard to try to make this work, and it doesn't need to be at the head-of-state level; historically, it frequently hasn't been. I suspect the meeting comes off, that it will be formulaic and incrementally positive, but that it won't lead to an immediate breakthrough in relations between those two countries.
- Disinformation the “biggest threat” from Russia – Anne-Marie Slaughter ›
- UN Security Council debates Nagorno-Karabakh ›
- Nagorno-Karabakh war flares again ›
- Armenia, Azerbaijan & the Nagorno-Karabakh crisis that needs attention ›
- Yoon leads South Korea away from China, toward the US ›
- Ian Explains: Why China’s era of high growth is over ›
- China to shake up Russia-Ukraine war ›
- Ukraine war sees escalation of weapons and words ›
- Russia-Ukraine war: How we got here ›
- “Crimea river”: Russia & Ukraine’s water conflict ›
Ian Explains: Is the world better today thanks to human progress?
Human progress doesn’t have a finish line.
Our body clocks stop ticking at some point, but that’s not the same as reaching a destination, or achieving a goal. So how do we—as a community, as a country...as a world—define progress? What does “better” even look like?
In a word: laundry.
In 1920, the average American spent 11.5 hours a week doing laundry (and that average American was almost always a woman). By 2014, the number had dropped to 1.5 hours a week, thanks to what renowned public health scholar Hans Rosling called the "greatest invention of the Industrial Revolution": the washing machine. By freeing people from washing laundry by hand, this new technology allowed parents to devote more time to educating their children, and it allowed women to cultivate a life beyond the washboard.
So, as I always say to myself whenever I’m stuck in traffic or on hold with customer service, there has never been a better time to be alive. And yet...And yet...And yet... War in Europe. Famine in Africa. Global pandemics. Fake news. Conspiracy theories. Democracy dying in the bright light of day. And that’s just your average Tuesday. So how much is technology making our lives better, and how much is a part of the problem?
Watch the GZERO World episode: Is life better than ever for the human race?
Catch GZERO World with Ian Bremmer every week at gzeromedia.com/gzeroworld and on US public television. Check local listings.
- What Ukraine's digital revolution teaches the world ›
- “Health is a human right”: How the world can make up progress lost to COVID ›
- Staving off default: How unsustainable debt is threatening human progress ›
- Is life better than ever for the human race? - GZERO Media ›
- CRISPR, AI, and cloning could transform the human race - GZERO Media ›