AI election safeguards aren’t great
The British nonprofit Center for Countering Digital Hate, or CCDH, tested Midjourney, OpenAI's ChatGPT, Stability.ai's DreamStudio, and Microsoft's Image Creator in February, simply typing in different text prompts related to the US elections. The group was able to bypass the tools’ protections a whopping 41% of the time.
Some of the images they created showed Donald Trump being taken away in handcuffs, Trump on a plane with alleged pedophile and human trafficker Jeffrey Epstein, and Joe Biden in a hospital bed.
Generative AI is already playing a tangible role in political campaigns, especially as voters go to the polls for national elections in 64 different countries this year. AI has been used to help a former prime minister get his message out from prison in Pakistan, to turn a hardened defense minister into a cuddly character in Indonesia, and to impersonate US President Biden in New Hampshire. Protections that fail nearly half the time just won’t cut it. With regulation lagging behind the pace of technology, AI companies have made voluntary commitments to prevent the creation and spread of election-related AI media.
“All of these tools are vulnerable to people attempting to generate images that could be used to support claims of a stolen election or could be used to discourage people from going to polling places,” CCDH’s Callum Hood told the BBC. “If there is will on the part of the AI companies, they can introduce safeguards that work.”
Tracking anti-Navalny bot armies
In an exclusive investigation into the disinformation surrounding online reaction to Alexei Navalny's death, GZERO asks whether it is possible to track the birth of a bot army. Was Navalny's tragic death accompanied by a massive online propaganda campaign? We investigated, with the help of a company called Cyabra.
Alexei Navalny knew he was a dead man the moment he returned to Moscow in January 2021. Vladimir Putin had already tried to kill him with the nerve agent Novichok, after which Navalny was sent to Germany for treatment. The poison is one of Putin’s signatures, like pushing opponents out of windows or shooting them in the street. Navalny knew Putin would try again.
Still, he came home.
“If your beliefs are worth something,” Navalny wrote on Facebook, “you must be willing to stand up for them. And if necessary, make some sacrifices.”
He made the ultimate sacrifice on Feb. 16, when Russian authorities announced, with Arctic banality, that he had “died” at the IK-3 penal colony more than 1,200 miles north of Moscow. A frozen gulag. “Convict Navalny A.A. felt unwell after a walk, almost immediately losing consciousness,” they announced as if quoting a passage from Koestler’s “Darkness at Noon.” Later, deploying the pitch-black doublespeak of all dictators, they decided to call it, “sudden death syndrome.”
Worth noting: Navalny was filmed the day before, looking well. There is no body for his wife and two kids to see. No autopsy.
As we wrote this morning, Putin is winning on all fronts. Sensing NATO support for the war in Ukraine is wavering – over to you, US Congress – Putin is acting with confident impunity. His army is gaining ground in Ukraine. He scored a propaganda coup when he toyed with dictator-fanboy Tucker Carlson during his two-hour PR session thinly camouflaged as an “interview.” And just days after Navalny was declared dead, the Russian pilot Maksim Kuzminov, who defected to Ukraine with his helicopter last August, was gunned down in Spain.
And then, of course, there is the disinformation war, another Putin battleground. Navalny’s death got me wondering if there would be an orchestrated disinformation campaign around the event and, if so, whether there was any way to track it. Would there be, say, an online release of shock bot troops to combat Western condemnation of Navalny’s death and blunt the blowback?
It turns out there was.
To investigate, GZERO asked the “social threat information company” Cyabra, which specializes in tracking bots, to look for disinformation surrounding the online reactions to the news about Navalny. The Israeli company says its job is to uncover “threats” on social platforms. It has built AI-driven software to track “attacks such as impersonation, data leakage, and online executive perils as they occur.”
Cyabra’s team focused on the tweets President Joe Biden and Prime Minister Justin Trudeau posted condemning Navalny’s death. Their software analyzed the number of bots that targeted these official accounts. And what they found was fascinating.
According to Cyabra, “29% of the Twitter profiles interacting with Biden’s post about Navalny on X were identified as inauthentic.” For Trudeau, the number was 25%.
Courtesy of Cyabra
So, according to Cyabra, more than a quarter of the reaction you saw on X to these two leaders’ posts about Navalny’s death came from bots, not humans. In other words, a bullshit campaign of misinformation.
This finding raises a lot of questions. What’s the baseline of bot activity that gives us a good sense of comparison? For example, is 29% bot traffic on Biden’s tweet about Navalny’s death a lot, or is everything on social media flooded with the same amount of crap? How does Cyabra's team actually track bots, and how accurate is their data? Are they missing bots that are well-disguised, or, on the other side, are some humans being labeled as “inauthentic”? In short, what does this really tell us?
In the year of elections, with multiple wars festering and AI galloping ahead of regulation, the battle against disinformation and bots is more consequential than ever. The bot armies of the night are marching. We need to find a torch to see where they are and if there are any tools that can help us separate fact from fiction.
Tracking bot armies is a job that often happens in the shadows, and it comes with a lot of challenges. Can this be done without violating people’s privacy? How hard is this to combat? I spoke with the CEO of Cyabra, Dan Brahmy, to get his view.
Solomon: When Cyabra tracked the reactions to the tweets from President Joe Biden and Prime Minister Trudeau about the “death” of Navalny, you found more than 25% of the accounts were inauthentic. What does this tell us about social media and what people can actually trust is real?
Brahmy: From elections to sporting events to other significant international headline events, social media is often the destination for millions of people to follow the news and share their opinion. Consequently, it is also the venue of choice for malicious actors to manipulate the narrative.
This was also the case when Cyabra looked into President Biden and Prime Minister Trudeau’s X posts directly blaming Putin for Navalny’s death. These posts turned out to be the ideal playing field for narrative-manipulating bots. Inauthentic accounts on a large scale attacked Biden and Trudeau and blamed them for their foreign and domestic policies while attempting to divert attention from Putin and the negative narrative surrounding him.
The high number of fake accounts detected by Cyabra, together with the speed at which those accounts engaged in the conversation to divert and distract following the announcement of Navalny’s death, shows the capabilities of malicious actors and their intentions to conduct sophisticated influence operations.
Solomon: Can you tell where these are from and who is doing it?
Brahmy: Cyabra monitors publicly available information on social media and does not track IP addresses or any private information. Cyabra collects only the location an account publicly claims. When analyzing the Navalny conversation, Cyabra saw that the majority of the accounts claimed to be located in the US.
Solomon: There is always the benchmark question: How much “bot” traffic or inauthentic traffic do you expect at any time, for any online event? Put the numbers we see here for Trudeau and Biden in perspective.
Brahmy: The average percentage of fake accounts participating in an everyday conversation online typically varies between 4 and 8%. Cyabra’s discovery of 25-29% fake accounts related to this conversation is alarming, significant, and should give us cause for concern.
Solomon: OK, then there is the accuracy question. How do you actually identify a bot, and how do you know, given the sophistication of AI and new bots, that you are not missing a lot of them? Is it easier to find “obvious bots” – i.e., something that tweets every two minutes, 24 hours a day – than, say, a series of bots that look and act very human?
Brahmy: Using advanced AI and machine learning, Cyabra analyzes a profile’s activity and interactions to determine if it demonstrates non-human behaviors. Cyabra’s proprietary algorithm consists of over 500 behavioral parameters. Some parameters are more intuitive, like the use of multiple languages, while others require in-depth expertise and advanced machine learning. Cyabra’s technology works at scale and in almost real-time.
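Brahmy doesn’t disclose those parameters, but the underlying idea – score each account against a battery of behavioral signals and flag the outliers – can be sketched in a few lines of Python. The feature names and thresholds below are our own illustrative assumptions, not Cyabra’s actual parameters, and a production system would weight hundreds of such signals with machine learning rather than tally a handful of hard rules.

```python
# Minimal rule-based bot scoring sketch. Features and thresholds are
# illustrative assumptions only, not any vendor's real detection logic.
from dataclasses import dataclass

@dataclass
class Profile:
    account_age_days: int
    posts_per_day: float         # average posting cadence
    active_hours_per_day: float  # hours of the day with any activity
    languages_used: int          # distinct languages posted in
    followers: int
    following: int

def bot_score(p: Profile) -> float:
    """Return a 0-1 score; more triggered signals suggest non-human behavior."""
    signals = [
        p.posts_per_day > 50,             # humans rarely sustain this rate
        p.active_hours_per_day > 20,      # no sleep cycle
        p.languages_used >= 4,            # unusual multilingual output
        p.account_age_days < 30,          # freshly created account
        p.following > 10 * max(p.followers, 1),  # mass-follow pattern
    ]
    return sum(signals) / len(signals)

suspect = Profile(account_age_days=12, posts_per_day=300.0,
                  active_hours_per_day=24.0, languages_used=5,
                  followers=3, following=1200)
print(f"bot score: {bot_score(suspect):.2f}")  # 1.00 for this profile
```

Even this toy version captures the intuition: no single signal proves an account is fake, but a month-old profile that posts 300 times a day, around the clock, in five languages, starts to look distinctly non-human.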
Solomon: There is so much disinformation anyway – actual people who lie, mislead, falsify, scam – how much does this matter?
Brahmy: The creation and activities of fake accounts on social media (whether it be a bot, sock puppet, troll, or otherwise) should be treated with the utmost seriousness. Fake accounts are almost exclusively created for nefarious purposes. By identifying inauthentic profiles and then analyzing their behaviors and the false narratives they are spreading, we can understand the intentions of malicious actors and remedy them as a society.
While we all understand that the challenge of disinformation is pervasive and a threat to society, being able to conduct the equivalent of an online CT scan reveals the areas that most urgently need our attention.
Solomon: Why does it matter in a big election year?
Brahmy: More than 4 billion people globally are eligible to vote in 2024, with over 50 countries holding elections. That’s 40% of the world’s population. Particularly during an election year, tracking disinformation is important – from protecting the democratic process, ensuring informed decision-making, preventing foreign interference, and promoting transparency, to protecting national security. By tracking and educating the public on the prevalence of inauthentic accounts, we slowly move closer to creating a digital environment that fosters informed, constructive, and authentic discourse.
You can check out part of the Cyabra report here.
Al Gore's take on American democracy, climate action, and "artificial insanity"
Listen: In this episode of the GZERO World podcast, Ian Bremmer sits down with former US Vice President Al Gore on the sidelines of Davos in Switzerland. Gore, no stranger to contested elections, shares his perspective on the current landscape of American politics and, naturally, on his renowned work on climate action.
While the mainstage discussions at the World Economic Forum throughout the week delved into topics such as artificial intelligence, conflicts in Ukraine and the Middle East, and climate change, behind the scenes, much of the discourse was centered on profound concerns about the upcoming 2024 US election and the state of American democracy. The US presidential election presents substantial risks, particularly with Donald Trump on the path to securing the GOP nomination.
Subscribe to the GZERO World Podcast on Apple Podcasts, Spotify, Stitcher, or your preferred podcast platform to receive new episodes as soon as they're published.
Azeem Azhar explores the future of AI
AI was all the rage at Davos this year – and for good reason. As we’ve covered in our weekly GZERO AI newsletter, artificial intelligence is impacting everything from regulatory debates and legal norms to climate change, disinformation, and identity theft. GZERO Media caught up with Azeem Azhar – founder of Exponential View, author and analyst, and a GZERO AI guest columnist – for his insights on the many issues facing the industry.
GZERO: Whether The New York Times’ copyright lawsuit against OpenAI is settled or decided for or against OpenAI, do you think large language models are less feasible in the long term?
Azeem Azhar: Copyright has always been a compromise. The compromise has been over how many rights should be afforded to creators – and ultimately, of course, what that really means is the big publishers who accumulate those rights and have the legal teams.
And harm is being done to research, the free exchange of knowledge, and cultural expression by creating these enclosures around our intellectual space. This compromise, which worked reasonably well perhaps 100 years ago, doesn't really work that well right now.
And now we have to say, “Well, we've got this new technology that could provide incredibly widespread human welfare benefits, and when copyright was first imagined, those were not the fundamental axioms of the world.”
GZERO: Can you give me an example of something that could be attained by reforming copyright laws?
Azhar: Take Zambia. Zambia doesn't have very many doctors per capita. And because they don't have many doctors, they can't train many doctors. So you could imagine a situation where you can have widespread personalized AI tutoring to improve primary, secondary, and tertiary educational outcomes for billions of people.
And those will use large language models dependent on a vast variety of material that will fall under the sort of traditional frame of copyright.
GZERO: AI is great at finding places to be more efficient. Do you think there's a future in which AI is used to decrease the world's net per capita energy consumption?
Azhar: No, we won't decrease energy consumption, because energy is health, energy is prosperity, and energy is welfare. Over the next 30 years, energy use will grow more, and at a higher rate, than it has over the last 30 – and at the same time, we will entirely decarbonize our economy.
Effectively, you cannot find any countries that don't use lots of energy that you would want to live in and that are safe and have good human outcomes.
But how can AI help? Well, look at an example from DeepMind. DeepMind released this thing called GNoME at the end of last year, which helps identify thermodynamically stable materials.
And DeepMind’s system delivered 60 years’ worth of stable, producible materials, with their physical properties, in just one shot. Now that's really important because a lot of the climate transition and the materiality question is about how we produce all the stuff for your iPods and your door frames and your water pipes in ways that are thermodynamically more efficient. That's going to require new materials, and AI can absolutely help us do that.
GZERO: In 2024, we are facing over four dozen national-level elections in a completely changed disinformation environment. Are you more bullish or bearish on how governments might handle the challenge of AI-driven disinformation?
Azhar: It does take time for bad actors to actually make use of these technologies, so I don't think deepfake video will play a significant role this year – it's just a little bit too soon.
But distribution of disinformation, particularly through social media, matters a great deal and so too do the capacities and the behaviors of the media entities and the political class.
If you remember, in Gaza there was an explosion at a hospital, and one of the newswires reported within a few minutes that 500 people had been killed. There's no way that within a few minutes one can count 500 bodies. But other organizations, ones that are normally quite reputable, then picked it up.
That wasn't AI-driven disinformation. The trouble is the lie travels halfway around the world before the truth gets its trousers on. Do media companies need to set up a verification unit as the goalkeeper? Or do you embed the defense of truth, veracity, and factuality throughout the culture of the organization?
GZERO: You made me think of an app that's become very popular in Taiwan over the last few months called Auntie Meiyu, which allows you to take a big group chat, maybe a family chat for example, and then you add Auntie Meiyu as a chatbot. And when Grandpa sends some crazy article, Auntie Meiyu jumps in and says, “Hey, this is BS and here’s why.”
She’s not preventing you from reading it. She's just giving you some additional information, and it's coming from a third party, so no family member has to take the blame for making Grandpa feel foolish.
Azhar: That is absolutely brilliant because, when you look back at the data from the US 2016 election, it wasn't the Instagram, TikTok, YouTube teens who were likely to be core spreaders of political misinformation. It was the over-60s, and I can testify to that with some of my experience with my extended family as well.
GZERO: As individuals are thinking about risks that AI might pose to them – elderly relatives being scammed or someone generating fake nude images of real people – is there anything an individual can do to protect themselves from some of the risks that AI might pose to their reputation or their finances?
Azhar: Wow, that's a really hard question. Have really nice friends.
I am much more careful now than I was five years ago, and I'm still vulnerable. When I have to make transactions and payments, I will always confirm by making my own outbound call to a number that I can verify through a couple of other sources.
I very rarely click on links that are sent to me. I try to double-check when things come in, but this is, to be honest, just classic infosec hygiene that everyone should have.
With my elderly relatives, the general rule is you don't do anything with your bank account ever unless you've got one of your kids with you. Because we’ve found ourselves, all of us, in the digital equivalent of that Daniel Day-Lewis film “Gangs of New York,” where there are a lot of hoodlums running around.
AI has entered the race to primary Joe Biden
For a brief moment this week, there were two Dean Phillipses – the man and the bot. The human is a congressman from Minnesota who’s running for the Democratic nomination for president, hoping to rise above his measly 7% poll numbers and displace sitting President Joe Biden as the party’s nominee.
But there was also an AI chatbot version of the 55-year-old congressman.
A political action committee that has raised millions from donors like billionaire hedge fund manager Bill Ackman to finance Phillips’ longshot presidential bid released an AI chatbot called Dean.Bot last week. It only lasted a few days.
The bot, which disclosed it was artificial intelligence, mimicked Phillips, letting voters converse with it like it was the real congressman.
The 2024 presidential election has seen AI-generated videos and advertisements, but nothing in the way of a candidate stand-in — until now. And for good reason: OpenAI, the company with the most popular chatbot, ChatGPT, doesn’t allow developers to adapt its software for political campaigning.
OpenAI took action against Dean.Bot, which was built on ChatGPT's platform. The company shut down the bot and suspended its developer's access on Friday, saying the bot violated its terms of use. Funnily enough, the PAC behind the bot is run by an early OpenAI employee.
There are no current federal regulations prohibiting the use of AI in political campaigning, though legislation has been introduced intended to curb the politically deceptive use of AI, and the Federal Election Commission has sought public comment on the same issue.
Phillips the man, meanwhile, has had to resort to campaigning in the flesh in New Hampshire ahead of today’s primary since his AI doppelganger is nowhere to be found.
Graphic Truth: Davos doomsdayers
The World Economic Forum asked 1,490 experts from the worlds of academia, business, and government, as well as the international community and civil society, to assess the evolving global risk landscape.
These experts hailed from 113 countries, and the results show a deteriorating global outlook: the share of respondents who see global catastrophic risks looming jumps from 3% when looking ahead two years to 17% when looking ahead 10.
But after a year of lethal conflicts from Gaza and Ukraine to Sudan, record-breaking heat with both droughts and wildfires, and rising polarization, can you blame them for being worried?
Podcast: Talking AI: Sociologist Zeynep Tufekci explains what's missing in the conversation
Listen: In this edition of the GZERO World podcast, Ian Bremmer speaks with sociologist and all-around-brilliant person Zeynep Tufekci. Tufekci has been prescient on a number of issues, from Covid to online misinformation. Ian caught up with her outside, on the sidelines of the Paris Peace Forum, so pardon the traffic noise. They discuss what people are missing when they talk about artificial intelligence today. Listen to find out why her answer surprised Ian, even though it seems so obvious in retrospect.
Subscribe to the GZERO World Podcast on Apple Podcasts, Spotify, Stitcher, or your preferred podcast platform to receive new episodes as soon as they're published.
Can watermarks stop AI deception?
Is it a real or AI-generated photo? Is it Drake’s voice or a computerized track? Was the essay written by a student or by ChatGPT? In the age of AI, provenance is paramount – a fancy way of saying we need to know where the media we consume comes from.
While generative AI promises to transform industries – from health care to entertainment to finance, just to name a few – it might also cast doubt on the origins of everything we see online. Experts have spent years warning that AI-generated media could disrupt elections and cause social unrest, so the stakes couldn’t be higher.
To counter this threat, lawmakers have proposed mandatory disclosures for political advertising using AI, and companies like Google and Meta, the parent company of Facebook and Instagram, are already requiring this. But bad actors won’t be deterred by demands for disclosures. So wouldn’t it be helpful if we had a way to instantly debunk and decipher what’s made by AI and what’s not?
Some experts say “watermarks” are the answer. A traditional watermark is a visible imprint – like what you see on a Getty image when you haven’t paid for it – or the inclusion of a corner logo. Today, these are used to deter theft rather than deception.
But most watermark proposals for AI-generated media center on invisible ones. These are functionally bits of data that tell third-party software that an image, video, audio clip, or even a passage of text was generated with AI. Invisible watermarks let the audience see art without it being visually altered or ruined – but, if there’s any confusion, the consumer of that media can, in theory, run it through a computer program to see whether it was human-made or not.
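To make the mechanics concrete, here is a toy sketch in Python of the simplest possible invisible watermark: hiding a known bit pattern in the least significant bit of every pixel. This is an illustration of the general idea under our own assumptions – not how SynthID, Meta’s system, or any production scheme actually works – and the 8-bit tag is arbitrary.

```python
# Toy invisible watermark: hide a known bit pattern in each pixel's
# least significant bit (LSB). Illustrative only; real schemes are
# far more sophisticated and robust.
import numpy as np

# An arbitrary 8-bit tag standing in for a real watermark payload.
WATERMARK = np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=np.uint8)

def embed(img: np.ndarray) -> np.ndarray:
    """Write the tag, tiled, into the lowest bit of every pixel."""
    flat = img.flatten()
    bits = np.resize(WATERMARK, flat.size)   # repeat the tag across the image
    return ((flat & 0xFE) | bits).reshape(img.shape)

def detect(img: np.ndarray) -> float:
    """Fraction of pixel LSBs matching the tag: ~1.0 marked, ~0.5 unmarked."""
    flat = img.flatten()
    bits = np.resize(WATERMARK, flat.size)
    return float(np.mean((flat & 1) == bits))

image = np.random.randint(0, 256, (64, 64), dtype=np.uint8)  # stand-in "photo"
marked = embed(image)
print(detect(marked))  # 1.0: tag fully present
print(detect(image))   # ~0.5: chance level on the unmarked original
```

Because only the lowest-order bit of each pixel changes, the marked image is visually indistinguishable from the original, yet a detector that knows the tag can confirm it instantly. The weakness, as becomes clear below, is that those low-order bits are exactly what ordinary image edits destroy.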
Joe Biden’s administration is curious about watermarks. In his October executive order, the US president told the Commerce Department to “develop guidance for content authentication and watermarking to clearly label AI-generated content.” The goal: To protect Americans from “fraud and deception.”
It’s an effort many private companies are already working on — but solving the watermark issue has involved a lot of trial and error.
In August, Google released SynthID, a new method for embedding a watermark in the pixels of an image that’s perceptible to machine detectors but not the human eye. Still, it warns that SynthID isn’t “foolproof to extreme” methods of image manipulation. And last week, Meta announced it’s adding invisible watermarks to its text-to-image generator, promising that it’s “resilient to common image manipulations like cropping, color change (brightness, contrast, etc.), screen shots and more.”
There are more creative, cross-industry solutions too. In October, Adobe developed a special icon that can be added to an image’s metadata that both indicates who made it and how. Adobe told The Verge that it wants the icon to serve as a “nutrition label” for AI-generated images. But just like nutrition labels on food, the reality is no one can punish you for ignoring them.
And there are daunting challenges to actually making watermarks work.
Adam Conner, the tech policy lead at the Center for American Progress, said that watermarks need to transcend file format changes. “Even the best plans for watermarking will need to solve for the issue … where content is distributed as a normal file type, like a JPEG or MP3,” he said. In other words, the watermarks need to carry over from where they’re generated — say, an image downloaded on DALL-E — to wherever they are copied or converted into various file formats.
Meanwhile, researchers have poked holes in the latest and greatest watermarking tech. Researchers at Carnegie Mellon, for example, published a method for destroying watermarks by adding “noise” (basically, useless data) to an image and then reconstructing it. “All invisible watermarks are vulnerable to the proposed attack,” they wrote in July.
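A toy version of that attack, run against the LSB sketch above, shows why the researchers are confident. It reuses the marked image and the detect() function from the earlier snippet; the noise level and the box blur are arbitrary choices standing in for the paper’s reconstruction step. The picture barely changes, but the low-order bits carrying the tag are scrambled.

```python
# Noise-and-reconstruct attack on the LSB watermark sketched earlier.
# Assumes `marked` and `detect` from the previous snippet are in scope.
import numpy as np

rng = np.random.default_rng(0)
noise = rng.normal(0, 4, marked.shape)             # mild Gaussian noise
noisy = marked.astype(np.int16) + noise.astype(np.int16)

# Crude "reconstruction": a 2x2 box blur, clipped back to valid pixel values.
blurred = (noisy + np.roll(noisy, 1, 0) + np.roll(noisy, 1, 1)
           + np.roll(np.roll(noisy, 1, 0), 1, 1)) // 4
attacked = np.clip(blurred, 0, 255).astype(np.uint8)

print(detect(attacked))  # ~0.5: back to chance level, the tag is erased
```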
Others think that watermarking efforts might just be a fool’s errand. “I don’t believe watermarking the output of the generative models will be a practical solution,” University of Maryland computer science professor Soheil Feizi told The Verge. “This problem is theoretically impossible to be solved reliably.”
But there is clear political will to get watermarks working. Apart from Biden’s call, the G-7 nations are reportedly planning to ask private companies to develop watermarking technology so AI media is detectable. China banned AI-generated media without watermarks a year ago. Europe has pushed for AI watermarking, too, but it’s unclear if it’ll make it into the final text of its AI Act, the scope of which lawmakers agreed to last week.
But the elephant in the room remains: If Feizi is right, then watermarking AI will simply … miss the mark.
Please write in and tell us what you think – are watermarks on AI-generated images a good idea? Should they be legally required? Write to us here.