Tracking anti-Navalny bot armies
In an exclusive investigation into the disinformation surrounding online reaction to Alexei Navalny's death, GZERO asks whether it is possible to track the birth of a bot army. Was Navalny's tragic death accompanied by a massive online propaganda campaign? We investigated, with the help of a company called Cyabra.
Alexei Navalny knew he was a dead man the moment he returned to Moscow in January 2021. Vladimir Putin had already tried to kill him with the nerve agent Novichok, and he was sent to Germany for treatment. The poison is one of Putin’s signatures, like pushing opponents out of windows or shooting them in the street. Navalny knew Putin would try again.
Still, he came home.
“If your beliefs are worth something,” Navalny wrote on Facebook, “you must be willing to stand up for them. And if necessary, make some sacrifices.”
He made the ultimate sacrifice on Feb. 16, when Russian authorities announced, with Arctic banality, that he had “died” at the IK-3 penal colony more than 1,200 miles north of Moscow. A frozen gulag. “Convict Navalny A.A. felt unwell after a walk, almost immediately losing consciousness,” they announced as if quoting a passage from Koestler’s “Darkness at Noon.” Later, deploying the pitch-black doublespeak of all dictators, they decided to call it, “sudden death syndrome.”
Worth noting: Navalny was filmed the day before, looking well. There is no body for his wife and two kids to see. No autopsy.
As we wrote this morning, Putin is winning on all fronts. Sensing NATO support for the war in Ukraine is wavering – over to you, US Congress – Putin is acting with confident impunity. His army is gaining ground in Ukraine. He scored a propaganda coup when he toyed with dictator-fanboy Tucker Carlson during his two-hour PR session thinly camouflaged as an “interview.” And just days after Navalny was declared dead, the Russian pilot Maksim Kuzminov, who defected to Ukraine with his helicopter last August, was gunned down in Spain.
And then, of course, there is the disinformation war, another Putin battleground. Navalny’s death got me wondering if there would be an orchestrated disinformation campaign around the event, and if so, whether there was any way to track it. Would there be, say, an online release of shock bot troops to combat Western condemnation of Navalny’s death and blunt the blowback?
It turns out there was.
To investigate, GZERO asked the “social threat information company” Cyabra, which specializes in tracking bots, to look for disinformation surrounding the online reactions to the news about Navalny. The Israeli company says its job is to uncover “threats” on social platforms. It has built AI-driven software to track “attacks such as impersonation, data leakage, and online executive perils as they occur.”
Cyabra’s team focused on the tweets President Joe Biden and Prime Minister Justin Trudeau posted condemning Navalny’s death. Their software analyzed the number of bots that targeted these official accounts. And what they found was fascinating.
According to Cyabra, “29% of the Twitter profiles interacting with Biden’s post about Navalny on X were identified as inauthentic.” For Trudeau, the number was 25%.
Courtesy of Cyabra
So, according to Cyabra, more than a quarter of the reaction you saw on X related to Navalny’s death and these two leaders’ reactions came from bots, not humans. In other words, a bullshit campaign of misinformation.
This finding raises a lot of questions. What’s the baseline of corruption to get a good sense of comparison? For example, is 29% bot traffic on Biden’s tweet about Navalny’s death a lot, or is everything on social media flooded with the same amount of crap? How does Cyabra's team actually track bots, and how accurate is their data? Are they missing bots that are well-disguised, or, on the other side, are some humans being labeled as “inauthentic”? In short, what does this really tell us?
In the year of elections, with multiple wars festering and AI galloping ahead of regulation, the battle against disinformation and bots is more consequential than ever. The bot armies of the night are marching. We need to find a torch to see where they are and if there are any tools that can help us separate fact from fiction.
Tracking bot armies is a job that often happens in the shadows, and it comes with a lot of challenges. Can this be done without violating people’s privacy? How hard is this to combat? I spoke with the CEO of Cyabra, Dan Brahmy, to get his view.
Solomon: When Cyabra tracked the reactions to the tweets from President Joe Biden and Prime Minister Trudeau about the “death” of Navalny, you found more than 25% of the accounts were inauthentic. What does this tell us about social media and what people can actually trust is real?
Brahmy: From elections to sporting events to other significant international headline events, social media is often the destination for millions of people to follow the news and share their opinion. Consequently, it is also the venue of choice for malicious actors to manipulate the narrative.
This was also the case when Cyabra looked into President Biden and Prime Minister Trudeau’s X posts directly blaming Putin for Navalny’s death. These posts turned out to be the ideal playing ground for narrative-manipulating bots. Inauthentic accounts attacked Biden and Trudeau on a large scale, blaming them for their foreign and domestic policies while attempting to divert attention from Putin and the negative narrative surrounding him.
The high number of fake accounts detected by Cyabra, together with the speed at which those accounts engaged in the conversation to divert and distract following the announcement of Navalny’s death, shows the capabilities of malicious actors and their intentions to conduct sophisticated influence operations.
Solomon: Can you tell where these are from and who is doing it?
Brahmy: Cyabra monitors publicly available information on social media and does not track IP addresses or any private information. Cyabra collects only the publicly shared location of an account. When analyzing the Navalny conversation, Cyabra saw that the majority of the accounts claimed to be located in the US.
Solomon: There is always the benchmark question: How much “bot” traffic or inauthentic traffic do you expect at any time, for any online event? Put the numbers we see here for Trudeau and Biden in perspective.
Brahmy: The average percentage of fake accounts participating in an everyday conversation online typically varies between 4 and 8%. Cyabra’s discovery of 25-29% fake accounts related to this conversation is alarming, significant, and should give us cause for concern.
Solomon: OK, then there is the accuracy question. How do you actually identify a bot, and how do you know, given the sophistication of AI and new bots, that you are not missing a lot of them? Is it easier to find “obvious bots” – i.e., something that tweets every two minutes, 24 hours a day – than, say, a series of bots that look and act very human?
Brahmy: Using advanced AI and machine learning, Cyabra analyzes a profile’s activity and interactions to determine if it demonstrates non-human behaviors. Cyabra’s proprietary algorithm consists of over 500 behavioral parameters. Some parameters are more intuitive, like the use of multiple languages, while others require in-depth expertise and advanced machine learning. Cyabra’s technology works at scale and in almost real-time.
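Cyabra’s 500-parameter algorithm is proprietary, so as a purely illustrative toy – not Cyabra’s method – a rule-based scorer over a few behavioral signals might look like this. Every feature name and threshold below is invented for the example:

```python
from dataclasses import dataclass

@dataclass
class Profile:
    """A handful of hypothetical behavioral signals for one account."""
    posts_per_day: float
    account_age_days: int
    followers: int
    following: int
    languages_used: int  # distinct languages the account posts in

def bot_score(p: Profile) -> float:
    """Return a 0-1 suspicion score; thresholds are illustrative guesses."""
    score = 0.0
    if p.posts_per_day > 50:                 # inhumanly high posting rate
        score += 0.4
    if p.account_age_days < 30:              # freshly created account
        score += 0.2
    if p.following > 0 and p.followers / p.following < 0.01:
        score += 0.2                         # follows many, followed by few
    if p.languages_used >= 3:                # posts in many languages
        score += 0.2
    return min(score, 1.0)

suspect = Profile(posts_per_day=120, account_age_days=5,
                  followers=3, following=2000, languages_used=4)
print(bot_score(suspect))  # 1.0
```

A real system like the one Brahmy describes would replace these hand-picked rules with machine-learned weights over hundreds of such parameters, but the underlying idea – converting behavioral signals into an authenticity score – is the same.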
Solomon: There is so much disinformation anyway – actual people who lie, mislead, falsify, scam – how much does this matter?
Brahmy: The creation and activities of fake accounts on social media (whether it be a bot, sock puppet, troll, or otherwise) should be treated with the utmost seriousness. Fake accounts are almost exclusively created for nefarious purposes. By identifying inauthentic profiles and then analyzing their behaviors and the false narratives they are spreading, we can understand the intentions of malicious actors and remedy them as a society.
While we all understand that the challenge of disinformation is pervasive and a threat to society, being able to conduct the equivalent of an online CT scan reveals the areas that most urgently need our attention.
Solomon: Why does it matter in a big election year?
Brahmy: More than 4 billion people globally are eligible to vote in 2024, with over 50 countries holding elections. That’s 40% of the world’s population. Particularly during an election year, tracking disinformation is important – from protecting the democratic process, ensuring informed decision-making, preventing foreign interference, and promoting transparency, to protecting national security. By tracking and educating the public on the prevalence of inauthentic accounts, we slowly move closer to creating a digital environment that fosters informed, constructive, and authentic discourse.
You can check out part of the Cyabra report here.
- Understanding Navalny’s legacy inside Russia ›
- Navalny’s widow continues his fight for freedom ›
- “A film is a weapon on time delay” — an interview with “Navalny” director Daniel Roher ›
- Navalny's death is a huge loss for democracy - NATO's Mircea Geona ›
- Alexei Navalny's death: A deep tragedy for Russia ›
- Navalny's death is a message to the West ›
- Navalny’s death: Five things to know ›
Ian Bremmer: Algorithms are now shaping human beings' behavior
Everyone is a product of their environment. But where once the influences on young people were largely shaped by their physical community, algorithmic content online has opened a new and dangerous pathway to radicalization and violence, says Eurasia Group President Ian Bremmer in a recent Global Stage livestream, from the sidelines of the 78th UN General Assembly.
That’s why the Christchurch Call’s work has resonated. The organization, founded by former New Zealand Prime Minister Jacinda Ardern in the wake of a heinous livestreamed mass shooting in the eponymous city, addresses a pressing need that was not handled before it had cost too many lives: online radicalization.
Watch the full Global Stage Livestream conversation here: Hearing the Christchurch Call
What is a technopolar world?
Who runs the world? In a series of videos about artificial intelligence, Ian Bremmer, founder and president of GZERO Media and Eurasia Group, introduces the concept of a technopolar world – one where technology companies wield unprecedented influence on the global stage, and where sovereignty and influence are determined not by physical territory or military might, but by control over data, servers, and, crucially, algorithms.
We aren’t yet in a fully technopolar world, but we do exist in a digital order where major tech companies hold sway over standards, operations, interactions, security and economics in the virtual realm. And Bremmer says this is just the beginning. He highlights two key advantages that technology companies have: their dominance over the digital space, which profoundly impacts the lives of billions of people every day, as well as their role in providing critical digital infrastructure required to run a modern economy and society.
As artificial intelligence and other transformative technologies advance, and more and more of our daily life shifts online, Bremmer predicts a shift in power dynamics, where tech companies extend their reach beyond the digital sphere into economics, politics, and even national security. This will almost certainly challenge traditional ideas about global power, which may be determined as much by competition between nation states and tech companies as it is, say, between the US and China. Incorporating tech firms into governance models may be necessary to effectively navigate the complexity of a technopolar world, Bremmer argues. Ultimately, how these companies choose to wield power and their interactions with governments will shape the trajectory of our economic, social, and political futures.
See more of GZERO Media's coverage on artificial intelligence and geopolitics.
TikTok, Huawei, and the US-China tech arms race
“When the Chinese get good at something, all of the sudden, the United States says, ‘This is a national security risk.’”
That’s what Shaun Rein, founder and managing director of the China Market Research Group, argued on GZERO World with Ian Bremmer while discussing the increasingly hostile geopolitical environment between the two superpowers.
What We’re Ignoring: Revenge of the nerds
There’s growing evidence that the much-ballyhooed mixed martial arts battle between X-Man Elon Musk and Meta CEO Mark Zuckerberg may actually take place.
Musk first posted that he would be up for a cage match against Zuckerberg in June. Since then, the two moguls have traded multiple barbs on the topic. Now Zuckerberg, who trains in jiu jitsu, has shared a screenshot of a conversation with his wife Priscilla Chan in which he crows about installing a training cage in their backyard. (Her response: “I have been working on that grass for two years.”)
Not to be outdone, Musk posted to X that he is preparing for the fight by “lifting weights throughout the day,” and that the "Zuck v Musk fight will be live-streamed on X. All proceeds will go to charity for veterans.”
Zuckerberg says he is "not holding his breath" because he offered a date of Aug. 26 but didn't hear back. No word yet on whether Threads will attempt a rival broadcast. Stay tuned. Or don’t.
ChatGPT and the 2024 US election
2024 will be the first US presidential election in the age of generative AI. How worried should we be about the spread of misinformation and its implications for democracy?
In 2016, social media platforms became Petri dishes of disinformation as foreign actors and far-right activists spread fake stories and worked to heighten partisan divisions. The 2020 election was fraught with conspiracy theories and baseless claims about voter fraud.
As 2024 approaches, tech and media experts warn that new generative AI tools like ChatGPT and Midjourney have the potential to spread misinformation and disinformation faster and easier than ever before. And this comes as newsrooms are experiencing mass layoffs and trusted systems like Twitter’s verification process become further eroded.
On GZERO World with Ian Bremmer, media experts Brian Stelter and Nicole Hemmer say the stakes are incredibly high for truth and democracy.
“I think AI is going to make it easier to have a lot more information pollution in the atmosphere,” Stelter warns.
But Hemmer says there may be a light at the end of the tunnel. “I think that people don’t want to be post-truth,” she argues, “So maybe that’s where we’ll see those green shoots as people innovate ways to make it easier to navigate a world that’s awash in this kind of disinformation.”
Watch this episode of GZERO World with Ian Bremmer: "Politics, trust & the media in the age of misinformation"
Watch GZERO World with Ian Bremmer at gzeromedia.com/gzeroworld or on US public television. Check local listings.
- How AI will roil politics even if it creates more jobs ›
- Be very scared of AI + social media in politics ›
- Be more worried about artificial intelligence ›
- Can we trust AI to tell the truth? - GZERO Media ›
- Is AI's "intelligence" an illusion? - GZERO Media ›
- Is Biden's embrace of Israel a political liability for him? - GZERO Media ›
- How AI threatens elections - GZERO Media ›
- AI in 2024: Will democracy be disrupted? - GZERO Media ›
NATO membership for Ukraine?
Ian Bremmer shares his insights on global politics this week on World In :60.
Sweden will join NATO. Is Ukraine next?
Well, sure, but next doesn't mean tomorrow. Next means like at some indeterminate point, which makes President Zelensky pretty unhappy and he's made that clear, but he has massive amounts of support from NATO right now, and he needs that support to continue. So, it's not like he has a lot of leverage on joining NATO. As long as the Americans are saying it's not going to happen, that means it's not going to happen. No, the real issue is how much and how concrete the multilateral security guarantees that can be provided by NATO to Ukraine actually turn out to be. We will be watching that space.
Is Taiwan readying itself for an invasion by conducting its biggest evacuation drills in years?
I wouldn't say readying for an invasion. I would say, you know, sort of preparing for every contingency, and that means taking care of your people. I mean, the Americans weren't readying themselves for nuclear Armageddon by doing drills in classrooms and by, you know, having bomb shelters, but they had them because we were in a world where nuclear war was thinkable. Well, we're in a world where Chinese, mainland Chinese invasion of Taiwan is very unlikely, but thinkable. And of course, the Taiwanese have to think about it a lot more than you and I do.
Elon vs. Zuck. Thoughts?
Well, my thoughts are mostly about the battle of the social media platforms and the fact that of course you now have the big gorilla in the room with a Twitter competitor. And I've seen it pretty functional for the first several days. Obviously, massive numbers of people are on it, mostly because it's really easy to sign up. They're all coming over from Instagram and it's owned by the same person, by the same shareholders. Unclear to me who's going to win. If I had to bet, I would say that within 6 or 12 months, we're going to have a fragmented social media landscape politically, the way we do blogosphere or cable news, which is, I guess, good for consumer choice, but it's bad for civil society. What else is new?
Threads, Twitter, & the 2024 US election
Jon Lieber, head of Eurasia Group's coverage of political and policy developments in Washington, DC shares his perspective on US politics.
Hi, I'm Jon Lieber, and this is US Politics in (a little over) 60 Seconds.
Meta last week announced the launch of Threads, a direct competitor to Twitter that reportedly has already reached a hundred million signups, a huge number in just a week. This long-awaited move by one of the kings of social media could dramatically alter the media environment heading into the 2024 election.
Twitter is enormously popular and important in the political and media world in the US, but has increasingly become a source of consternation and stress for highly engaged political users, particularly those on the left, after the takeover of the platform by Elon Musk, who has pursued what has looked at times like a bizarre and at least partially ideological strategy to upend Twitter's content moderation rules, and in his personal feed, highlighted tweets that troll liberals and promote conspiracy theories. Other competitors to Twitter, like Mastodon or Bluesky, have not achieved mass reach necessary to pose a serious threat to Twitter's dominance of the online media ecosystem, while others like Truth Social remain niche corners of the Internet.
Other outlets like Telegram have grown in importance, but do not provide the open platform of the more dominant social media apps. All of these trends point to the increased atomization of the media landscape globally. In the last 50 years, the US has moved from three dominant national broadcast news networks to a patchwork of increasingly fragmented social media sites with very little gatekeeping and strong, and in some cases partisan, ideological communities.
The launch of a viable competitor to Twitter will accelerate this trend. Meta's content moderation will build off what is learned from managing Instagram and Facebook. This could make it more than just a convening site for people interested in talking about sports and politics, and instead give it a unique appeal for political liberals in the US who don't like where Twitter is going.
That's not to say that conservatives won't be found there too. Even at the height of their concerns about Twitter censoring conservative speech, major conservative figures and writers did quite well on the platform, expanding their reach even as they said they were being stifled. A more fractured online information environment will be even more difficult to moderate than a unified one. It provides more avenues for echo chambers, allows politicians to more aggressively micro-target their messages, and could make it much more difficult to restrict the spread of disinformation in the 2024 election, especially if Twitter and Threads become the domains of the political right and left, respectively, and if their corporate owners pursue different content moderation policies.
We'd also expect campaigns to start taking advantage of this fractured media landscape, as they have already, targeting different messages to the different audiences on their different channels, making it much more difficult to see what's actually happening on these campaigns as their messages go to increasingly smaller corners of the Internet.
Thanks for watching. This has been US Politics in (a little over) 60 Seconds.