Are US elections safe? Chris Krebs is optimistic
The debate around the US banning TikTok is a proxy for a larger question: How safe are democracies from high-tech threats, especially from places like China and Russia?
There are genuine concerns about the integrity of elections. What are the threats out there, and what can be done about them? No one understands this issue better than Chris Krebs. Krebs is best known as the former director of the US Department of Homeland Security’s Cybersecurity and Infrastructure Security Agency (CISA).
In a high-profile showdown, Donald Trump fired Krebs in November 2020, after CISA publicly affirmed that the election was among the “most secure in history” and that the allegations of election corruption were flat-out wrong. Since then, Krebs has become the chief public policy officer at SentinelOne and co-chairs the Aspen Institute’s US Cybersecurity Working Group, and he remains at the forefront of the cyber threat world.
GZERO Publisher Evan Solomon spoke to him this week about what we should expect in this volatile election year.
Solomon: How would you compare the cyber threat landscape now to the election four years ago? Have the rapid advances in AI made a material difference?
Chris Krebs: The general threat environment related to elections tracks against the broader cyber threat environment. The difference here is that beyond just pure technical attacks on election systems, election infrastructure, and on campaigns themselves, we have a parallel threat of information operations and influence operations — what we more broadly call disinformation.
This has picked up almost exponentially since 2016, when the Russians, as detailed in the Intelligence Community Assessment of January 2017, showed that you can get into the middle of domestic elections and pour kerosene on that conversation. That means it jumps into the real world, potentially even culminating in political violence like we saw on Jan. 6.
We saw the Iranians follow that lead in 2020. The intelligence community released another report in December that talked about how the Chinese attempted to influence the 2022 elections. The Russians are active too, through a group we track called Doppelganger, which is specifically targeting the debate around the border and immigration in the US.
Solomon: When you say Doppelganger is “active,” what exactly does that mean in real terms?
Krebs: They use synthetic personas or take over existing personas that have some element of credibility and jump into the online discourse. They also use pink-slime websites, which are basically fake media outlets, whose content gets picked up through social media and then moves over to traditional media. They are taking existing divides and amplifying the discontent.
Solomon: Does it have a material impact on, say, election results?
Krebs: I was at an event back in 2019, and a former governor came up to me as we were talking about prepping for the 2020 election and said: “Hey, everything you just talked about sounds like opposition research, typical electioneering, and hijinks.”
And you know what? That's not totally wrong. But there is a difference.
Rather than just being normal domestic politics, now we have a foreign security service inserting itself and driving discourse domestically. And that's where the intelligence services here in the US, as well as our allies in the West, have tools to go in and disrupt.
They can get onto foreign networks and say, “Hey, I know that account right there. I am able to determine that the account which is pushing this narrative is controlled by the Russian security services, and we can do something with that.”
But here is the key: Once you have a social media influencer here in the US that picks up that narrative and runs with it, well, now, it's effectively fair game. It's part of the conversation, First Amendment protected.
Solomon: Let's move to the other side. What do you do about it without violating the privacy and free speech civil liberties of citizens?
Krebs: This is really the political question of the day. In fact, just last week there was a Supreme Court hearing on Murthy v. Missouri that gets to this question of government and platforms working together. (Editor’s note: The case hinges on whether the government’s efforts to combat misinformation online around elections and COVID constitute a form of censorship). Based on my read, the Supreme Court was largely being dismissive of Missouri and Louisiana's arguments in that case. But we'll see what happens.
I think the bigger issue is that there is this broader conflict, particularly with China, and it is a hot cyber war. Cyber war, in their military doctrine, has a technical leg and a psychological leg. And as we see it, there are a number of different approaches.
For example, India has outlawed and banned hundreds of Chinese-origin apps, including WeChat, TikTok, and a few others. The US has been much more discreet in combating Chinese technology. The recent actions in the US Congress, led by the House of Representatives, are much more focused on getting the foreign-control piece out of the conversation and requiring divestiture.
Solomon: Chris, what’s the biggest cyber threat to the elections?
Krebs: Based on my conversations with law enforcement and the national security community, the number one request that they're getting from election officials isn't on the cyber side. It isn't on the disinformation side. It's on physical threats to election workers. We're talking about doxing, we're talking about swatting, we're talking about people physically intimidating at the polls and at offices. And this is resulting in election officials resigning and quitting and not showing up.
How do we protect those real American heroes who are making sure that we get to follow through on our civic duty of voting and elections? If those election workers aren't there, it's going to be a lot harder for you and me to get out there and vote.
Solomon: What is your biggest concern about AI technology galloping ahead of regulations?
Krebs: Here in the United States, I'm not too worried about regulation getting in front of AI. When you look at the recent AI executive order out of the Biden administration, it's about transparency, and even the threshold it sets for compute power and operations is about four times higher than that of the most advanced publicly available generative AI. And even if you cross that threshold, the most you have to do is tell the government that you're building or training that model and show safety and red-teaming results, which hardly seems onerous to me.
The Europeans are taking a different approach, more of a regulate first, ask questions later, which I think is going to limit some of their ability to truly be at the bleeding edge of AI.
But I'll tell you this: We are using AI and cybersecurity to a much greater effect and impact than the bad guys right now. The best they can do right now is use it for social engineering, for writing better phishing emails, for some research, and for functionality. We are not seeing credible reports of AI being used to write new innovative malware. But in the meantime, we are giving tools that are AI powered to the threat hunters that have really advanced capabilities to go find bad stuff, to improve configurations, and ultimately take the security operations piece and supercharge it.
Midjourney quiets down politics
Everything is political for GZERO, but AI image generator Midjourney would rather avoid the drama. The company has begun blocking the creation of images featuring President Joe Biden and former President Donald Trump in the run-up to the US presidential election in November.
“I don’t really care about political speech,” said Midjourney CEO David Holz in an event with users last week. “That’s not the purpose of Midjourney. It’s not that interesting to me. That said, I also don’t want to spend all of my time trying to police political speech. So we’re going to have to put our foot down on it a bit.”
Holz’s statement comes just weeks after the Center for Countering Digital Hate issued a report showing it was able to use popular AI image generators to create election disinformation in 41% of its attempts. Midjourney performed worst of all the tools the group tested, with researchers able to generate these images 65% of the time.
Examples included images of Joe Biden sick in a hospital bed, Donald Trump in a jail cell, and a box of thrown-out ballots in a dumpster. GZERO tried to generate a simple image of Biden and Trump shaking hands and received an error message: “Sorry! Our AI moderator thinks this prompt is probably against our community standards.”
Midjourney, it seems, simply doesn’t want to be in the business of policing which political speech is acceptable and which isn’t, so it’s taking the easy way out and turning the nozzle off entirely. OpenAI’s tools have long been hesitant to wade into political waters, and Microsoft and Google have drawn stark criticism for sensitivity failures around historical accuracy and offensive imagery. Why would Midjourney take that risk?
AI election safeguards aren’t great
The British nonprofit Center for Countering Digital Hate (CCDH) tested Midjourney, OpenAI's ChatGPT, Stability.ai's DreamStudio, and Microsoft's Image Creator in February, simply typing in different text prompts related to the US elections. The group was able to bypass the tools’ protections a whopping 41% of the time.
Some of the images they created showed Donald Trump being taken away in handcuffs, Trump on a plane with alleged pedophile and human trafficker Jeffrey Epstein, and Joe Biden in a hospital bed.
Generative AI is already playing a tangible role in political campaigns, especially as voters go to the polls for national elections in 64 different countries this year. AI has been used to help a former prime minister get his message out from prison in Pakistan, to turn a hardened defense minister into a cuddly character in Indonesia, and to impersonate US President Biden in New Hampshire. Protections that fail nearly half the time just won’t cut it. With regulation lagging behind the pace of technology, AI companies have made voluntary commitments to prevent the creation and spread of election-related AI media.
“All of these tools are vulnerable to people attempting to generate images that could be used to support claims of a stolen election or could be used to discourage people from going to polling places," CCDH’s Callum Hood told the BBC. “If there is will on the part of the AI companies, they can introduce safeguards that work.”
Tracking anti-Navalny bot armies
In an exclusive investigation into the disinformation surrounding the online reaction to Alexei Navalny's death, GZERO asks whether it is possible to track the birth of a bot army. Was Navalny's death accompanied by a massive online propaganda campaign? We investigated, with the help of a company called Cyabra.
Alexei Navalny knew he was a dead man the moment he returned to Moscow in January 2021. Vladimir Putin had already tried to kill him with the nerve agent Novichok, after which Navalny was sent to Germany for treatment. The poison is one of Putin’s signatures, like pushing opponents out of windows or shooting them in the street. Navalny knew Putin would try again.
Still, he came home.
“If your beliefs are worth something,” Navalny wrote on Facebook, “you must be willing to stand up for them. And if necessary, make some sacrifices.”
He made the ultimate sacrifice on Feb. 16, when Russian authorities announced, with Arctic banality, that he had “died” at the IK-3 penal colony more than 1,200 miles north of Moscow. A frozen gulag. “Convict Navalny A.A. felt unwell after a walk, almost immediately losing consciousness,” they announced as if quoting a passage from Koestler’s “Darkness at Noon.” Later, deploying the pitch-black doublespeak of all dictators, they decided to call it, “sudden death syndrome.”
Worth noting: Navalny was filmed the day before, looking well. There is no body for his wife and two kids to see. No autopsy.
As we wrote this morning, Putin is winning on all fronts. Sensing NATO support for the war in Ukraine is wavering – over to you, US Congress – Putin is acting with confident impunity. His army is gaining ground in Ukraine. He scored a propaganda coup when he toyed with dictator-fanboy Tucker Carlson during his two-hour PR session thinly camouflaged as an “interview.” And just days after Navalny was declared dead, the Russian pilot Maksim Kuzminov, who defected to Ukraine with his helicopter last August, was gunned down in Spain.
And then, of course, there is the disinformation war, another Putin battleground. Navalny’s death got me wondering if there would be an orchestrated disinformation campaign around the event and, if so, whether there was any way to track it. Would there be, say, an online release of shock bot troops to combat Western condemnation of Navalny’s death and blunt the blowback?
It turns out there was.
To investigate, GZERO asked the “social threat information company” Cyabra, which specializes in tracking bots, to look for disinformation surrounding the online reactions to the news about Navalny. The Israeli company says its job is to uncover “threats” on social platforms. It has built AI-driven software to track “attacks such as impersonation, data leakage, and online executive perils as they occur.”
Cyabra’s team focused on the tweets President Joe Biden and Prime Minister Justin Trudeau posted condemning Navalny’s death. Their software analyzed the number of bots that targeted these official accounts. And what they found was fascinating.
According to Cyabra, “29% of the Twitter profiles interacting with Biden’s post about Navalny on X were identified as inauthentic.” For Trudeau, the number was 25%.
So, according to Cyabra, more than a quarter of the reaction you saw on X to these two leaders’ posts about Navalny’s death came from bots, not humans. In other words, a bullshit campaign of misinformation.
This finding raises a lot of questions. What’s the baseline level of bot activity, so we have a good sense of comparison? For example, is 29% bot traffic on Biden’s tweet about Navalny’s death a lot, or is everything on social media flooded with the same amount of crap? How does Cyabra's team actually track bots, and how accurate is its data? Is it missing bots that are well-disguised, or, on the other side, are some humans being labeled as “inauthentic”? In short, what does this really tell us?
In the year of elections, with multiple wars festering and AI galloping ahead of regulation, the battle against disinformation and bots is more consequential than ever. The bot armies of the night are marching. We need to find a torch to see where they are and if there are any tools that can help us separate fact from fiction.
Tracking bot armies is a job that often happens in the shadows, and it comes with a lot of challenges. Can this be done without violating people’s privacy? How hard is this to combat? I spoke with the CEO of Cyabra, Dan Brahmy, to get his view.
Solomon: When Cyabra tracked the reactions to the tweets from President Joe Biden and Prime Minister Trudeau about the “death” of Navalny, you found more than 25% of the accounts were inauthentic. What does this tell us about social media and what people can actually trust is real?
Brahmy: From elections to sporting events to other significant international headline events, social media is often the destination for millions of people to follow the news and share their opinion. Consequently, it is also the venue of choice for malicious actors to manipulate the narrative.
This was also the case when Cyabra looked into President Biden’s and Prime Minister Trudeau’s X posts directly blaming Putin for Navalny’s death. These posts turned out to be the ideal playing ground for narrative-manipulating bots. Inauthentic accounts attacked Biden and Trudeau at scale, blaming them for their foreign and domestic policies while attempting to divert attention from Putin and the negative narrative surrounding him.
The high number of fake accounts detected by Cyabra, together with the speed at which those accounts engaged in the conversation to divert and distract following the announcement of Navalny’s death, shows the capabilities of malicious actors and their intentions to conduct sophisticated influence operations.
Solomon: Can you tell where these are from and who is doing it?
Brahmy: Cyabra monitors publicly available information on social media and does not track IP addresses or any private information. Cyabra collects only the location an account publicly claims. When analyzing the Navalny conversation, Cyabra saw that the majority of the accounts claimed to be located in the US.
Solomon: There is always the benchmark question: How much “bot” traffic or inauthentic traffic do you expect at any time, for any online event? Put the numbers we see here for Trudeau and Biden in perspective.
Brahmy: The average percentage of fake accounts participating in an everyday conversation online typically varies between 4% and 8%. Cyabra’s discovery that 25-29% of the accounts in this conversation were fake is alarming and significant, and it should give us cause for concern.
Solomon: OK, then there is the accuracy question. How do you actually identify a bot, and how do you know, given the sophistication of AI and new bots, that you are not missing a lot of them? Is it easier to find “obvious bots” — i.e., something that tweets every two minutes, 24 hours a day — than, say, a series of bots that look and act very human?
Brahmy: Using advanced AI and machine learning, Cyabra analyzes a profile’s activity and interactions to determine whether it demonstrates non-human behaviors. Cyabra’s proprietary algorithm consists of over 500 behavioral parameters. Some parameters are more intuitive, like the use of multiple languages, while others require in-depth expertise and advanced machine learning. Cyabra’s technology works at scale and in near real time.
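Cyabra’s 500-odd behavioral parameters are proprietary, but the general idea can be sketched. As an editorial aside, here is a deliberately simplified, hypothetical scorer; every feature, threshold, and weight below is invented for illustration and is not Cyabra’s:

```python
# Toy bot-likelihood scorer. Purely illustrative: the features, thresholds,
# and weights are invented; Cyabra's actual model is proprietary.
from dataclasses import dataclass

@dataclass
class Profile:
    posts_per_day: float   # average posting cadence
    account_age_days: int  # days since the account was created
    followers: int
    following: int
    languages_used: int    # distinct languages the account posts in

def bot_likelihood(p: Profile) -> float:
    """Combine a few behavioral red flags into a crude 0-to-1 score."""
    score = 0.0
    if p.posts_per_day > 50:                      # near-constant posting
        score += 0.35
    if p.account_age_days < 30:                   # freshly created account
        score += 0.25
    if p.followers < 0.05 * max(p.following, 1):  # follows many, followed by few
        score += 0.20
    if p.languages_used > 3:                      # implausibly multilingual
        score += 0.20
    return min(score, 1.0)

# A hyperactive two-week-old account scores high; a normal one scores low.
print(bot_likelihood(Profile(80, 12, 15, 900, 4)))    # 1.0
print(bot_likelihood(Profile(3, 2000, 400, 350, 1)))  # 0.0
```

A real system would learn weights like these from labeled data and evaluate hundreds of signals per profile at once, which is what makes operating at scale and in near real time possible.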
Solomon: There is so much disinformation anyway – actual people who lie, mislead, falsify, scam – how much does this matter?
Brahmy: The creation and activities of fake accounts on social media (whether it be a bot, sock puppet, troll, or otherwise) should be treated with the utmost seriousness. Fake accounts are almost exclusively created for nefarious purposes. By identifying inauthentic profiles and then analyzing their behaviors and the false narratives they are spreading, we can understand the intentions of malicious actors and counter them as a society.
While we all understand that the challenge of disinformation is pervasive and a threat to society, being able to conduct the equivalent of an online CT scan reveals the areas that most urgently need our attention.
Solomon: Why does it matter in a big election year?
Brahmy: More than 4 billion people globally are eligible to vote in 2024, with over 50 countries holding elections. That’s 40% of the world’s population. Particularly during an election year, tracking disinformation is important – from protecting the democratic process, ensuring informed decision-making, preventing foreign interference, and promoting transparency, to protecting national security. By tracking and educating the public on the prevalence of inauthentic accounts, we slowly move closer to creating a digital environment that fosters informed, constructive, and authentic discourse.
You can check out part of the Cyabra report here.
Ukraine’s AI battlefield
Saturday marks the two-year anniversary of Russia’s invasion of Ukraine.
Over the course of this bloody war, the Ukrainian defense strategy has grown to a full embrace of cutting-edge artificial intelligence. Ukraine has been described as a “living lab for AI warfare.”
That capability comes largely from the American government but also from American industry. With the help of powerful American tech companies such as Palantir and Clearview AI, Ukraine has deployed AI throughout its military operations. The biggest tech companies have been involved, too; Amazon, Google, Microsoft, and Elon Musk’s Starlink have also provided vital tech to aid Ukraine’s war effort.
Ukraine is using AI to analyze large data sets stemming from satellite imagery, social media, and drone footage, and also to supercharge its geospatial intelligence and electronic warfare efforts. AI-powered facial recognition and other imagery technology has been instrumental in identifying Russian soldiers, collecting evidence of war crimes, and locating land mines.
And increasingly, weapons are also powered by AI. According to a new report from Bloomberg, US and UK leaders are providing AI-powered drones to Ukraine, which would fly in large fleets, coordinating with one another to identify and take out Russian targets. There is no shortage of ethical concerns about the nature of AI-powered warfare, as we have written about in the past, but that hasn’t stymied President Joe Biden’s commitment to beating back Vladimir Putin and defending a strategically crucial ally.
Reports about Russia’s own use of AI in warfare are murkier, though there’s some evidence to suggest they may be using the technology to fuel disinformation campaigns as well as build weaponry. But Ukraine might have an advantage: Recently, Russia’s fancy new AI-powered drone-killing system was reportedly blown up by, of all things, a Ukrainian drone.
Ukraine’s stand against Russia has been called a David and Goliath story, but it’s also a battle evened by technological prowess. It’s a view into the future of warfare, where the full strength of Silicon Valley and the US military-industrial complex meet.
AI has entered the race to primary Joe Biden
For a brief moment this week, there were two Dean Phillips – the man and the bot. The human is a congressman from Minnesota who’s running for the Democratic nomination for president, hoping to rise above his measly 7% poll numbers to displace sitting President Joe Biden as the party’s nominee.
But there was also an AI chatbot version of the 55-year-old congressman.
A political action committee that has raised millions from donors like billionaire hedge fund manager Bill Ackman to finance Phillips’ longshot bid for president released an AI chatbot called Dean.Bot last week. It lasted only a few days.
The bot, which disclosed it was artificial intelligence, mimicked Phillips, letting voters converse with it like it was the real congressman.
The 2024 presidential election has seen AI-generated videos and advertisements, but nothing in the way of a candidate stand-in — until now. And for good reason: OpenAI, the company with the most popular chatbot, ChatGPT, doesn’t allow developers to adapt its software for political campaigning.
OpenAI took action against Dean.Bot, which was built on ChatGPT. The company shut down the bot and suspended its developer’s access on Friday, saying the bot violated its terms of use. Funnily enough, the PAC behind the bot is run by an early OpenAI employee.
There are no current federal regulations prohibiting the use of AI in political campaigning, though legislation has been introduced intended to curb the politically deceptive use of AI, and the Federal Election Commission has sought public comment on the same issue.
Phillips the man, meanwhile, has had to resort to campaigning in the flesh in New Hampshire ahead of today’s primary since his AI doppelganger is nowhere to be found.
Can watermarks stop AI deception?
Is it a real or AI-generated photo? Is it Drake’s voice or a computerized track? Was the essay written by a student or by ChatGPT? In the age of AI, provenance is paramount – a fancy way of saying we need to know where the media we consume comes from.
While generative AI promises to transform industries – from health care to entertainment to finance, just to name a few – it might also cast doubt on the origins of everything we see online. Experts have spent years warning that AI-generated media could disrupt elections and cause social unrest, so the stakes couldn’t be higher.
To counter this threat, lawmakers have proposed mandatory disclosures for political advertising using AI, and companies like Google and Meta, the parent company of Facebook and Instagram, are already requiring this. But bad actors won’t be deterred by demands for disclosures. So wouldn’t it be helpful if we had a way to instantly debunk and decipher what’s made by AI and what’s not?
Some experts say “watermarks” are the answer. A traditional watermark is a visible imprint, like what you see on a Getty image when you haven’t paid for it, or the inclusion of a corner logo. Today, these are used to deter theft rather than deception.
But most watermark proposals for AI-generated media center on invisible ones. These are functionally bits of code that tell third-party software that an image, video, audio clip, or even lines of text were generated with AI. Using invisible watermarks would allow the audience to see art without it being visually altered or ruined — but, if there’s any confusion, the consumer of that media can, in theory, run it through a computer program to see whether it was human-made or not.
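To make the idea concrete, here is a minimal sketch of one classic approach, least-significant-bit (LSB) embedding: the mark shifts each tagged pixel value by at most 1, which the eye can’t see, but a detector that knows the scheme can read it back. Production systems like SynthID are far more robust; this only illustrates the principle.

```python
# Minimal invisible-watermark sketch using least-significant-bit embedding.
# Illustrative only; real schemes hide marks far more robustly than this.
import numpy as np

MARK = np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=np.uint8)  # an 8-bit tag

def embed(img: np.ndarray) -> np.ndarray:
    """Write MARK into the lowest bit of the first 8 pixels (red channel)."""
    out = img.copy()
    flat = out.reshape(-1, out.shape[-1])
    flat[:8, 0] = (flat[:8, 0] & 0xFE) | MARK  # clear the LSB, set mark bit
    return out

def detect(img: np.ndarray) -> bool:
    """Check whether those 8 low bits still spell out MARK."""
    flat = img.reshape(-1, img.shape[-1])
    return bool(np.array_equal(flat[:8, 0] & 1, MARK))

img = np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)
marked = embed(img)
print(detect(marked))                                           # True
print(int(np.abs(marked.astype(int) - img.astype(int)).max()))  # at most 1: invisible
```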
Joe Biden’s administration is curious about watermarks. In his October executive order, the US president told the Commerce Department to “develop guidance for content authentication and watermarking to clearly label AI-generated content.” The goal: To protect Americans from “fraud and deception.”
It’s an effort many private companies are already working on — but solving the watermark issue has involved a lot of trial and error.
In August, Google released SynthID, a new method for embedding a watermark in the pixels of an image that’s perceptible to machine detectors but not the human eye. Still, it warns that SynthID isn’t “foolproof” against extreme methods of image manipulation. And last week, Meta announced it’s adding invisible watermarks to its text-to-image generator, promising that it’s “resilient to common image manipulations like cropping, color change (brightness, contrast, etc.), screen shots and more.”
There are more creative, cross-industry solutions too. In October, Adobe developed a special icon that can be added to an image’s metadata that both indicates who made it and how. Adobe told The Verge that it wants the icon to serve as a “nutrition label” for AI-generated images. But just like nutrition labels on food, the reality is no one can punish you for ignoring them.
And there are daunting challenges to actually making watermarks work.
Adam Conner, the tech policy lead at the Center for American Progress, said that watermarks need to transcend file format changes. “Even the best plans for watermarking will need to solve for the issue … where content is distributed as a normal file type, like a JPEG or MP3,” he said. In other words, the watermarks need to carry over from where they’re generated — say, an image downloaded on DALL-E — to wherever they are copied or converted into various file formats.
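Conner’s point is easy to demonstrate with the Pillow imaging library: a provenance tag stored as PNG metadata reads back fine, then silently vanishes after a routine re-save as JPEG. (The file names and the tag itself are made up for this sketch.)

```python
# How a metadata-only provenance label dies in format conversion (Pillow).
from PIL import Image
from PIL.PngImagePlugin import PngInfo

img = Image.new("RGB", (64, 64), "gray")
meta = PngInfo()
meta.add_text("ai_generated", "true")          # hypothetical provenance tag
img.save("labeled.png", pnginfo=meta)

reopened = Image.open("labeled.png")
print(reopened.text.get("ai_generated"))       # 'true': survived as PNG

reopened.convert("RGB").save("converted.jpg")  # a routine format change...
print(Image.open("converted.jpg").info.get("ai_generated"))  # None: label gone
```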
Meanwhile, researchers have poked holes in the latest and greatest watermarking tech. Researchers at Carnegie Mellon, for example, published a method for destroying watermarks by adding “noise” (basically, useless data) to an image and then reconstructing it. “All invisible watermarks are vulnerable to the proposed attack,” they wrote in July.
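A miniature version of that attack works on the LSB sketch from earlier (it reuses `marked` and `detect` from that code): a little noise plus rounding leaves the picture visually unchanged but scrambles the low bits, and the watermark no longer verifies.

```python
# Noise-and-reconstruct attack on the LSB watermark sketched above.
# Reuses `marked` and `detect` from that example.
import numpy as np

rng = np.random.default_rng(0)
noisy = marked.astype(float) + rng.normal(0, 2, marked.shape)  # inject noise
attacked = np.clip(np.round(noisy), 0, 255).astype(np.uint8)   # "reconstruct"

print(detect(attacked))  # almost certainly False: the mark is destroyed
print(float(np.abs(attacked.astype(int) - marked.astype(int)).mean()))  # ~1.6: imperceptible
```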
Others think that watermarking efforts might just be a fool’s errand. “I don’t believe watermarking the output of the generative models will be a practical solution,” University of Maryland computer science professor Soheil Feizi told The Verge. “This problem is theoretically impossible to be solved reliably.”
But there is clear political will to get watermarks working. Apart from Biden’s call, the G-7 nations are reportedly planning to ask private companies to develop watermarking technology so AI media is detectable. China banned AI-generated media without watermarks a year ago. Europe has pushed for AI watermarking, too, but it’s unclear if it’ll make it into the final text of its AI Act, the scope of which lawmakers agreed to last week.
The main limitation to achieving these goals is the elephant in the room: If Feizi is right, then watermarking AI will simply … miss the mark.
Please write in and tell us what you think – are watermarks on AI-generated images a good idea? Should they be legally required? Write to us here.
Paris Peace Forum Director General Justin Vaïsse: Finding common ground
How do you find peace in a world so riven by rivalries and competing interests? One step, according to Director General of the Paris Peace Forum Justin Vaïsse, is to challenge simplistic notions of East-West or North-South alignment.
“We're trying to get all the different actors, East, West, North, South, to work on the same issues and to make progress where they have common interests,” he said to GZERO’s Tony Maciulis on the sidelines of the 2023 Paris Peace Forum. “We focus on competition and geopolitical rivalry while we forget the iceberg coming our way when we fight on the deck of the Titanic.”
This year, the conversation centers on cyberspace and how to protect democracies in a world where rapidly advancing AI technology makes them ever more vulnerable to disinformation campaigns. The bad news is that 2024 promises to be the worst year on record for election interference. The good news, on the other hand, is that many countries have their own interest in hammering out a workable system for regulating AI, and Vaïsse expects a common language regulating cyberattacks to emerge in the coming months.
At the 2023 Paris Peace Forum, GZERO also hosted a Global Stage event, Live from the Paris Peace Forum: Embracing technology to protect democracy.