AI and Canada's proposed Online Harms Act
In this episode of GZERO AI, Taylor Owen, professor at the Max Bell School of Public Policy at McGill University and director of its Centre for Media, Technology & Democracy, takes a look at the Canadian government’s Online Harms Act, which seeks to hold social media companies responsible for harmful content – often generated by artificial intelligence.
So last week, the Canadian government tabled its long-awaited Online Harms legislation. Similar to the Digital Services Act in the EU, this is a big, sweeping piece of legislation, so I won't get into all the details. But essentially what it does is put the onus on social media companies to minimize the risk of their products. And in so doing, this bill actually provides a window into how we might start to regulate AI.
It does this in two ways. First, the bill requires platforms to minimize the risk of exposure to seven types of harmful content, including self-harm content directed at kids or posts that incite hatred or violence. The key here is that the obligation is on social media platforms, like Facebook or Instagram or TikTok, to minimize the risk of their products, not to take down every piece of bad content. The concern is not with each individual piece of content, but with the way that social media products, and particularly their algorithms, might amplify it or help target its distribution. And these products are very often driven by AI.
Second, one area where the proposed law does mandate a takedown of content is when it comes to intimate image abuse, and that includes deepfakes or content that's created by AI. If an intimate image is flagged as non-consensual, even if it's created by AI, it needs to be taken down within 24 hours by the platform. Even in a vacuum, AI-generated deepfake pornography or revenge porn is deeply problematic. But what's really worrying is when these things are shared and amplified online. And to get at that element of this problem, we don't actually need to regulate the creation of these deepfakes, we need to regulate the social media that distributes them.
So countries around the world are struggling with how to regulate something as opaque and unknown as the existential risk of AI, but maybe that's the wrong approach. Instead of trying to govern this largely undefined risk, maybe we should be watching for countries like Canada who are starting with the harms we already know about.
Instead of broad sweeping legislation for AI, we might want to start with regulating the older technologies, like social media platforms that facilitate many of the harms that AI creates.
I'm Taylor Owen and thanks for watching.
Voters beware: Elections and the looming threat of deepfakes
With AI tools already being used to manipulate voters across the globe via deepfakes, more needs to be done to help people comprehend what this technology is capable of, says Microsoft vice chair and president Brad Smith.
Smith highlighted a recent example of AI being used to deceive voters in New Hampshire.
“The voters in New Hampshire, before the New Hampshire primary, got phone calls. When they answered the phone, there was the voice of Joe Biden — AI-created — telling people not to vote. He did not authorize that; he did not believe in it. That was a deepfake designed to deceive people,” Smith said during a Global Stage panel on AI and elections on the sidelines of the Munich Security Conference last month.
“What we fundamentally need to start with is help people understand the state of what technology can do and then start to define what's appropriate, what is inappropriate, and how do we manage that difference?” Smith went on to say.
Watch the full conversation here: How to protect elections in the age of AI
Deepfakes and dissent: How AI makes the opposition more dangerous
Former US National Security Council advisor Fiona Hill has plenty of experience dealing with dangerous dictators – but 2024 is even throwing her some curveballs.
After Imran Khan upset the Pakistani establishment in February’s elections by using AI to rally his voters behind bars, she thinks authoritarians must reconsider their strategies around suppressing dissent.
Speaking at a Global Stage panel on AI and elections hosted by GZERO and Microsoft on the sidelines of the Munich Security Conference, she said in this new world, someone like Alexei Navalny “would've been able to use AI in some extraordinary creative way to shake up what in the case of the Russian election is something of a foregone conclusion.”
The conversation was part of the Global Stage series, produced by GZERO in partnership with Microsoft. These discussions convene heads of state, business leaders, and technology experts from around the world for critical debate about the geopolitical and technology trends shaping our world.
Watch the full conversation here: How to protect elections in the age of AI
AI vs. truth: Battling deepfakes amid 2024 elections
With nearly half of the globe heading to the polls this year amid lightning-speed developments in generative AI, fears are running rampant over tech-driven disinformation campaigns.
During a Global Stage panel at the Munich Security Conference, Bulgarian politician and European Parliament member Eva Maydell said she fears we will soon be unable to separate fact from deepfake fiction.
While acknowledging the important developments AI and emerging tech offer, Maydell warned that we also “need to be very sober” about how they are threatening the “very fabric of our democratic societies” and eroding trust.
While the EU is trying to push voluntary measures and legislative proposals, Maydell points out that political conversations often revolve around the sense that “we'll probably never be as good as those that are trying to deceive society.”
“But you still have to give it a try, and you need to do it in a very prepared way,” she adds.
Watch the full conversation: How to protect elections in the age of AI
Watch more Global Stage coverage on the 2024 Munich Security Conference.
Deepfakes are ‘fraud,’ says Microsoft's Brad Smith
The rapid rise of AI has presented a wide array of challenges, particularly in terms of finding a balance between protecting the right to free expression and safeguarding democracy from the corrosive effects of misinformation.
But Microsoft Vice Chair and President Brad Smith says freedom of expression does not apply to deepfakes — fake images or videos created via AI, which can involve using someone else’s face and/or voice without their permission. During a Global Stage panel on AI and elections at the Munich Security Conference, Smith unequivocally decried deepfakes as a form of “fraud.”
“The right to free expression gives me the right to stand up and say what is on my mind,” says Smith, adding, “I do not have the right to steal and use your voice. Your voice belongs to you and you alone… Let's give people the right to say what they think. Let's not steal their voice and put words in their mouth.”
Watch the full conversation here: How to protect elections in the age of AI
How to protect elections in the age of AI
Half of the world’s population will have the chance to head to the polls this year in dozens of critical elections worldwide. These votes, which will shape policy and democracy for years to come, come amid light-speed development in artificial intelligence. As Eurasia Group noted in its 2024 Top Risk entitled “Ungoverned AI,” generative AI could be used by domestic and foreign actors – we’re looking at you, Russia – to impact campaigns and undermine trust in democracy.
To meet the moment, GZERO Media, on the ground at the 2024 Munich Security Conference, held a Global Stage discussion on Feb. 17 entitled “Protecting Elections in the Age of AI.” We spoke with Brad Smith, vice chair and president of Microsoft; Ian Bremmer, president and founder of Eurasia Group and GZERO Media; Fiona Hill, senior fellow for the Center on the United States and Europe at Brookings; Eva Maydell, an EU parliamentarian and a lead negotiator of the EU Chips Act and Artificial Intelligence Act; Kersti Kaljulaid, the former president of Estonia; with European correspondent Maria Tadeo moderating. The program also featured interviews with Kyriakos Mitsotakis, Greece’s prime minister, and Benedikt Franke, CEO and vice-chair of the Munich Security Conference. These thought leaders and experts discussed the implications of the rapid rise of AI amid this historic election year.
The group started by delving into what Bremmer has referred to as the “Voldemort” of years surrounding elections, to look at how election interference and disinformation have evolved since 2016.
“This is the year that people have been very concerned about, but have kind of hoped that they could push off. It's not just because there are elections all over the world and trust in institutions is deteriorating, it's also because the most powerful country in the world, and it's not becoming less powerful, is also one of the most politically dysfunctional,” says Bremmer, referring to the US.
The 2024 US presidential election “is maximally distrust-laden,” says Bremmer, adding that it’s “really hard to have a free and fair election in the US that all of its population” believes is legitimate.
And the worry is that AI could complicate the landscape even further.
Hill agreed that there’s cause for concern but underscored that people should not “panic” to a point where they’re “paralyzed” and “not taking action.”
“Panic is not an option given the stakes,” says Hill, adding, “There are negative aspects of all of this, but there's also the kind of question we have to grapple with: how, when legitimate competitors or opposition movements that are otherwise beleaguered decide to use AI tools, that then also has an impact.”
There’s no doubt that AI can be used for nefarious purposes. Deepfakes can fool even the most discerning eye. Disinformation has already been rampant across the internet in recent election cycles and helped sow major divisions in many countries well before AI tools — far more sophisticated than your average meme — were widely available.
“With new tools and products that use generative AI, including from a company like ours, somebody can create a very realistic video, audio, or image. Just think about the different ways it can be used. Somebody can use it and they can make a video of themself, and they can make clear in the video that this is AI generated. That is one way a political candidate, even one who is in prison can speak,” says Smith, alluding to ex-Pakistani Prime Minister Imran Khan’s recent use of AI from behind bars.
Along these lines, there are many serious, valid concerns about the impact AI can have on elections and democracy more generally — particularly at a time when people are exhibiting rising levels of distrust in key institutions.
“It's very important to acknowledge a lot of the important developments that AI and emerging tech can bring to support our economic development,” says Maydell, adding, “but in the same time, especially this year, we need to be very sober about some of those threats that are in a way threatening the very fabric of our democratic societies.”
As Maydell noted, this evolving new technology can be harnessed for good and bad. Can AI be used as a tool to protect candidates and the integrity of the electoral process?
A number of major tech companies, including Microsoft, signed an accord at the Munich Security Conference on Friday to help thwart and combat AI-related election interference.
“It's all about trying to put ourselves in a position, not to solve this problem completely, I don't think that's possible, but to manage this new reality in a way that will make a difference,” says Smith. The Microsoft president says the accord brings the tech sector together to preserve the authenticity of content, including by working to detect deepfakes and providing candidates with a mechanism to report any that are created about them.
“We'll work together to promote transparency and public education. This clearly is going to require a lot of work with civil society, with others around the world to help the public be ready,” says Smith.
But is enough being done?
“It's good that both politicians and the companies and society as a whole now have a better understanding where this is all leading us and we are collectively taking actions,” says Kaljulaid, but this is just a “first step” and “next steps need to follow.”
A balance will need to be found between legislating the challenges presented by AI and giving tech companies space to collaborate, innovate and address problems on their own.
“Democracy is always in jeopardy. Every generation has to answer the call to defend it,” says Smith, adding, “Now it's our turn. It's our turn as a generation of people to say that technology always changes, but democracy is a value that we hold timeless. So let's do what it takes to defend it, to preserve and promote it.”
The livestream was part of the Global Stage series, produced by GZERO in partnership with Microsoft. These discussions convene heads of state, business leaders, and technology experts from around the world for critical debate about the geopolitical and technology trends shaping our world.
Live premiere today at 12 pm ET: Can we use AI to protect elections?
Today at 12 pm ET/9 am PT/6 pm CET, watch the live premiere of our Global Stage discussion at the Munich Security Conference, "Munich 2024: Protecting Elections in the Age of AI." 2024 is truly the “Year of Elections,” with more than 75 nations heading to the polls, affecting roughly half the world’s population. But an ongoing decline of trust in institutions, plus an explosion of AI tools and deepfake technologies, could create a dangerous environment. Our panel will examine how AI can also be a way to protect consumers and candidates, helping to shore up the integrity of the electoral process. Can AI be used to quickly flag and even eliminate online lies and misinformation?
European correspondent Maria Tadeo moderates the conversation with an expert panel including:
- Ian Bremmer, President and Founder, Eurasia Group and GZERO Media
- Fiona Hill, Senior Fellow, Center on the United States and Europe, Brookings
- Kersti Kaljulaid, former President of Estonia
- Eva Maydell, Member of the European Parliament and lead negotiator, EU Chips Act and Artificial Intelligence Act
- Brad Smith, Vice Chair and President, Microsoft
- Special appearances by Kyriakos Mitsotakis, Prime Minister of Greece, and Benedikt Franke, Vice-Chairman and CEO, Munich Security Conference
More about Global Stage:
Global Stage: Global issues at the intersection of technology, politics, and society
Will comedy deepfakes generate laughs or lawsuits?
Comedian George Carlin died in 2008, but he’s back for an hour-long special, “George Carlin: I’m Glad I’m Dead,” which recently dropped on YouTube. It was the work of a comedy duo employing deepfake technology to bring Carlin’s work back to life.
The artist’s daughter, Kelly Carlin, who manages her late father’s estate and did not grant permission for the fake Carlin special, responded angrily on X: “My dad spent a lifetime perfecting his craft from his very human life, brain and imagination. No machine will ever replace his genius.”
Kelly Carlin is exploring legal action, and she’s far from alone in questioning the unauthorized use of one’s likeness or work. You’ll recall that AI became a crucial bargaining point in the actors’ and writers’ strikes last year. Unauthorized use of their likenesses and writing styles by the studios was chief among their concerns. The agreements struck with the studios generally allow for AI tools to be used with appropriate compensation for union members.
It’s unclear what legal avenue the Carlin estate could pursue. Parody is well protected by the First Amendment, and the deceased generally don’t have privacy rights under US law. A better question, perhaps, is whether the underlying technology was illegally trained on Carlin’s material — part of a broader battle between copyright holders and AI developers that we’ve discussed at length in this newsletter.
Court action aside, Kelly Carlin plans to meet with SAG-AFTRA and help them lobby Congress for better protections for the dead.