Dreams of a dancing Modi
A video circulating on social media shows Indian Prime Minister Narendra Modi dressed stylishly and dancing to a Bollywood song, while another shows his political rival Mamata Banerjee in a similar setting, though a political speech of hers plays in the background. Are India’s political leaders getting down on the dancefloor to drive voters to the polls in ongoing elections? Nope — both were created with artificial intelligence.
While Modi made light of his, calling such creativity “a delight,” the video of Banerjee, which featured parts of a speech in which she criticized those who have left her party for Modi’s, elicited a different response: Indian police said it could “affect law and order,” and they are investigating. One Kolkata cybercrime officer warned the X user who posted the Banerjee video that they could be “liable for strict penal action.” Still, the user told Reuters they are not deleting the video and don’t believe the police can trace their anonymous account.
The videos were made with Viggle, a free online service, showing that even cheap or free tools can cause a major stir in global politics.
The Indian government has been selective about when it embraces artificial intelligence, positioning itself as a leader in the technology while also cracking down on uses that offend the sensibilities of its right-wing government. Late last year, the government even considered asking Meta to break WhatsApp’s encryption to identify who created and circulated deepfake videos of politicians. Perhaps Modi’s regime can make India into a destination for AI companies — if it doesn’t keep shooting itself in the foot when it feels threatened.
Gaza protests, union negotiations, and deepfakes: Is the Met Gala a microcosm of the times?
Last night, the Metropolitan Museum of Art rolled out the red carpet for the Met Gala — a star-studded fundraiser hosted by media giant Condé Nast — amid pro-Palestinian protests, union negotiations, and deepfake dresses.
Gaza protests: As celebrities took to the red carpet Monday night, police struggled to contain hundreds of pro-Palestinian protesters marching down Fifth Avenue to protest the event. Many of the demonstrators came from Hunter College in an evolution of the campus protests that have swept the country – and likely a harbinger of things to come after students leave campus this summer but still strive to make their voices heard.
Union negotiations: Just 12 hours earlier, Condé Nast reached a deal with unionized employees who had threatened to abandon their jobs at the event if long-stalled contract negotiations were not resolved. In a post on X on Saturday night, the union warned that management could “meet us at the table or meet us at the Met on Monday.” The agreement continues a year of union wins and includes wage increases, additional parental leave, and hybrid work protections.
Deepfakes: Meanwhile, many of us who didn’t pay $75,000 for a seat and were watching the red carpet online were bamboozled by a deepfake of Katy Perry in two dresses, both generated by AI. Perry did not attend the gala, but if you were fooled by the deepfake, don't feel too bad; her own mother was too.
The Met Gala is often criticized for being a pedestal for the out-of-touch, but this time, even the force of the mighty Anna Wintour couldn’t insulate the event from the outside world.
Alleged AI crime rocks Maryland high school
Dazhon Darien, a former athletic director at Pikesville High School in Baltimore County, Maryland, was arrested on April 25 and charged with a litany of crimes related to using AI to frame the school's principal. Darien allegedly created a fake AI voice of Principal Eric Eiswert, used it to generate racist and antisemitic statements, and posted the audio on social media in January. Eiswert was temporarily removed from the school after the audio emerged.
The police allege that Darien used the school’s internet to search for AI tools and sent emails about the recording. The audio was then sent to and posted by a popular Baltimore-area Instagram account on Jan. 17. It’s unclear which tool was used to make the recording, but digital forensics experts said it was clearly fake.
At least 10 states have some form of deepfake laws, though some are focused on pornography. Still, AI-specific charges are rare in the US. Darien was charged with disrupting school activities, theft, retaliation against a witness, and stalking.
Deepfake audio has become a major problem in global elections, but this story demonstrates it can also easily weaponize person-to-person disputes.
AI and Canada's proposed Online Harms Act
In this episode of GZERO AI, Taylor Owen, professor at the Max Bell School of Public Policy at McGill University and director of its Centre for Media, Technology & Democracy, takes a look at the Canadian government’s Online Harms Act, which seeks to hold social media companies responsible for harmful content – often generated by artificial intelligence.
So last week, the Canadian government tabled their long-awaited Online Harms legislation. Similar to the Digital Services Act in the EU, this is a big, sweeping piece of legislation, so I won't get into all the details. But essentially what it does is put the onus on social media companies to minimize the risk of their products. And in so doing, this bill actually provides a window into how we might start to regulate AI.
It does this in two ways. First, the bill requires platforms to minimize the risk of exposure to seven types of harmful content, including self-harm content directed at kids or posts that incite hatred or violence. The key here is that the obligation is on social media platforms, like Facebook or Instagram or TikTok, to minimize the risk of their products, not to take down every piece of bad content. The concern is not with each individual piece of content, but with the way that social media products, and particularly their algorithms, might amplify or help target its distribution. And these products are very often driven by AI.
Second, one area where the proposed law does mandate a takedown of content is when it comes to intimate image abuse, and that includes deepfakes or content that's created by AI. If an intimate image is flagged as non-consensual, even if it's created by AI, it needs to be taken down within 24 hours by the platform. Even in a vacuum, AI-generated deepfake pornography or revenge porn is deeply problematic. But what's really worrying is when these things are shared and amplified online. And to get at that element of the problem, we don't actually need to regulate the creation of these deepfakes; we need to regulate the social media platforms that distribute them.
So countries around the world are struggling with how to regulate something as opaque and unknown as the existential risk of AI, but maybe that's the wrong approach. Instead of trying to govern this largely undefined risk, maybe we should be watching countries like Canada that are starting with the harms we already know about.
Instead of broad, sweeping legislation for AI, we might want to start by regulating the older technologies, like the social media platforms that facilitate many of the harms AI creates.
I'm Taylor Owen and thanks for watching.
Voters beware: Elections and the looming threat of deepfakes
With AI tools already being used to manipulate voters across the globe via deepfakes, more needs to be done to help people comprehend what this technology is capable of, says Microsoft vice chair and president Brad Smith.
Smith highlighted a recent example of AI being used to deceive voters in New Hampshire.
“The voters in New Hampshire, before the New Hampshire primary, got phone calls. When they answered the phone, there was the voice of Joe Biden — AI-created — telling people not to vote. He did not authorize that; he did not believe in it. That was a deepfake designed to deceive people,” Smith said during a Global Stage panel on AI and elections on the sidelines of the Munich Security Conference last month.
“What we fundamentally need to start with is help people understand the state of what technology can do and then start to define what's appropriate, what is inappropriate, and how do we manage that difference?” Smith went on to say.
Watch the full conversation here: How to protect elections in the age of AI
Deepfakes and dissent: How AI makes the opposition more dangerous
Former US National Security Council advisor Fiona Hill has plenty of experience dealing with dangerous dictators – but 2024 is even throwing her some curveballs.
After Imran Khan upset the Pakistani establishment in February’s elections by using AI to rally his voters from behind bars, she thinks authoritarians must reconsider their strategies for suppressing dissent.
Speaking at a Global Stage panel on AI and elections hosted by GZERO and Microsoft on the sidelines of the Munich Security Conference, she said that in this new world, someone like Alexei Navalny “would've been able to use AI in some extraordinary creative way to shake up what in the case of the Russian election is something of a foregone conclusion.”
The conversation was part of the Global Stage series, produced by GZERO in partnership with Microsoft. These discussions convene heads of state, business leaders, and technology experts from around the world for critical debate about the geopolitical and technology trends shaping our world.
Watch the full conversation here: How to protect elections in the age of AI
AI vs. truth: Battling deepfakes amid 2024 elections
With nearly half of the globe heading to the polls this year amid lightning-speed developments in generative AI, fears are running rampant over tech-driven disinformation campaigns.
During a Global Stage panel at the Munich Security Conference, Bulgarian politician and European Parliament member Eva Maydell said she fears we will soon be unable to separate fact from deepfake fiction.
While acknowledging the important developments AI and emerging tech offer, Maydell warned that we also “need to be very sober” about how they are threatening the “very fabric of our democratic societies” and eroding trust.
While the EU is trying to push voluntary measures and legislative proposals, Maydell points out that political conversations often revolve around the sense that “we'll probably never be as good as those that are trying to deceive society.”
“But you still have to give it a try, and you need to do it in a very prepared way,” she adds.
Watch the full conversation: How to protect elections in the age of AI
Watch more Global Stage coverage on the 2024 Munich Security Conference.
Deepfakes are ‘fraud,’ says Microsoft's Brad Smith
The rapid rise of AI has presented a wide array of challenges, particularly in terms of finding a balance between protecting the right to free expression and safeguarding democracy from the corrosive effects of misinformation.
But Microsoft Vice Chair and President Brad Smith says freedom of expression does not apply to deepfakes — fake images or videos created via AI, which can involve using someone else’s face and/or voice without their permission. During a Global Stage panel on AI and elections at the Munich Security Conference, Smith unequivocally decried deepfakes as a form of “fraud.”
“The right to free expression gives me the right to stand up and say what is on my mind,” says Smith, adding, “I do not have the right to steal and use your voice. Your voice belongs to you and you alone… Let's give people the right to say what they think. Let's not steal their voice and put words in their mouth.”
The conversation was part of the Global Stage series, produced by GZERO in partnership with Microsoft. These discussions convene heads of state, business leaders, and technology experts from around the world for critical debate about the geopolitical and technology trends shaping our world.
Watch the full conversation here: How to protect elections in the age of AI