
Prime Minister Narendra Modi greets people during the Hindustan Times Leadership Summit, in New Delhi, on Saturday, Nov. 4, 2023.
While deepfake technology is typically associated with deceit – tricking voters and disrupting democracy – this seems like a more innocuous way of influencing global politics. But the Indian government hasn’t embraced this technology: It recently considered drastic action to compel WhatsApp parent company Meta to break the app’s encryption and identify the creators of deepfake videos of politicians. Deepfakes could, in other words, have a tangible impact on the world’s largest democracy.
In the United States, it’s not just politicians who have clashed with AI over this brand of imitation, but celebrities too. According to a report in “Variety,” actress Scarlett Johansson has taken “legal action” against the app Lisa AI, which used a deepfake version of her image and voice in a 22-second ad posted on X. The law may be on Johansson’s side: California has a right of publicity law prohibiting the use of someone’s name, image, or likeness in an advertisement without permission. In their ongoing strike, Hollywood actors have also been bargaining over whether, and how, studios can use AI to replicate their likenesses.
Deepfake technology is only improving, making it ever more difficult to determine when a politician or celebrity is appearing before your eyes – and when it’s just a dupe. In his recent executive order on AI, President Joe Biden called for new standards for watermarking AI-generated media so people know what’s real and what’s computer generated.
That approach – akin to US consumer protections for advertising – has obvious appeal, but experts say it might not be technically foolproof. What’s more likely is that the US court system will try to apply existing statutes to the new technology, only to reveal (possibly glaring) gaps in the laws.
Generative AI and deepfakes have already crept into the 2024 election, including a Republican National Committee ad depicting a dystopian second Biden term. But look closely at the top-left corner of the ad toward the end, and you’ll notice the following disclosure: “Built entirely with AI imagery.” Surely, this won’t be the last we see of AI in this election – we’ll be keeping an eye out for all the ways it rears its head on the campaign trail.