Social media's AI wave: Are we in for a “deepfakification” of the entire internet?
In this episode of GZERO AI, Taylor Owen, professor at the Max Bell School of Public Policy at McGill University and director of its Centre for Media, Technology & Democracy, examines what he terms the "deepfakification" of social media. He traces the evolution of our social feeds, which began as platforms primarily for sharing updates with friends and are now inundated with content generated by artificial intelligence.
So 2024 might just end up being the year of the deepfake. Not some fake Joe Biden video or deepfake pornography of Taylor Swift. Those are definitely problems, and definitely going to be a big thing this year. But what I see as a bigger problem is what might be called the “deepfakification” of the entire internet, and certainly of our social feeds.
Cory Doctorow has called this broader phenomenon the “enshittification” of the internet, and I think the way AI is playing out in our social media is a very good example of it. What we see in our social media feeds has been an evolution. It began with information our friends shared. It then merged in content that an algorithm thought we might want to see. It then became clickbait and content designed to target our emotions via these same algorithmic systems. But now, when many people open their Facebook or Instagram or TikTok feeds, what they're seeing is content that's been created by AI. AI content is flooding Facebook and Instagram.
So what's going on here? Well, in part, these companies are doing what they've always been designed to do, to give us content optimized to keep our attention.
If this content happens to be created by an AI, it might do that even better. It might be designed by the AI precisely to keep our attention, and AI is proving a very useful tool for doing this. But this has had some strange consequences. It's led, for example, to the rise of AI influencers: rather than real people selling us ideas or products, these are AIs. Companies like Prada and Calvin Klein have hired an AI influencer named Lil Miquela, who has over 2.5 million followers on TikTok. A modeling agency in Barcelona created an AI model after having trouble dealing with the schedules and demands of prima donna human models. They say they didn't want to deal with people with egos, so they created an AI model instead.
And that AI model brings in as much as €10,000 a month for the agency. But I think this gets at a far bigger issue: it's increasingly difficult to tell whether the things we're seeing are real or fake. If you scroll through the comments on the page of an AI influencer like Lil Miquela, it's clear that a good chunk of her followers don't know she's an AI.
Now, platforms are starting to deal with this a bit. TikTok requires users themselves to label AI content, and Meta says it will flag AI-generated content. But for this to work, platforms need a way of signaling this effectively and reliably to us as users, and they just haven't done that. Here's the thing, though: we can make them do it. The Canadian government's new Online Harms Act, for example, demands that platforms clearly identify AI- or bot-generated content. We can do this, but we have to make the platforms do it. And I don't think that can come a moment too soon.
- Why human beings are so easily fooled by AI, psychologist Steven Pinker explains ›
- The geopolitics of AI ›
- AI and Canada's proposed Online Harms Act ›
- AI at the tipping point: danger to information, promise for creativity ›
- Will Taylor Swift's AI deepfake problems prompt Congress to act? ›
- Deepfake porn targets high schoolers ›
Deepfake recordings make a point in Georgia
A Georgia lawmaker used a novel approach to help pass legislation banning deepfakes in politics: he used a deepfake. Republican state representative Brad Thomas played an AI-generated recording in which two of the bill's opponents, state senator Colton Moore and activist Mallory Staples, appear to endorse the bill.
Thomas presented the convincing audio to his peers, but cautioned that he made this fake recording on the cheap: “The particular one we used is, like, $50. With a $1,000 version, your own mother wouldn’t be able to tell the difference,” he said. The bill subsequently passed out of committee by an 8-1 vote.
Fake audio like this recently reared its head in national US politics when an ally of then-Democratic presidential candidate Dean Phillips released a fake robocall of President Joe Biden telling New Hampshire voters to stay home during the state’s primary. The Federal Communications Commission moved quickly in the aftermath of this incident to declare that AI-generated robocalls are illegal under federal law.
Voters beware: Elections and the looming threat of deepfakes
With AI tools already being used to manipulate voters across the globe via deepfakes, more needs to be done to help people comprehend what this technology is capable of, says Microsoft vice chair and president Brad Smith.
Smith highlighted a recent example of AI being used to deceive voters in New Hampshire.
“The voters in New Hampshire, before the New Hampshire primary, got phone calls. When they answered the phone, there was the voice of Joe Biden — AI-created — telling people not to vote. He did not authorize that; he did not believe in it. That was a deepfake designed to deceive people,” Smith said during a Global Stage panel on AI and elections on the sidelines of the Munich Security Conference last month.
“What we fundamentally need to start with is help people understand the state of what technology can do and then start to define what's appropriate, what is inappropriate, and how do we manage that difference?” Smith went on to say.
Watch the full conversation here: How to protect elections in the age of AI
Deepfakes and dissent: How AI makes the opposition more dangerous
Former US National Security Council advisor Fiona Hill has plenty of experience dealing with dangerous dictators – but 2024 is even throwing her some curveballs.
After Imran Khan upset the Pakistani establishment in February’s elections by using AI to rally his voters behind bars, she thinks authoritarians must reconsider their strategies around suppressing dissent.
Speaking at a Global Stage panel on AI and elections hosted by GZERO and Microsoft on the sidelines of the Munich Security Conference, she said that in this new world, someone like Alexei Navalny “would've been able to use AI in some extraordinary creative way to shake up what in the case of the Russian election is something of a foregone conclusion.”
The conversation was part of the Global Stage series, produced by GZERO in partnership with Microsoft. These discussions convene heads of state, business leaders, and technology experts from around the world for critical debate about the geopolitical and technology trends shaping our world.
Watch the full conversation here: How to protect elections in the age of AI
Hard Numbers: NVIDIA rising, the magician’s assistant, indefensible budget lags, Make PDFs sexy again
3: NVIDIA is now the third-most valuable company in the U.S. after reporting rosy financial returns. The AI-focused chipmaker’s market capitalization is now $1.812 trillion, surpassing Google parent company Alphabet, and trailing only Microsoft and Apple. How things change: just one year ago, NVIDIA’s market cap was a paltry $580 billion.
1: A New Orleans magician says he was paid $150 by a Democratic operative supporting presidential long shot Dean Phillips to create the fake Joe Biden robocall sent to New Hampshire voters in January. Creating the fake audio took him 20 minutes and cost $1, the magician said. The incident sparked national outrage, prompting an investigation by the New Hampshire attorney general and the Federal Communications Commission's ban on unsolicited AI-generated robocalls.
1.8 billion: The U.S. Department of Defense is seeking $1.8 billion in the federal budget solely for AI. But with congressional budget talks still ongoing, Craig Martell, the Pentagon’s chief digital and AI officer, said his office needs to make tough decisions about what projects to prioritize. AI-related defense projects range from the simple—such as making administrative tasks more efficient—to the complex, like building new advanced weapons systems.
400 billion: Adobe has lots of cutting-edge products, including Photoshop, Premiere, and After Effects, but there's nothing sexy about PDFs. On paid versions of Acrobat and Reader, which people use to view 400 billion PDFs each year, an AI chatbot will soon summarize and search your documents. Adobe wants users to have a “conversation” with their PDFs. Summaries sound nice, but does anyone want a full dialogue?
Tech accord on AI & elections will help manage the ‘new reality,’ says Microsoft’s Brad Smith
At the Munich Security Conference, leading tech companies unveiled a new accord that committed them to combating AI-generated content that could disrupt elections.
During a Global Stage panel on the sidelines of this year’s conference, Microsoft Vice Chair and President Brad Smith said the accord would not completely solve the problem of deceptive AI content but would help “manage this new reality in a way that will make a difference and really serve all of the elections… between now and the end of the year.”
As Smith explains, the accord is designed to bring the tech industry together to preserve the “authenticity of content,” including via the creation of content credentials. The industry will also work to detect deepfakes and provide candidates with a mechanism to report them, says Smith, while also taking steps to “promote transparency and education.”
Watch the full conversation here: How to protect elections in the age of AI
- How AI and deepfakes are being used for malicious reasons ›
- Deepfakes are ‘fraud,’ says Microsoft's Brad Smith ›
- AI explosion, elections, and wars: What to expect in 2024 ›
- AI, election integrity, and authoritarianism: Insights from Maria Ressa ›
- How AI threatens elections ›
- How to protect elections in the age of AI ›
Deepfakes are ‘fraud,’ says Microsoft's Brad Smith
The rapid rise of AI has presented a wide array of challenges, particularly in terms of finding a balance between protecting the right to free expression and safeguarding democracy from the corrosive effects of misinformation.
But Microsoft Vice Chair and President Brad Smith says freedom of expression does not apply to deepfakes — fake images or videos created via AI, which can involve using someone else’s face and/or voice without their permission. During a Global Stage panel on AI and elections at the Munich Security Conference, Smith unequivocally decried deepfakes as a form of “fraud.”
“The right to free expression gives me the right to stand up and say what is on my mind,” says Smith, adding, “I do not have the right to steal and use your voice. Your voice belongs to you and you alone… Let's give people the right to say what they think. Let's not steal their voice and put words in their mouth.”
Watch the full conversation here: How to protect elections in the age of AI
- Will comedy deepfakes generate laughs or lawsuits? ›
- How AI and deepfakes are being used for malicious reasons ›
- Deepfake porn targets high schoolers ›
- Deepfake it till you make it ›
- Will Taylor Swift's AI deepfake problems prompt Congress to act? ›
- Deepfakes and dissent: How AI makes the opposition more dangerous ›
Hard Numbers: It’s electric, OpenAI’s billions, AI-related legislation, Fred Trump ‘returns,’ Multiplication problems
1,300: Training a large language model is estimated to use about 1,300 megawatt-hours of electricity, roughly what 130 US homes consume in a year. But that estimate is for the last generation of LLMs, like OpenAI’s GPT-3. The potential electricity usage for GPT-4, the current model, and beyond could be much, much greater.
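The homes-per-year comparison is straightforward arithmetic. A minimal sketch, assuming an average US household uses roughly 10 MWh of electricity per year (a ballpark figure; the article itself doesn't state the per-home number):

```python
# Back-of-envelope check of the "130 US homes" comparison.
# Assumption: an average US household uses ~10 MWh of electricity
# per year (rough estimate, not stated in the article).
TRAINING_MWH = 1_300      # estimated energy to train a GPT-3-class LLM
HOME_ANNUAL_MWH = 10      # assumed annual usage of one US home

homes_equivalent = TRAINING_MWH / HOME_ANNUAL_MWH
print(f"Training one LLM uses about as much electricity as "
      f"{homes_equivalent:.0f} US homes use in a year")
# → 130 homes, matching the article's figure under this assumption
```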
80 billion: OpenAI struck a deal that would value the ChatGPT maker at $80 billion, making it one of the world’s most valuable private companies. It’s not a traditional fundraising round but a tender offer that allows employees to cash out their much sought-after shares in the company.
50: US states are clamoring to pass legislation curbing the worst effects of AI. By one measure, about 50 new AI-related bills are introduced in state legislatures each week. New York leads the charge with about 65 outstanding bills, including one recently proposed by Gov. Kathy Hochul to criminalize deceptive AI.
1999: Fred Trump, the father of former President Donald Trump, died in 1999. But now, the Lincoln Project, the anti-Trump political action committee, has used AI to reanimate the elder Trump for a new ad in which he appears to call his son a “disgrace.”
44: The education company Khan Academy made a ChatGPT-based tutoring bot called Khanmigo. The problem? It’s terrible at math, unable to calculate 343 minus 17. The chatbot is being piloted by 65,000 students in 44 school districts. One Yale professor who studies AI put it bluntly: “Asking ChatGPT to do math is sort of like asking a goldfish to ride a bicycle.”