Social media's AI wave: Are we in for a “deepfakification” of the entire internet?
In this episode of GZERO AI, Taylor Owen, professor at the Max Bell School of Public Policy at McGill University and director of its Centre for Media, Technology & Democracy, looks into the phenomenon he terms the "deepfakification" of social media. He points out the evolution of our social feeds, which began as platforms primarily for sharing updates with friends, and are now inundated with content generated by artificial intelligence.
So 2024 might just end up being the year of the deepfake. Not some fake Joe Biden video or deepfake pornography of Taylor Swift. Those are definitely problems, and definitely going to be a big thing this year. But what I see as a bigger problem is what might be called the “deepfakification” of the entire internet, and certainly of our social feeds.
Cory Doctorow has called this more broadly the “enshittification” of the internet. And I think the way AI is playing out in our social media is a very good example of this. What we see in our social media feeds has been an evolution. It began with information that our friends shared. It then mixed in content that an algorithm thought we might want to see. It then became clickbait and content designed to target our emotions via these same algorithmic systems. But now, when many people open their Facebook or their Instagram or their TikTok feeds, what they're seeing is content that's been created by AI. AI content is flooding Facebook and Instagram.
So what's going on here? Well, in part, these companies are doing what they've always been designed to do, to give us content optimized to keep our attention.
If this content happens to be created by an AI, it might even do that better. It might be designed by the AI specifically to keep our attention, and AI is proving a very useful tool for doing this. But this has had some crazy consequences. It's led to the rise, for example, of AI influencers: rather than real people selling us ideas or products, these are AIs. Companies like Prada and Calvin Klein have hired an AI influencer named Lil Miquela, who has over 2.5 million followers on TikTok. A model agency in Barcelona created an AI model after having trouble dealing with the schedules and demands of prima donna human models. They say they didn't want to deal with people with egos, so they built an AI model to do the work instead.
And that AI model brings in as much as €10,000 a month for the agency. But I think this gets at a far bigger issue, and that's that it's increasingly difficult to tell if the things we're seeing are real or if they're fake. If you scroll through the comments on the page of an AI influencer like Lil Miquela, it's clear that a good chunk of her followers don't know she's an AI.
Now platforms are starting to deal with this a bit. TikTok requires users themselves to label AI content, and Meta says it will flag AI-generated content. But for this to work, they need a way of signaling this effectively and reliably to us as users, and they just haven't done that. Here's the thing, though: we can make them do it. The Canadian government, in its new Online Harms Act, for example, demands that platforms clearly identify AI- or bot-generated content. We can do this, but we have to make the platforms do it. And I don't think that can come a moment too soon.
- Why human beings are so easily fooled by AI, psychologist Steven Pinker explains ›
- The geopolitics of AI ›
- AI and Canada's proposed Online Harms Act ›
- AI at the tipping point: danger to information, promise for creativity ›
- Will Taylor Swift's AI deepfake problems prompt Congress to act? ›
- Deepfake porn targets high schoolers ›
Can the government dictate what’s on Facebook?
The Supreme Court heard arguments on Monday from groups representing major social media platforms which argue that new laws in Florida and Texas that restrict their ability to deplatform users are unconstitutional. It’s a big test for how free speech is interpreted when it comes to private technology companies that have immense reach as platforms for information and debate.
Supporters of the states’ laws originally framed them as measures meant to stop the platforms from unfairly singling out conservatives for censorship – for example, when X (then Twitter) booted President Donald Trump over his tweets during the January 6 Capitol attack.
What do the states’ laws say?
The Florida law prevents social media platforms from banning any candidates for public office, while the Texas one bans removing any content because of a user’s viewpoint. As the 5th Circuit Court of Appeals put it, Florida “prohibits all censorship of some speakers,” while Texas “prohibits some censorship of all speakers.”
Social media platforms say the First Amendment protects them either way, and that they aren't required to transmit everyone’s messages, like a telephone company which is viewed as a public utility. Supporters of the laws say the platforms are essentially a town square now, and the government has an interest in keeping discourse totally open – in other words, more like a phone company than a newspaper.
What does the court think?
The justices seemed broadly skeptical of the Florida and Texas laws during oral arguments. As Chief Justice John Roberts pointed out, the First Amendment doesn’t empower the state to force private companies to platform every viewpoint.
The justices look likely to send the case back down to a lower court for further litigation, which would keep the status quo for now, but if they choose to rule, we could be waiting until June.
TikTok videos go silent amid deafening calls for safety guardrails
It's time for TikTokers to enter their miming era. Countless videos suddenly went silent as music from top stars like Drake and Taylor Swift disappeared from the popular app on Thursday. The culprit? Universal Music Group – the world’s largest record company – could not secure a new licensing deal with the powerful information-sharing video platform.
In an open letter published by UMG, it blamed TikTok for “trying to build a music-based business, without paying fair value for the music.” UMG claimed TikTok “responded first with indifference, and then with intimidation” after being pressured not only on artist royalties, but also restrictions about AI-generated content, and a push for user safety.
It’s been a rough week for TikTok CEO Shou Zi Chew. He joined CEOs from Meta, X, and Discord for a grilling on Capitol Hill this week over the dangers of abuse and exploitation children are facing on their platforms. Sen. Lindsey Graham went so far as to say these companies have “blood on their hands.” The hearing followed last year’s public health advisory from the Surgeon General, which argued social media presents “a risk of harm” to youth mental health and called for “urgent action” from these companies.
The big takeaway: It appears social media companies are quite agile when under pressure and can change the user experience for billions of people at the drop of a hat, especially when profit margins are involved. Imagine what these companies could do if they put that energy into the health of their users instead.
Graphic Truth: Where does the US get its online news?
Facebook has a well-documented history of being a breeding ground for misinformation, which continues to be a topic of concern in Washington with the 2024 election on the horizon.
Pew found that half of US adults get their news from social media at least some of the time, while 30% regularly get their news from Facebook. Next up was YouTube, followed by Instagram, TikTok, and X, formerly known as Twitter. Like Facebook, all of these platforms have also faced issues with the spread of disinformation as well as rampant hate speech.
Fighting online hate: Global internet governance through shared values
After a terrorist attack on a mosque in Christchurch, New Zealand was live-streamed on the internet in 2019, the Christchurch Call was launched to counter the increasing weaponization of the internet and to ensure that emerging tech is harnessed for good.
In a recent Global Stage livestream from the sidelines of the 78th UN General Assembly, former New Zealand Prime Minister Dame Jacinda Ardern discussed the challenges and disparities inherent in the ever-evolving digital age, ranging from unrestricted online platforms in liberal democracies to severe content limitations in certain countries.
“If you look beyond just liberal democracies, on the one hand you have the discussion about free speech and the view that some hold around being able to use online platforms to publish just about anything. Then in some countries, the inability to publish anything at all,” said Ardern.
In her new role as Special Envoy for the Christchurch Call, she advocated for departing from conventional country-centric strategies and proposed a foundation built upon shared values instead, prioritizing the safeguarding of human rights and the preservation of an open internet over national interests. “Let's establish the value set, the common problem identification to bring everyone around the table.”
Watch the full Global Stage Livestream conversation here: Hearing the Christchurch Call
- Hearing the Christchurch Call ›
- Facebook allows "lies laced with anger and hate" to spread faster than facts, says journalist Maria Ressa ›
- What We’re Watching: Ardern's shock exit, sights on Crimea, Bibi’s budding crisis, US debt ceiling chaos ›
- Jacinda Ardern on the Christchurch Call: How New Zealand led a movement ›
Staving off "the dark side" of artificial intelligence: UN Deputy Secretary-General Amina Mohammed
Artificial Intelligence promises revolutionary advances in the way we work, live and govern ourselves, but is it all a rosy picture?
United Nations Deputy Secretary-General Amina Mohammed says that while the potential benefits are enormous, “so is the dark side.” Without thoughtful leadership, the world could lose a precious opportunity to close major social divides. She spoke during a Global Stage livestream event at UN headquarters in New York on September 22, on the sidelines of the UN General Assembly. The discussion was moderated by Nicholas Thompson of The Atlantic and was held by GZERO Media in collaboration with the United Nations, the Complex Risk Analytics Fund, and the Early Warnings for All initiative.
She says it will take a “transformative mindset” and an eagerness to tackle more and bigger problems to pull off the transition, and emphasizes the severe mismatch of capable leadership with positions of power.
"Where there is leadership, there's not much power. And where there is power, that leadership is struggling,” she said.
Watch the full Global Stage conversation: Can data and AI save lives and make the world safer?
- The UN will discuss AI rules at this week's General Assembly ›
- Ian Bremmer: How AI may destroy democracy ›
- AI at the tipping point: danger to information, promise for creativity ›
- Can data and AI save lives and make the world safer? ›
- Podcast: Artificial intelligence new rules: Ian Bremmer and Mustafa Suleyman explain the AI power paradox ›
- How should artificial intelligence be governed? ›
- Will consumers ever trust AI? Regulations and guardrails are key ›
- Governing AI Before It’s Too Late ›
- The AI power paradox: Rules for AI's power ›
Why human beings are so easily fooled by AI, psychologist Steven Pinker explains
There's no question that AI will change the world, but the jury is still out on exactly how. One thing, however, is already clear: people are going to confuse it with humans. And we know this because it's already happening. That's according to Harvard psychologist Steven Pinker, who joined Ian Bremmer on GZERO World for a wide-ranging conversation about his surprisingly optimistic outlook on the world and the way that AI may affect it.
"People are too easily fooled. It doesn't take much to fool a user or an observer into attributing a lot of intelligence to the system that they're dealing with, even if it's rather stupid."
So what should regulators do to rein AI in? Especially when it comes to children?
Watch the GZERO World episode: Is life better than ever for the human race?
Catch GZERO World with Ian Bremmer every week at gzeromedia.com/gzeroworld and on US public television. Check local listings.
- Emotional AI: More harm than good? ›
- Altman to Congress: Regulate me, please. ›
- Ian Bremmer explains: Should we worry about AI? ›
- Be very scared of AI + social media in politics ›
- Podcast: Tracking the rapid rise of human-enhancing biotech with Siddhartha Mukherjee - GZERO Media ›
- AI & human rights: Bridging a huge divide - GZERO Media ›
- Yuval Noah Harari: AI is a “social weapon of mass destruction” to humanity - GZERO Media ›
- Social media's AI wave: Are we in for a “deepfakification” of the entire internet? - GZERO Media ›
Christchurch Call had a global impact on tech giants - Microsoft's Brad Smith
The Christchurch killer livestreamed his heinous crimes, highlighting a macabre threat embedded in the relatively new field of social media. Extremists could use the technology to get the attention of millions of people — and perhaps even find some incentive for their violence in that fact.
In a recent Global Stage livestream from the sidelines of the 78th UN General Assembly, Microsoft Vice Chair and President Brad Smith said the technology industry set out to ensure extremists could “never again” reach mass audiences during massacres. Tech companies, governments, and civil society groups now work together on the so-called Content Incident Protocol, a sort of digital emergency response plan.
Now, people are on call 24/7 to intervene early, shut down broadcasts, and cooperate with authorities. Smith says the impact has been transformative and urged further efforts to enhance safety against online extremism.
Watch the full Global Stage Livestream conversation here: Hearing the Christchurch Call