Dr. Vivek Murthy, the United States Surgeon General, discusses the importance of social connection to our mental and physical well-being with actor Matthew McConaughey, a University of Texas professor.
Should social media apps be labeled dangerous for kids?
US Surgeon General Vivek Murthy is demanding that Congress require a safety label on social media apps, similar to the warnings on cigarettes and alcohol, citing research that teens who use them for three hours a day double their risk of depression.
Murthy has a history of advocating for mental health: He issued a similar advisory last year categorizing loneliness as a health crisis comparable to smoking up to 15 cigarettes a day.
So far, Congress hasn’t done much to curb children’s social media usage, apart from chastising a few tech CEOs and targeting TikTok as a national security threat. Murthy’s emergency declaration on Monday was a call for concrete action.
“A surgeon general’s warning label,” Murthy argued in a recent op-ed in the New York Times, “would regularly remind parents and adolescents that social media has not been proved safe.”
Would it work? Labels on tobacco did lead to a steady decline in adolescent cigarette smoking over the past several decades (that is, until vapes came along … but that’s another story). Murthy acknowledged, however, that a warning label alone wouldn’t fix the fact that the average teen spends nearly five hours a day scrolling. He also suggested that schools and family dinners be phone-free, and that anyone in middle school or below stay off phones entirely.
What do you think? Should social media apps be labeled as dangerous for children? Let us know here.
Are bots trying to undermine Donald Trump?
In an exclusive investigation into online disinformation surrounding Donald Trump’s hush-money trial, GZERO asks whether bots are being employed to shape debates about the former president’s guilt or innocence. With the help of Cyabra, a firm that specializes in tracking bots, we looked for disinformation in the online reactions to Trump’s trial. Is the trial the target of a massive online propaganda campaign – and, if so, which side is to blame?
_____________
Adult film actress Stormy Daniels testified on Tuesday against former President Donald Trump, detailing her sexual encounter with Trump in 2006 and her $130,000 hush money payment from Trump's ex-attorney Michael Cohen before the 2016 election. In the process, she shared explicit details and said she had not wanted to have sex with Trump. This led the defense team to call for a mistrial. Their claim? That the embarrassing aspects were “extraordinarily prejudicial.”
Judge Juan Merchan denied the motion – but also agreed that some of the details from Daniels were “better left unsaid.”
The trouble is, plenty is being said, inside the courtroom and in the court of public opinion – aka social media. With so many people learning about the most important trials of the century online, GZERO partnered with Cyabra to investigate how bots are influencing the dialogue surrounding the Trump trials. For a man once accused of riding Russian meddling into the White House, the results may surprise you.
Bots – surprise, surprise – are indeed rampant amid the posts about Trump’s trials online. Cyabra’s AI algorithm analyzed 7,500 posts with hashtags and phrases related to the trials and found that 17% of Trump-related tweets came from fake accounts. The team estimated that these inauthentic tweets reached a whopping 49.1 million people across social media platforms.
Ever gotten into an argument on X? Your opponent might not have been real. Cyabra found that the bots frequently comment and interact with real accounts.
The bots also frequently comment on tweets from Trump's allies in large numbers, leading X’s algorithm to amplify those tweets. Cyabra's analysis revealed that, on average, bots are behind 15% of online conversations about Trump. However, in certain instances, particularly concerning specific posts, bot activity surged to over 32%.
But what narrative do they want to spread? Well, it depends on who’s behind the bot. If you lean left, you might assume most of the bots were orchestrated by MAGA hat owners – if you lean right, you’ll be happy to learn that’s not the case.
Rather than a bot army fighting in defense of Trump, Cyabra found that 73% of the posts were negative about the former president, offering quotes like “I don’t think Trump knows how to tell the truth” and “not true to his wife, not true to the church, not true to the country, just a despicable traitor.”
Meanwhile, only 4% were positive. Among the positive posts, Cyabra saw a pattern of bots framing the legal proceedings as biased and painting Trump as a political martyr. These tweets often came as comments on posts by Trump’s allies in support of the former president. For example, on a tweet from Marjorie Taylor Greene calling the trials “outrageous” and “election interference,” 32% of the comments were made by inauthentic profiles.
Many of the tweets and profiles analyzed were also indistinguishable from posts made by real people – a problem many experts fear is only going to worsen. As machine learning and artificial intelligence advance, so too will the fake accounts and attempts to shape political narratives.
Moreover, while most of the bots came from the United States, they were by no means all American. The locations of the rest do not exactly read like a list of usual suspects, with only three bots traced to China and zero to Russia (see map below).
[Map: locations of bot accounts. Credit: Cyabra]
This is just one set of data based on one trial, so there are limitations to drawing broader conclusions. But we do know, of course, that conservatives have long been accused of jumping on the bot-propaganda train to boost their political fortunes. In fact, Cyabra noted last year that pro-Trump bots were even trying to sow division amongst Republicans and hurt Trump opponents like Nikki Haley.
Still, Cyabra’s research, both then and now, shows that supporters of both the left and the right are involved in the bot game – and that, in this case, much of the bot-generated content was negative about Trump, which contradicts assumptions that his supporters largely operate bots. It’s also a stark reminder to ensure you’re dealing with humans in your next online debate.
In the meantime, check out Cyabra’s findings in full by clicking the button below.
Social media's AI wave: Are we in for a “deepfakification” of the entire internet?
In this episode of GZERO AI, Taylor Owen, professor at the Max Bell School of Public Policy at McGill University and director of its Centre for Media, Technology & Democracy, looks into the phenomenon he terms the "deepfakification" of social media. He points out the evolution of our social feeds, which began as platforms primarily for sharing updates with friends, and are now inundated with content generated by artificial intelligence.
So 2024 might just end up being the year of the deepfake. Not some fake Joe Biden video or deepfake pornography of Taylor Swift. Those are definitely problems, definitely going to be a big thing this year. But what I see as a bigger problem is what might be called the “deepfakification” of the entire internet, and definitely of our social feeds.
Cory Doctorow has called this more broadly the “enshittification” of the internet. And I think the way AI is playing out in our social media is a very good example of this. What we see in our social media feeds has been an evolution. It began with information that our friends shared. It then merged in content that an algorithm thought we might want to see. It then became clickbait and content designed to target our emotions via these same algorithmic systems. But now, when many people open their Facebook or their Instagram or their TikTok feeds, what they’re seeing is content that’s been created by AI. AI content is flooding Facebook and Instagram.
So what's going on here? Well, in part, these companies are doing what they've always been designed to do, to give us content optimized to keep our attention.
If this content happens to be created by an AI, it might even do that better. It might be designed by the AI specifically to keep our attention. And AI is proving a very useful tool for doing this. But this has had some crazy consequences. It’s led to the rise, for example, of AI influencers: rather than real people selling us ideas or products, these are AIs. Companies like Prada and Calvin Klein have hired an AI influencer named Lil Miquela, who has over 2.5 million followers on TikTok. A model agency in Barcelona created an AI model after having trouble dealing with the schedules and demands of prima donna human models. They say they didn’t want to deal with people with egos, so they built an AI model instead.
And that AI model brings in as much as €10,000 a month for the agency. But I think this gets at a far bigger issue, and that's that it's increasingly difficult to tell if the things we're seeing are real or if they're fake. If you scroll from the comments of one of these AI influencers like Lil Miquela’s page, it's clear that a good chunk of her followers don't know she's an AI.
Now, platforms are starting to deal with this a bit. TikTok requires users themselves to label AI content, and Meta says it will flag AI-generated content. But for this to work, they need a way of signaling it effectively and reliably to us as users, and they just haven’t done that. Here’s the thing, though: we can make them do it. The Canadian government, in its new Online Harms Act, for example, demands that platforms clearly identify AI- or bot-generated content. We can do this, but we have to make the platforms do it. And that can’t come a moment too soon.
- Why human beings are so easily fooled by AI, psychologist Steven Pinker explains ›
- The geopolitics of AI ›
- AI and Canada's proposed Online Harms Act ›
- AI at the tipping point: danger to information, promise for creativity ›
- Will Taylor Swift's AI deepfake problems prompt Congress to act? ›
- Deepfake porn targets high schoolers ›
Can the government dictate what’s on Facebook?
The Supreme Court heard arguments on Monday from groups representing major social media platforms, which argue that new laws in Florida and Texas restricting their ability to deplatform users are unconstitutional. It’s a big test of how free speech is interpreted when it comes to private technology companies with immense reach as platforms for information and debate.
Supporters of the states’ laws originally framed them as measures meant to stop the platforms from unfairly singling out conservatives for censorship – for example, when X (then Twitter) booted President Donald Trump over his tweets during the January 6 Capitol riot.
What do the states’ laws say?
The Florida law prevents social media platforms from banning any candidates for public office, while the Texas one bans removing any content because of a user’s viewpoint. As the 5th Circuit Court of Appeals put it, Florida “prohibits all censorship of some speakers,” while Texas “prohibits some censorship of all speakers.”
Social media platforms say the First Amendment protects them either way, and that they aren’t required to transmit everyone’s messages the way a telephone company – viewed as a public utility – is. Supporters of the laws say the platforms are essentially a town square now, and that the government has an interest in keeping discourse totally open – in other words, that they are more like a phone company than a newspaper.
What does the court think?
The justices seemed broadly skeptical of the Florida and Texas laws during oral arguments. As Chief Justice John Roberts pointed out, the First Amendment doesn’t empower the state to force private companies to platform every viewpoint.
The justices look likely to send the case back down to a lower court for further litigation, which would preserve the status quo for now. But if they choose to rule, we could be waiting until June.
TikTok videos go silent amid deafening calls for safety guardrails
It's time for TikTokers to enter their miming era. Countless videos suddenly went silent on Thursday as music from top stars like Drake and Taylor Swift disappeared from the popular app. The culprit? Universal Music Group – the world’s largest record company – failed to secure a new licensing deal with the powerful video-sharing platform.
In an open letter, UMG blamed TikTok for “trying to build a music-based business, without paying fair value for the music.” UMG claimed TikTok “responded first with indifference, and then with intimidation” when pressed not only on artist royalties but also on restrictions on AI-generated content and a push for user safety.
It’s been a rough week for TikTok CEO Shou Zi Chew. He joined CEOs from Meta, X, and Discord for a grilling on Capitol Hill this week over the dangers of abuse and exploitation that children face on their platforms. Sen. Lindsey Graham went so far as to say these companies have “blood on their hands.” The hearing followed last year’s public health advisory from the Surgeon General, which argued that social media presents “a risk of harm” to youth mental health and called for “urgent action” from these companies.
The big takeaway: It appears social media companies are quite agile when under pressure and can change the user experience for billions of people at the drop of a hat, especially when profit margins are involved. Imagine what these companies could do if they put that energy into the health of their users instead.
Graphic Truth: Where does the US get its online news?
Facebook has a well-documented history of being a breeding ground for misinformation, which remains a topic of concern in Washington with the 2024 election on the horizon.
Pew found that half of US adults get their news from social media at least some of the time, while 30% regularly get their news from Facebook. Next up was YouTube, followed by Instagram, TikTok, and X, formerly known as Twitter. Like Facebook, all of these platforms have also faced issues with the spread of disinformation as well as rampant hate speech.
Fighting online hate: Global internet governance through shared values
After a terrorist attack on a mosque in Christchurch, New Zealand was live-streamed on the internet in 2019, the Christchurch Call was launched to counter the increasing weaponization of the internet and to ensure that emerging tech is harnessed for good.
In a recent Global Stage livestream from the sidelines of the 78th UN General Assembly, former New Zealand Prime Minister Dame Jacinda Ardern discussed the challenges and disparities inherent in the ever-evolving digital age, ranging from unrestricted online platforms in liberal democracies to severe content limitations in certain countries.
“If you look beyond just liberal democracies, on the one hand you have the discussion about free speech and the view that some hold around being able to use online platforms to publish just about anything. Then in some countries, the inability to publish anything at all,” said Ardern.
In her new role as Special Envoy for the Christchurch Call, she advocated for departing from conventional country-centric strategies and proposed a foundation built on shared values instead, prioritizing the safeguarding of human rights and the preservation of an open internet over national interests. “Let's establish the value set, the common problem identification, to bring everyone around the table.”
Watch the full Global Stage Livestream conversation here: Hearing the Christchurch Call
- Hearing the Christchurch Call ›
- Facebook allows "lies laced with anger and hate" to spread faster than facts, says journalist Maria Ressa ›
- What We’re Watching: Ardern's shock exit, sights on Crimea, Bibi’s budding crisis, US debt ceiling chaos ›
- Jacinda Ardern on the Christchurch Call: How New Zealand led a movement ›
Staving off "the dark side" of artificial intelligence: UN Deputy Secretary-General Amina Mohammed
Artificial Intelligence promises revolutionary advances in the way we work, live and govern ourselves, but is it all a rosy picture?
United Nations Deputy Secretary-General Amina Mohammed says that while the potential benefits are enormous, “so is the dark side.” Without thoughtful leadership, the world could lose a precious opportunity to close major social divides. She spoke during a Global Stage livestream event at UN headquarters in New York on September 22, on the sidelines of the UN General Assembly. The discussion was moderated by Nicholas Thompson of The Atlantic and was held by GZERO Media in collaboration with the United Nations, the Complex Risk Analytics Fund, and the Early Warnings for All initiative.
She says it will take a “transformative mindset” and an eagerness to tackle more and bigger problems to pull off the transition, and she emphasizes the severe mismatch between capable leadership and positions of power.
"Where there is leadership, there's not much power. And where there is power, that leadership is struggling,” she said.
Watch the full Global Stage conversation: Can data and AI save lives and make the world safer?
- The UN will discuss AI rules at this week's General Assembly ›
- Ian Bremmer: How AI may destroy democracy ›
- AI at the tipping point: danger to information, promise for creativity ›
- Can data and AI save lives and make the world safer? ›
- Podcast: Artificial intelligence new rules: Ian Bremmer and Mustafa Suleyman explain the AI power paradox ›
- How should artificial intelligence be governed? ›
- Will consumers ever trust AI? Regulations and guardrails are key ›
- Governing AI Before It’s Too Late ›
- The AI power paradox: Rules for AI's power ›