Can the government dictate what’s on Facebook?
The Supreme Court heard arguments on Monday from groups representing major social media platforms which argue that new laws in Florida and Texas that restrict their ability to deplatform users are unconstitutional. It’s a big test for how free speech is interpreted when it comes to private technology companies that have immense reach as platforms for information and debate.
Supporters of the states’ laws originally framed them as measures to stop the platforms from unfairly singling out conservatives for censorship – for example, when X (then Twitter) booted President Donald Trump over his tweets surrounding the Jan. 6 attack on the Capitol.
What do the states’ laws say?
The Florida law prevents social media platforms from banning any candidates for public office, while the Texas one bans removing any content because of a user’s viewpoint. As the 5th Circuit Court of Appeals put it, Florida “prohibits all censorship of some speakers,” while Texas “prohibits some censorship of all speakers.”
Social media platforms say the First Amendment protects them either way: they aren’t required to transmit everyone’s messages the way a telephone company, which is treated as a public utility, is. Supporters of the laws counter that the platforms are essentially today’s town square, and that the government has an interest in keeping discourse open – in other words, that they are more like a phone company than a newspaper.
What does the court think?
The justices seemed broadly skeptical of the Florida and Texas laws during oral arguments. As Chief Justice John Roberts pointed out, the First Amendment doesn’t empower the state to force private companies to platform every viewpoint.
The justices look likely to send the case back down to a lower court for further litigation, which would keep the status quo for now, but if they choose to rule, we could be waiting until June.
TikTok videos go silent amid deafening calls for safety guardrails
It's time for TikTokers to enter their miming era. Countless videos suddenly went silent as music from top stars like Drake and Taylor Swift disappeared from the popular app on Thursday. The culprit? Universal Music Group – the world’s largest record company – failed to secure a new licensing deal with the video platform.
In an open letter, UMG blamed TikTok for “trying to build a music-based business, without paying fair value for the music.” UMG claimed TikTok “responded first with indifference, and then with intimidation” after being pressed not only on artist royalties but also on restrictions on AI-generated content and stronger user safety measures.
It’s been a rough week for CEO Shou Zi Chew. He joined CEOs from Meta, X, and Discord for a grilling on Capitol Hill this week over the dangers of abuse and exploitation that children face on their platforms. Sen. Lindsey Graham went so far as to say these companies have “blood on their hands.” The hearing followed last year’s public health advisory released by the Surgeon General that argued social media presents “a risk of harm” to youth mental health and called for “urgent action” from these companies.
The big takeaway: It appears social media companies are quite agile when under pressure and can change the user experience for billions of people at the drop of a hat, especially when profit margins are involved. Imagine what these companies could do if they put that energy into the health of their users instead.
Graphic Truth: Where does the US get its online news?
Facebook has a well-documented history of being a breeding ground for misinformation, which continues to be a topic of concern in Washington with the 2024 election on the horizon.
Pew found that half of US adults get their news from social media at least some of the time, while 30% regularly get their news from Facebook. Next up was YouTube, followed by Instagram, TikTok, and X, formerly known as Twitter. Like Facebook, all of these platforms have also faced issues with the spread of disinformation as well as rampant hate speech.
Fighting online hate: Global internet governance through shared values
After a terrorist attack on a mosque in Christchurch, New Zealand was live-streamed on the internet in 2019, the Christchurch Call was launched to counter the increasing weaponization of the internet and to ensure that emerging tech is harnessed for good.
In a recent Global Stage livestream from the sidelines of the 78th UN General Assembly, former New Zealand Prime Minister Dame Jacinda Ardern discussed the challenges and disparities inherent in the ever-evolving digital age, ranging from unrestricted online platforms in liberal democracies to severe content limitations in certain countries.
“If you look beyond just liberal democracies, on the one hand you have the discussion about free speech and the view that some hold around being able to use online platforms to publish just about anything. Then in some countries, the inability to publish anything at all,” said Ardern.
In her new role as Special Envoy for the Christchurch Call, she advocated departing from conventional country-centric strategies and proposed instead a foundation built upon shared values, prioritizing the safeguarding of human rights and the preservation of an open internet over national interests. “Let's establish the value set, the common problem identification to bring everyone around the table.”
Watch the full Global Stage Livestream conversation here: Hearing the Christchurch Call
- Hearing the Christchurch Call ›
- Facebook allows "lies laced with anger and hate" to spread faster than facts, says journalist Maria Ressa ›
- What We’re Watching: Ardern's shock exit, sights on Crimea, Bibi’s budding crisis, US debt ceiling chaos ›
- Jacinda Ardern on the Christchurch Call: How New Zealand led a movement ›
Staving off "the dark side" of artificial intelligence: UN Deputy Secretary-General Amina Mohammed
Artificial Intelligence promises revolutionary advances in the way we work, live and govern ourselves, but is it all a rosy picture?
United Nations Deputy Secretary-General Amina Mohammed says that while the potential benefits are enormous, “so is the dark side.” Without thoughtful leadership, the world could lose a precious opportunity to close major social divides. She spoke during a Global Stage livestream event at UN headquarters in New York on September 22, on the sidelines of the UN General Assembly. The discussion was moderated by Nicholas Thompson of The Atlantic and was held by GZERO Media in collaboration with the United Nations, the Complex Risk Analytics Fund, and the Early Warnings for All initiative.
She says it will take a “transformative mindset” and an eagerness to tackle more and bigger problems to pull off the transition, and emphasizes the severe mismatch between capable leadership and positions of power.
"Where there is leadership, there's not much power. And where there is power, that leadership is struggling,” she said.
Watch the full Global Stage conversation: Can data and AI save lives and make the world safer?
- The UN will discuss AI rules at this week's General Assembly ›
- Ian Bremmer: How AI may destroy democracy ›
- AI at the tipping point: danger to information, promise for creativity ›
- Can data and AI save lives and make the world safer? ›
- Podcast: Artificial intelligence new rules: Ian Bremmer and Mustafa Suleyman explain the AI power paradox ›
- How should artificial intelligence be governed? ›
- Will consumers ever trust AI? Regulations and guardrails are key ›
- Governing AI Before It’s Too Late ›
- The AI power paradox: Rules for AI's power ›
Why human beings are so easily fooled by AI, psychologist Steven Pinker explains
There's no question that AI will change the world, but the jury is still out on exactly how. One thing, though, is already clear: people are going to confuse it with humans. And we know this because it's already happening. That's according to Harvard psychologist Steven Pinker, who joined Ian Bremmer on GZERO World for a wide-ranging conversation about his surprisingly optimistic outlook on the world and the way that AI may affect it.
"People are too easily fooled. It doesn't take much to fool a user or an observer into attributing a lot of intelligence to the system that they're dealing with, even if it's rather stupid."
So what should regulators do to rein AI in? Especially when it comes to children?
Watch the GZERO World episode: Is life better than ever for the human race?
Catch GZERO World with Ian Bremmer every week at gzeromedia.com/gzeroworld and on US public television. Check local listings.
- Emotional AI: More harm than good? ›
- Altman to Congress: Regulate me, please. ›
- Ian Bremmer explains: Should we worry about AI? ›
- Be very scared of AI + social media in politics ›
- Podcast: Tracking the rapid rise of human-enhancing biotech with Siddhartha Mukherjee - GZERO Media ›
- AI & human rights: Bridging a huge divide - GZERO Media ›
- Yuval Noah Harari: AI is a “social weapon of mass destruction” to humanity - GZERO Media ›
Christchurch Call had a global impact on tech giants - Microsoft's Brad Smith
The Christchurch killer livestreamed his heinous crimes, highlighting a macabre threat lurking within the relatively new field of social media: extremists could use the technology to reach millions of people — and perhaps even find in that audience an incentive for their violence.
In a recent Global Stage livestream from the sidelines of the 78th UN General Assembly, Microsoft Vice Chair and President Brad Smith said the technology industry set out to ensure extremists could “never again” reach mass audiences during massacres. Tech companies, governments, and civil society groups worked together on the so-called Content Incident Protocol, a sort of digital emergency response plan.
Now, people are on call 24/7 to intervene early, shut down broadcasts, and cooperate with authorities. Smith says the impact has been transformative and urged further efforts to enhance safety against online extremism.
Watch the full Global Stage Livestream conversation here: Hearing the Christchurch Call
Jacinda Ardern on the Christchurch Call: How New Zealand led a movement
During a Global Stage livestream conversation hosted by GZERO in partnership with Microsoft on the sidelines of the UN General Assembly, the former New Zealand Prime Minister Jacinda Ardern revealed that when she reached for her phone to share the heartbreaking news of the Christchurch massacre, she found a horrifying surprise: A livestream of the massacre served to her on a social media platform.
For a period of 24 hours, copies of the footage were uploaded to YouTube as often as once per second, spreading the 17-minute massacre faster than tech companies could shut it down.
The experience drives her work at the Christchurch Call, which combats online extremism and works with governments and civil society to build guardrails against the exploitation of technology by extremists, she explained.
Watch the full Global Stage Livestream conversation here: Hearing the Christchurch Call