What’s behind Trump & Musk’s public feud?
Elon Musk and President Donald Trump’s White House bromance imploded in spectacular fashion last week in a feud that played out in full view of the public, with the two billionaires trading insults in real time on social media. That fight appears to be cooling down, at least for now, but President Trump has made it clear he has no intention of mending the relationship any time soon. On a special edition of GZERO World, Semafor Co-Founder and Editor-in-Chief Ben Smith joins Ian Bremmer to discuss what led to the breakup and where the Trump administration’s relationship with Silicon Valley goes from here.
There’s a lot at stake: for Trump, Musk’s political funding ahead of the 2026 midterms; for Musk, billions in government contracts and subsidies for his companies. Smith and Bremmer discuss Trump and Musk’s alliance of convenience, where it all broke down, the role of political journalism in covering such a public conflict, and who ultimately has the upper hand in the battle between political leaders and tech oligarchs.
“[President] Trump’s power to reward his friends and punish his enemies in the very short term, in very consequential and permanent ways, is totally unprecedented,” Smith says. “Business leaders, civil society, and the media can be intimidated, and Trump knows it. He usually wins that game of chicken.”
GZERO World with Ian Bremmer, the award-winning weekly global affairs series, airs nationwide on US public television stations (check local listings).
New digital episodes of GZERO World are released every Monday on YouTube. Don't miss an episode: subscribe to GZERO's YouTube channel and turn on notifications (🔔).
“Hey Vale” – a live-streamed killing and the scourge of femicide in Latin America
Last Wednesday afternoon, Valeria Márquez, a 23-year-old Mexican cosmetics and lifestyle influencer with more than 200,000 followers on social media, set up a camera and began livestreaming on TikTok from her beauty salon near Guadalajara, Mexico.
Moments later, as she spoke to her followers while holding a stuffed animal, a man entered the salon.
“Hey Vale?” he asks from off camera, using a casual nickname for Márquez as he apparently offers her a gift. He then shoots her to death, picks up the camera, and switches it off.
Several days later, Maria José Estupiñán, a Colombian model and social media star, was also gunned down in the doorway of her home in the border town of Cúcuta by an apparent stalker.
The killings of the two women, both relatively affluent, young, and with large public profiles, have shaken the two countries, throwing fresh attention on the wider problem in Latin America of femicide – the killing of women or girls because of their gender.
Mexican President Claudia Sheinbaum has assigned her top security team to investigate the killing of Márquez, which authorities have already classified as a femicide.
According to a study published late last year, roughly 11 women were murdered every day in femicides in the region in 2023. The most dangerous countries were Honduras, where 7.2 out of every 100,000 women died in femicides, and the Dominican Republic, where the rate was nearly 3 per 100,000. In Colombia, local watchdogs recorded nearly 900 femicides last year, a seven-year high.
As elsewhere in the world, the vast majority of these crimes are committed by men who are known to the victims – current, former, or aspiring romantic partners, as well as male family members.
But addressing the problem, experts say, is a complicated mix of changing laws and shaping minds.
Over the past three decades, countries throughout the region have passed at least some legislation to address violence against women, pushed both by United Nations conventions on violence against women, and activist movements like Ni Una Menos (“Not one woman less”), founded in Argentina a decade ago in response to the murder of a pregnant, 14-year-old girl at the hands of her boyfriend.
Some countries have gone further, developing specific frameworks for the documentation and prosecution of femicide. Mexico and Colombia, in fact, have some of the strictest laws on the books, says Beatriz García Nice, a gender-based violence expert based in Ecuador. But laws aren’t enough.
“It’s not that we lack laws,” she says, “it’s that there is impunity and the lack of enforcement.”
One part of that comes from deeply ingrained social norms, she says.
“We have to change cultural traits so that you’re not teaching kids, especially boys, that women are property, or that their only role in society is to belong to a man.”
Corruption also plays a role in a region where graft and nepotism are rampant in judiciary systems. In Mexico, for example, a 2023 study showed that nearly half of people lack confidence in the judiciary, and close to 90% said they didn’t report crimes for that reason.
This fuels reluctance to report gender-based crimes as well – more than 85% of women in Mexico, Honduras, and Ecuador say they don’t report episodes of physical or psychological violence. That matters because femicide, García Nice points out, is only the gruesome end of a long road that begins with other kinds of abuse.
The rise of the influencer economy can make things even worse, especially as legal frameworks addressing online harassment of women are still relatively weak in Latin America.
“Online violence bleeds into offline violence,” says Rangita de Silva de Alwis, a University of Pennsylvania law school professor who is on the UN committee that focuses on eradicating violence against women.
“The impunity that we see in the online world has real world consequences.”
The FTC’s concern about Snapchat’s My AI chatbot
On Thursday, the US Federal Trade Commission referred a complaint to the Justice Department concerning Snapchat’s artificial intelligence chatbot, My AI. The FTC doesn’t usually disclose these referrals but felt it was in the public interest to do so, citing potential “risks and harms” to young users of the social media app.
My AI is a chatbot built on OpenAI’s and Google’s large language models and accessible as part of the Snapchat app. It’s been criticized as “wildly inappropriate” for Snapchat’s largely teenage audience. The UK had also launched an investigation over teen privacy concerns but closed it in May, issuing a warning to the entire tech industry to put privacy first before rolling out AI tools.
That said, the FTC has yet to disclose what the actual complaint against My AI is about. In response, Snapchat’s parent company told reporters that the complaint is “based on inaccuracies and lacks concrete evidence.” It also said there are “serious First Amendment concerns” and criticized the timing of the announcement — “on the last day of this administration.” It’s unclear whether Trump’s Justice Department leadership will take up a case against Snapchat based on this referral, but it adds a potential Big Tech AI case to Trump’s docket from day one.
Looks like the TikTok ban is coming. Probably. And with unintended consequences
Barring an eleventh-hour reprieve, TikTok’s operations in the US are likely to be shut down on Sunday. China is said to be considering a sale of TikTok’s US operations to X owner Elon Musk, while the incoming administration seeks a pause on the ban so it can pursue a deal to keep the app running. While both of those options look unlikely, at least in the short term, President-elect Donald Trump is considering an executive order that would delay enforcement of the ban for 60 to 90 days.
The Supreme Court hasn’t ruled on a challenge to the ban yet, nor is it required to by the Sunday deadline. The law, passed in April, only requires that US app stores no longer carry or permit updates of TikTok, and that internet service providers block access to the TikTok website. That would leave existing users with access to the platform, though it would degrade over time. But ByteDance, the social media platform’s owner, announced Wednesday that it is preparing to fully shut down the app in the US when the ban comes into effect.
Meanwhile, in a case of unintended consequences, TikTok users have been signing up en masse for China’s TikTok equivalent, RedNote — or Xiaohongshu, which translates to “little red book.” The shift is connecting US and Chinese social media users, which means that one of the aims of the TikTok ban, keeping US social media users away from China, may come up short of its goal. But it’s also exposing Chinese users to thousands of Western voices – something Beijing may not appreciate either.
TikTok ban likely to be upheld
On Friday, the Supreme Court appeared poised to uphold the TikTok ban, largely dismissing the app’s argument that it should be able to exist in the US under the First Amendment’s free speech protections and favoring the government's concerns that it poses a national security threat.
Put simply, they see it as an issue of national security, not free speech.
“Congress doesn’t care about what’s on TikTok. They don’t care about the expression,” said Chief Justice John Roberts during questioning, adding, “That’s shown by the remedy. They’re not saying TikTok has to stop. They’re saying the Chinese have to stop controlling TikTok.”
What’s the threat? US lawmakers are concerned about the Chinese government having access to enormous amounts of Americans’ data – and fear the app could be used to spread Beijing’s agenda. Facebook and other American social media platforms are notably banned in China – with Beijing taking a similar view to that of the US government. The justices seemed worried that TikTok could be used for espionage or even blackmail.
What does upholding the ban mean for the app? If the court rules against the app, ByteDance, TikTok’s parent company, must divest from it before Jan. 19 or face a nationwide ban on national security grounds. The app would no longer be available on the Google or Apple app stores.
But it won’t disappear from your phone if you already have it downloaded. The ban would only affect future downloads. Without the ability to update the app, however, it will likely degrade, and TikTok may block US users before that happens to avoid further legal issues. Incoming President Donald Trump has pledged to save the app, but there is no clear legal method to do so.
The decision could be an early reflection of one of this year’s Top Risks 2025 from our parent company, Eurasia Group: the breakdown of the US-China relationship. The world’s biggest superpowers increasingly distrust one another, and Trump’s return to office is likely to exacerbate the decoupling — increasing the risk of instability and crisis.
Opinion: Social media warped my perception of reality
Over the past week, the algorithms that shape my social media feeds have been serving up tons of content about the Major League Baseball playoffs. This is because the algorithms know that I am a fan of the Mets, who have been – you should know – on a surreal playoff run for the last two weeks.
A lot of that content is the usual: sportswriter opinion pieces or interviews with players talking about how their teams are “a great group of guys just trying to go out there and win one game at a time,” or team accounts rallying their fan bases with slick highlight videos or “drip reports” on the players’ fashion choices.
But there’s been a lot of uglier stuff too: Padres and Dodgers fan pages threatening each other after some on-field tension between the two teams and their opposing fanbases last week. Or a Mets fan page declaring “war” on Phillies fans who had been filmed chanting “f*ck the Mets” on their way out of their home stadium after a win. Or a clip of a Philly fan’s podcast in which he mocked Mets fans for failing to make Phillies fans feel "fear" at the Mets' ballpark.
As a person who writes often about political polarization for a living, my first thought upon seeing all this stuff was: aha, further evidence that polarization is fueling a deep anger and violence in American life, which is now bleeding into sports, making players more aggressive and fans more violent.
But in fact, there isn’t much evidence for this. Baseball games and crowds are actually safer now than in the past.
I had fallen for distorted social media reflections of the real world. It’s what some experts call the “Funhouse Mirror” aspect of the internet.
One of those experts is Claire Robertson, a postgraduate research fellow in political psychology at NYU and the University of Toronto, who studies how the online world warps our understanding of the offline world.
Since Robertson recently published a new paper on precisely this subject, I called her up to ask why it’s so easy for social media to trick us into believing that things are worse than they actually are.
Part of the problem, she says, is that “the things that get the most attention on social media tend to be the most extreme ones.” And that’s because of a nasty feedback loop between two things: first, an incentive structure for social media where profits depend on attention and engagement; and second, our natural inclination as human beings to pay the most attention to the most sensational, provocative, or alarming content.
“We’ve evolved to pay attention to things that are threatening,” says Robertson. “So it makes more sense for us to pay attention to a snake in the grass than to a squirrel.”
And as it happens, a huge amount of those snakes are released into social media by a very small number of people. “A lot of people use social media,” says Robertson, “but far fewer actually post – and the most ideologically extreme people are the most likely to post.”
People with moderate opinions, which is actually most people, tend to fare poorly on social media, says Robertson. One study of Reddit showed that 33% of all content was generated by just 3% of accounts – those that spew hate. Another revealed that 80% of fake news on Facebook came from just 0.1% of all accounts.
“But the interesting thing,” she says, “is, what’s happening to the 99.9% of people that aren’t sharing fake news? What's happening to the good actors? How does the structure of the internet, quite frankly, screw them over?”
In fact, we screw ourselves over, and we can’t help it. Blame our brains. For the sake of efficiency, our gray matter is wired to take some shortcuts when we seek to form views about groups of people in the world. And social media is where a lot of us go to form those opinions.
When we get there, we are bombarded, endlessly, with the most extreme versions of people and groups – “Socialist Democrats” or “Fascist Republicans” or “Pro-Hamas Arabs” or “Genocidal Jews” or “immigrant criminals” or “racist cops.” As a result, we start to see all members of these groups as hopelessly extreme, bad, and threatening in the real world too.
Small wonder that Democrats’ and Republicans’ opinions of each other in the abstract have, over the past two decades, gotten so much worse. We don’t see each other as ideological opponents with different views but, increasingly, as existential threats to each other and our society.
Of course, it only makes matters worse when people in the actual real world are committed to spreading known lies – say, that elections are stolen or that legal immigrants who are going hungry are actually illegal immigrants who are eating cats.
But what’s the fix for all of this? Regulators in many countries are turning to tighter rules on content moderation. But Robertson says that’s not effective. For one thing, it raises “knotty” philosophical questions about what should be moderated and by whom. But beyond that, it’s not practical.
“It’s a hydra,” she says. “If you moderate content on Twitter, people who want to see extreme content are going to go to 4chan. If you moderate the content on 4chan, they’re going to go somewhere else.”
Rather than trying to kill the supply of toxic crap on social media directly, Robertson wants to reduce the demand for it, by getting the rest of us to think more critically about what we see online. Part of that means stopping to compare what we see online with what we know about the actual human beings in our lives – family, friends, neighbors, colleagues, classmates.
Do all “Republicans” really believe the loony theory that Hurricane Milton is a man-made weather event? Or is that just the opinion of one particularly fringe Republican? Do all people calling for an end to the suffering in Gaza really “support Hamas,” or is that the view of a small fringe with outsized exposure on social media?
“When you see something that’s really extreme and you start to think everybody must think that, really think: ‘Does my mom believe that? Do my friends believe that? Do my classmates believe that?’ It will help you realize that what you are seeing online is not actually a true reflection of reality.”
Brazil vs. Musk: Now in low Earth orbit
The battle between Brazil and Elon Musk has now reached the stars — or the Starlink, at least — as the billionaire’s satellite internet provider refuses orders from Brazil’s telecom regulator to cut access to X.
The background: Brazil’s Supreme Court last week ordered all internet providers in Latin America’s largest economy to cut access to X amid a broader clash with the company over an order to suspend accounts that the court says spread hate speech and disinformation.
That order came after X racked up some $3 million in related fines, which Brazil has now tried to collect by freezing the local assets of Starlink, a separate company from X.
Starlink says it won’t comply with the order to block X until those assets are unfrozen and has offered Brazilians free internet service while the dispute continues.
Brazil is one of X’s largest markets, with about 40 million monthly users. But both sides have dug in as this becomes a high-profile battle over free speech vs. national sovereignty.
What’s next? It’s hard for the Brazilian government to stop Starlink signals from reaching users, but it could shutter about two dozen ground stations in the country that are part of the company’s network.
Opinion: Pavel Durov, Mark Zuckerberg, and a child in a dungeon
Perhaps you have heard of the city of Omelas. It is a seaside paradise. Everyone there lives in bliss. There are churches but no priests. Sex and beer are readily available but consumed only in moderation. There are carnivals and horse races. Beautiful children play flutes in the streets.
But Omelas, the creation of science fiction writer Ursula Le Guin, has an open secret: There is a dungeon in one of the houses, and inside it is a starving, abused child who lives in its own excrement. Everyone in Omelas knows about the child, who will never be freed from captivity. The unusual, utopian happiness of Omelas, we learn, depends entirely on the misery of this child.
That’s not the end of the tale of Omelas, which I’ll return to later. But the story's point is that it asks us to think about the prices we’re willing to pay for the kinds of worlds we want. And that’s why it’s a story that, this week at least, has a lot to do with the internet and free speech.
On Saturday, French police arrested Pavel Durov, the Russian-born CEO of Telegram, at an airport near Paris.
Telegram is a Wild West sort of messaging platform, known for lax moderation, shady characters, and an openness to dissidents from authoritarian societies. It’s where close to one billion people can go to chat with family in Belarus, hang out with Hamas, buy weapons, plot Vladimir Putin’s downfall, or watch videos of Chechen warlord Ramzan Kadyrov shooting machine guns at various rocks and trees.
After holding Durov for three days, a French court charged him on Wednesday with a six-count rap sheet and released him on $6 million bail. French authorities say Durov refused to cooperate with investigations of groups that were using Telegram to violate European laws: money laundering, trafficking, and child sexual abuse offenses. Specifically, they say, Telegram refused to honor legally obtained warrants.
A chorus of free speech advocates has rushed to his defense. Chief among them is Elon Musk, who responded to Durov’s arrest by suggesting that, within a decade, Europeans will be executed for merely liking the wrong memes. Musk himself is in Brussels’ crosshairs over whether X moderates content in line with (potentially subjective) hate speech laws.
Somewhat less convincingly, the Kremlin – the seat of power in a country where critics of the government often wind up in jail, in exile, or in a pine box – raised the alarm about Durov’s arrest, citing it as an assault on freedom of speech.
I have no way of knowing whether the charges against Durov have merit. That will be for the French courts to decide. And it is doubtless true that Telegram provides a real free speech space in some truly rotten authoritarian societies. (I won’t believe the rumors of Durov’s collusion with the Kremlin until they are backed by something more than the accident of his birthplace.)
But based on what we do know so far, the free speech defense of Durov comes from a real-world kind of Omelas.
Even the most ferocious free speech advocates understand that there are reasonable limitations. Musk himself has said X will take down any content that is “illegal.”
Maybe some laws are faulty or stupid. Perhaps hate speech restrictions really are too subjective in Europe. But if you live in a world where the value of free speech on a platform like Telegram is so high that it should be functionally immune from laws that govern, say, child abuse, then you are picking a certain kind of Omelas that, as it happens, looks very similar to Le Guin’s. A child may pay the price for the utopia that you want.
But at the same time, there’s another Omelas to consider.
On Tuesday, Mark Zuckerberg sent a letter to Congress in which he admitted that during the pandemic, he had bowed to pressure from the Biden administration to suppress certain voices who dissented from the official COVID messaging.
Zuck said he regretted doing so – the sense being that the banned content wasn’t, in hindsight, really worth banning – and that his company would speak out “more forcefully” against government pressure next time.
Just to reiterate what he says happened: The head of the world’s most powerful government got the head of the world’s most powerful social media company to suppress certain voices that, in hindsight, shouldn’t have been suppressed. You do not have to be part of the Free Speech Absolutist Club™ to be alarmed by that.
It’s fair to say, look, we didn’t know then what we later learned about a whole range of pandemic policies on masking, lockdowns, school closures, vaccine efficacy, and so on. And there were plenty of absolutely psychotic and dangerous ideas floating around, to be sure.
What’s more, there are plenty of real problems with social media, hate, and violence – the velocity of bad or destructive information is immense, and the profit incentives behind echo-chambering turn the marketplace of ideas into something more like a food court of unchecked grievances.
But in a world where the only way we know how to find the best answers is to inquire and critique, governments calling audibles on what social media sites can and can’t post is a road to a dark place. It’s another kind of Omelas – a utopia of officially sanitized “truths,” where a person with a different idea about what’s happening may find themselves locked away.
At the end of Le Guin’s story, by the way, something curious happens. A small number of people make a dangerous choice. Rather than live in a society where utopia is built on a singular misery, they simply leave.
Unfortunately, we don’t have this option. We are stuck here.
So what’s the right balance between speech and security that won’t leave anyone in a dungeon?