Hard Numbers: X’s neo-Nazi problem, China’s export extravaganza, America’s economic bounce, Oreo’s antitrust woes, Russia’s bumpy flights
5.3: China’s economy, the world’s second-largest, grew more than many experts expected, expanding by 5.3% compared to the same period last year. That beat analysts' predictions by 0.7 points. The boom was driven largely by huge investments in manufacturing for export – in particular solar panels, cars, and steel. Concerns remain about the persistent weakness of China’s property sector, but at this pace, China will comfortably hit its “around 5%” growth target.
2.7: Meanwhile, the world’s largest economy, the US, is projected to grow 2.7% this year, according to new IMF figures. That’s not quite on China’s level, but it’s still double the rate of any other member of the G7, the club of the world’s largest democratic economies. Coupled with China’s strong showing, the US economic boom has helped to stave off a global recession.
340 million: The company that owns Oreos is about to get dunked, it seems. An EU antitrust probe has found that Mondelez, the US-based company that also makes Toblerone bars and Cadbury chocolates, deliberately restricted the flow of its products between European countries in a bid to keep prices higher. The company has reportedly set aside €340 million ($360 million) for the coming fine.
20*: Flying in Russia is getting more dangerous but less fatal. Western sanctions on plane parts and servicing have caused a sharp rise in aircraft malfunctions, but Russia’s 20 air travel deaths in 2023 were still the lowest in a decade. The asterisk is for the late Yevgeny Prigozhin, of Wagner insurrection fame, who was killed along with nine others when his plane went down last August. Authorities say the possibility of foul play means the incident isn’t included as a conventional air travel death.
Atwood and Musk agree on Online Harms Act
Space capitalist Elon Musk and Canadian literary legend Margaret Atwood are in agreement … both warn that Canadian legislation to bring order to cyberspace threatens freedom of speech, which suggests that Justin Trudeau may have to go back to the drawing board.
The Liberals unveiled the Online Harms Act last month, proposing a digital safety commission to target hate speech, child porn, and other dangerous content. Advocates like Facebook whistleblower Frances Haugen have called for governments to pass similar laws, and both the EU and the UK are doing so.
But the Trudeau government got a black eye from its last attempt to regulate cyberspace when Meta yanked Canadian news from its platforms rather than pay a so-called “link tax.”
So far, big American tech companies have not reacted as forcefully to this bill, but Atwood, Musk, and many experts have objected to its draconian hate speech provisions, which include life prison sentences and the use of peace bonds for potential hate speech.
Given the precariousness of Trudeau’s government, the humiliating defeat of its last big online law, and the criticisms coming from even those predisposed to support the law, the government will likely have to accept amendments in the legislative process if it wants to get this passed.
Musk takes OpenAI to court
Tesla CEO Elon Musk sued OpenAI and its CEO Sam Altman late last week, saying that they breached the terms of a contract by prioritizing their profits over the public good. In 2015, Musk helped found and fund OpenAI, the artificial intelligence research lab-turned-industry leader. He resigned as co-chair of the company’s nonprofit board of directors in 2018, citing conflicts of interest with his own company, Tesla, which was investing heavily in AI.
Now, Musk alleges that OpenAI violated the terms under which he gave money to OpenAI, but no one seems to have written down those terms.
The Verge points out that the complaint hinges on the violation of a “Founding Agreement,” an alleged oral contract that Musk feels was formed in the course of business discussions. If a court finds that a contract was formed – and courts aren’t usually friendly to oral contracts – Musk is requesting that the court compel OpenAI to revert to its original nonprofit mission, including making research data publicly available, instead of the profit-motivated one that’s turned it into an $80 billion juggernaut.
There’s one other thing that Musk-watchers should keep in mind: Musk currently runs an AI startup of his own, xAI, which has a chatbot called Grok. This means his business directly competes with OpenAI. Is it any wonder he’s resorting to litigation that could take OpenAI down a peg?
US CEOs too influential on China policy, says Rahm Emanuel
US CEOs are too cozy with Beijing, says US Ambassador to Japan Rahm Emanuel.
At the APEC summit last November in San Francisco, heads of state and diplomats from nations in the Asia-Pacific met to address a wide array of strategic interests and challenges. But no other meeting was as closely watched as that between US President Joe Biden and Chinese President Xi Jinping. As successful as that meeting may have been on a PR level (at least according to the delegations of each leader), one man present took special note of what happened afterward. US Ambassador to Japan, Rahm Emanuel, told Ian Bremmer about that summit during an exclusive interview in the latest episode of GZERO World, filmed at the Ambassador's residence in Tokyo, Japan.
"President Xi goes to have a meeting with American CEOs who give him a standing ovation, though he hasn't yet said anything," recounted Ambassador Emanuel. "The President of the United States goes to an event, and all the heads of state are there. That tells you about alliances, that tells you about the interests of China."
Bremmer then noted that it also tells you something about the interests of American CEOs, to which Emanuel responded: "I think the American CEOs are way too influential in American foreign policy in this region, way too influential."
Catch GZERO World with Ian Bremmer every week at gzeromedia.com/gzeroworld or on US public television. Check local listings.
UK AI Safety Summit brings government leaders and AI experts together
Marietje Schaake, International Policy Fellow, Stanford Human-Centered Artificial Intelligence, and former European Parliamentarian, co-hosts GZERO AI, our new weekly video series intended to help you keep up and make sense of the latest news on the AI revolution. In this episode, she takes you behind the scenes of the first-ever UK AI Safety Summit.
Last week, the AI Summit took place, and I'm sure you've read all the headlines, but I thought it would be fun to also take you behind the scenes a little bit. So I arrived early in the morning of the day that the summit started, and everybody was made to go through security between 7 and 8 AM, so pretty early, and the program only started at 10:30. So what that led to was a lengthy reception over coffee where old friends and colleagues met, new people were introduced, and all participants from business, government, civil society, academia really started to mingle.
And maybe that was a part of the success of the summit, which then started with a formal opening with remarkably global representation. There had been some discussion about whether it was appropriate to invite the Chinese government, but indeed a Chinese minister attended, along with representatives from India and Nigeria, underlining that the challenges governments have to deal with around artificial intelligence are global ones. And I think that was an important symbol that the UK government sought to underline. Now, there was a little bit of surprise in the opening when Secretary Raimondo of the United States announced the US would also initiate an AI Safety Institute, right after the UK government had announced its own. And so it did make me wonder, why not just work together globally? But I guess they each want their own institute.
And those were perhaps the more concrete, tangible outcomes of the conference. Other than that, it was more a statement of intent to look further into the risks of AI safety. Ahead of the conference, there had been a lot of discussion about whether the UK government was taking too narrow a focus on AI safety, whether it had been leaning too much toward the effective altruism, existential risk camp. But in practice, the program gave a lot of room for discussions – and I thought that was really important – about the known, current-day risks that AI presents. For example, to civil rights, when we think about discrimination, or to human rights, when we think about the threats to democracy, from both disinformation that generative AI can put on steroids and the real question of how to govern it at all when companies have so much power and there's such a lack of transparency. So civil society leaders who were worried that they were not sufficiently heard in the program will hopefully feel a little more reassured, because I spoke to a wide variety of civil society representatives who were a key part of the participants alongside government, business, and academic leaders.
So, when I talked to some of the first generation of thinkers and researchers in the field of AI, for them it was a significant moment because never had they thought that they would be part of a summit next to government leaders. I mean, for a long time they were mostly in their labs researching AI, and suddenly here they were being listened to at the podium alongside government representatives. So in a way, they were a little bit starstruck, and I thought that was funny because it was probably the same the other way around, certainly for the Prime Minister, who really looked like a proud student when he was interviewing Elon Musk. And that was another surprising development, that actually briefly, after the press conference had taken place, so a moment to shine in the media with the outcomes of the summit, Prime Minister Sunak decided to spend the airtime and certainly the social media coverage interviewing Elon Musk, who then predicted that AI would eradicate lots and lots of jobs. And remarkably, that was a topic that barely got mentioned at the summit, so maybe it was a good thing that it got part of the discussion after all, albeit in an unusual way.
- Rishi Sunak's first-ever UK AI Safety Summit: What to expect ›
- Elon Musk's geopolitical clout grows as he meets Modi ›
- Everybody wants to regulate AI ›
- Governing AI Before It’s Too Late ›
- Be very scared of AI + social media in politics ›
- Is AI's "intelligence" an illusion? ›
- The geopolitics of AI ›
- AI's impact on jobs could lead to global unrest, warns AI expert Marietje Schaake - GZERO Media ›
- AI regulation means adapting old laws for new tech: Marietje Schaake - GZERO Media ›
- AI & human rights: Bridging a huge divide - GZERO Media ›
Elon Musk's Starlink cutoff controversy
I think it's a fascinating question. And it gets to a point of what I call a technopolar world, not unipolar, not bipolar, not multipolar, technopolar. In other words, for all of our lives, we've talked about a world where nation states, where governments are the principal actors with sovereignty over outcomes that matter critically for national security. Now, here you have the Russians invading Ukraine, one of the biggest challenges to the geopolitical order since the Soviet Union collapsed in 1991. And yet, a core decision about whether or not Ukraine will be able to defend itself is being made not by the United States or NATO providing the military support, but by a technology company. Now, the Ukrainian government is being quite critical of some of the decisions that Elon Musk has made in restricting the use of Starlink for the Ukrainians.
I don't think that's fair criticism by itself. I think we need to recognize that Starlink's availability to the Ukrainians was absolutely essential in helping the government and the military leaders actually communicate with their soldiers on the front lines. And if it wasn't for Starlink, and if it wasn't for the role of many other technology companies, largely in the United States, it's not at all clear to me that Zelensky would still be in power today. Certainly the Ukrainians would have lost a lot more territory, and they'd be in a much worse position than they are. So I think that the Ukrainians still owe Elon a significant debt. But it also raises a much bigger question, which is, should an individual CEO, should an individual centibillionaire be making these decisions about outcomes of life and death for 44 million Ukrainians?
And there, the answer is much more concerning. Because, of course, Elon and all of these technology companies, they're not treaty signatories with NATO. They don't have any obligation to do anything other than Netflix and chill. And yet they're absolutely indispensable for national security in these countries, as increasingly national security becomes a matter of not just what happens with bombs and rockets, but also what happens in the digital world, what happens in cyberspace, what happens in communications, in the collection of intelligence. As Elon and others become principal actors in a military industrial technological complex, accountability for those decisions is very deeply concerning if it's only in the hands of those individuals. Now, I think it's a little easier with SpaceX, because SpaceX is, after all, a company that is overwhelmingly funded by the US government, by the Pentagon and by NASA. And so ultimately, either legally through regulation or informally through pressure on the basis of providing those contracts, there is certainly a level of influence that the US government would be able to have over SpaceX to ensure that Starlink is made available fully to the Ukrainians, as the US and NATO allies see fit.
Just as the American government would take vigorous exception if SpaceX and Starlink were suddenly making their technologies available to American adversaries. Having said that, keep in mind that there is no other viable technology presently available. So, if it's not Starlink, it's nothing for the Ukrainians. And what about a country like Taiwan? It is increasingly concerned that the status quo on Taiwan is eroding, from the United States, as Biden says that he would defend Taiwan and as the Americans put export controls on TSMC, the semiconductor company, and from the Chinese side, as the Chinese keep sending drones and aircraft to invade Taiwanese airspace. Well, if there were cyber attacks from mainland China into Taiwan, would Starlink be made available in Taiwan the way it has been in Ukraine, even though imperfectly in Ukraine? And the answer to that, I suspect, would be absolutely not, because it would prevent Elon Musk from doing effective business in mainland China, including with Tesla. Would the Chinese use that leverage against Elon in a way that the American government has not against SpaceX?
Absolutely they would. And so what does that mean? Does it mean Taiwan just doesn't get the ability to defend itself? Or does the US government have to somehow, through force majeure, nationalize the technology and take it away from SpaceX, or force SpaceX to provide Starlink to Taiwan? Or does the US government have to build its own alternative, where it has direct ownership of such a company and technology? Look, the fact is this is a very, very messy piece of geopolitical power, where increasingly technology companies are acting as sovereigns. And until and unless those questions are answered, we are increasingly living in a technopolar world.
That's it for me. And I'll talk to you all real soon.
What We’re Ignoring: Revenge of the nerds
There’s growing evidence that the much-ballyhooed mixed martial arts battle between X-Man Elon Musk and Meta CEO Mark Zuckerberg may actually take place.
Musk first posted that he would be up for a cage match against Zuckerberg in June. Since then, the two moguls have traded multiple barbs on the topic. Now Zuckerberg, who trains in jiu jitsu, has shared a screenshot of a conversation with his wife Priscilla Chan in which he crows about installing a training cage in their backyard. (Her response: “I have been working on that grass for two years.”)
Not to be outdone, Musk posted to X that he is preparing for the fight by “lifting weights throughout the day,” and that the "Zuck v Musk fight will be live-streamed on X. All proceeds will go to charity for veterans.”
Zuckerberg says he is "not holding his breath" because he offered a date of Aug. 26 but didn't hear back. No word yet on whether Threads will attempt a rival broadcast. Stay tuned. Or don’t.
Politics, trust & the media in the age of misinformation
Ahead of the 2024 US presidential election, GZERO World takes a hard look at the media’s impact on politics and democracy itself.
In 1964, philosopher Marshall McLuhan coined the phrase “the medium is the message.” He meant that the way content is delivered can be more powerful than the content itself.
A lot’s changed since 1964, but the problem has only gotten worse. The ‘80s and ‘90s saw the rise of a 24/7 cable news cycle and hyper-partisan radio talk shows. The 21st century has thus far given us podcasts, political influencers, and the endless doom scroll of social media. And now, we’re entering the age of generative AI.
All of this has created the perfect ecosystem for information – and disinformation – overload. But there might be a light at the end of the tunnel. In a world where it’s getting harder and harder to tell fact from fiction, news organizations, credible journalists, and fact-checkers will be more important than ever.
How has media changed our idea of truth and reality? And how can we better prepare ourselves for the onslaught of misinformation and disinformation that is almost certain to spread online as the 2024 US presidential election gets closer? Can trust in America’s so-called “Fourth Estate” be restored?
Ian Bremmer sits down with journalist and former CNN host Brian Stelter and Nicole Hemmer, a Vanderbilt University professor specializing in political history and partisan media.
Watch GZERO World with Ian Bremmer at gzeromedia.com/gzeroworld or on US public television. Check local listings.
- Coronavirus is "the Super Bowl of disinformation" ›
- Artificial intelligence and the importance of civics ›
- Should the US government be involved with content moderation? ›
- Be very scared of AI + social media in politics ›
- Who runs the world? ›
- Can we trust AI to tell the truth? - GZERO Media ›
- Will consumers ever trust AI? Regulations and guardrails are key - GZERO Media ›