Hard Numbers: SoftBank’s hardy investment, Grok gets cash infusion, Humane’s rescue plan, Kenya’s tech upgrade, News Corp and OpenAI strike a deal
6 billion: Elon Musk’s AI startup, xAI, has raised $6 billion from venture capital investors such as Andreessen Horowitz and Sequoia Capital, plus Saudi Arabia’s Prince Alwaleed bin Talal and Kingdom Holding Company. The new funding round boosts the value of xAI, which makes the AI chatbot Grok, to $24 billion. Musk is a cofounder of OpenAI but severed ties with the firm in 2018 and has since sued the ChatGPT maker, alleging it abandoned its founding principles.
750 million: Humane, the company that recently released an AI-powered pin to scathing reviews, is reportedly looking for a buyer to swoop in. While customers have to cough up $699 for the signature pin, a corporate buyer would need to pay between $750 million and $1 billion — if the company’s current management fetches any interest, that is.
1 billion: Microsoft and the UAE-based tech giant G42 are pouring $1 billion into a geothermal-powered data center in Kenya. This East African investment is the first big announcement since Microsoft invested $1.5 billion in G42 in April, a deal brokered by the Biden administration. Microsoft and G42 also pledged to work on local language and skills training initiatives with the Kenyan government and companies in the country.
250 million: OpenAI struck a licensing deal with News Corp., the parent company of The Wall Street Journal, reportedly worth $250 million over five years. News Corp’s stock rose on the announcement, and the deal represents a burgeoning revenue stream for news companies. But the deal isn’t without critics: The Information’s founder Jessica Lessin wrote that publishers like News Corp need to know their worth with AI companies, hungry for content, and not rush into any deal for “relative pennies.”
Beijing gives Blinken cold shoulder, extends warm welcome to Musk
Last week, US Secretary of State Antony Blinken made a high-profile visit to China, marked by terse talk and some tough symbols. Two days ahead of Blinken’s arrival, China launched a submarine-based ballistic missile test, and as he departed, the Chinese air force flew jets over the Taiwan Strait. Beijing was not amused by the US Congress passing a supplemental spending bill last week, including billions in military assistance to Taipei.
In contrast, Tesla founder Elon Musk's surprise visit starting Sunday was all smiles. Musk posted to X about the honor of meeting Chinese Premier Li Qiang, who heralded Tesla as a pillar of US-China economic cooperation. Tesla has sold more than 1.7 million cars in China since it entered the market a decade ago, and its largest factory is in Shanghai.
Musk wants to roll out Tesla’s Full Self-Driving technology in China before Chinese automakers deploy similar capabilities. Musk is also seeking approval to transfer data collected in China to the US to train algorithms for FSD tech. Market watchers called the unexpected visit "a major moment for Tesla" as the company struggles with layoffs and slumping sales.
Hard Numbers: X’s neo-Nazi problem, China’s export extravaganza, America’s economic bounce, Oreo’s antitrust woes, Russia’s bumpy flights
5.3: China’s economy, the world’s second-largest, grew more than many experts expected, expanding by 5.3% compared to the same period last year. That beat analysts' predictions by 0.7 points. The boom was driven largely by huge investments in manufacturing for export – in particular solar panels, cars, and steel. Concerns remain about the persistent weakness of China’s property sector, but at this pace, China will comfortably hit its “around 5%” growth target.
2.7: Meanwhile, the world’s largest economy, the US, is projected to grow 2.7% this year, according to new IMF figures. That’s not quite on China’s level, but it’s still double the rate of any fellow member of the G7, a club of the world’s largest democratic economies. Coupled with China’s strong showing, the US economic boom has helped to stave off a global recession.
340 million: The company that owns Oreos is about to get dunked, it seems. An EU antitrust probe has found that Mondelez, the US-based company that also makes Toblerone bars and Cadbury chocolates, deliberately restricted the flow of its products between European countries in a bid to keep prices higher. The company has reportedly set aside €340 million ($360 million) for the coming fine.
20*: Flying in Russia is getting more dangerous but less fatal. Western sanctions on plane parts and servicing have caused a sharp rise in aircraft malfunctions, but Russia’s 20 air travel deaths in 2023 were still the lowest in a decade. The asterisk is for the late Yevgeny Prigozhin, of Wagner insurrection fame, who was killed along with nine others when his plane went down last August. Authorities say the possibility of foul play means the incident isn’t included as a conventional air travel death.
Atwood and Musk agree on Online Harms Act
Space capitalist Elon Musk and Canadian literary legend Margaret Atwood agree on one thing: Canadian legislation meant to bring order to cyberspace threatens freedom of speech, a warning that suggests Justin Trudeau may have to go back to the drawing board.
The Liberals unveiled the Online Harms Act last month, proposing a digital safety commission to target hate speech, child porn, and other dangerous content. Advocates like Facebook whistleblower Frances Haugen have called for governments to pass similar laws, and both the EU and the UK are doing so.
But the Trudeau government got a black eye from its last attempt to regulate cyberspace when Meta yanked Canadian news from its platforms rather than pay a so-called “link tax.”
So far, big American tech companies have not reacted as forcefully to this bill, but Atwood, Musk, and many experts have objected to the bill’s draconian hate speech provisions, which include potential life prison sentences and the use of peace bonds for anticipated hate speech.
Given the precariousness of Trudeau’s government, the humiliating defeat of its last big online law, and the criticisms coming from even those predisposed to support the law, the government will likely have to accept amendments in the legislative process if it wants to get this passed.
Musk takes OpenAI to court
Tesla CEO Elon Musk sued OpenAI and its CEO Sam Altman late last week, saying that they breached the terms of a contract by prioritizing their profits over the public good. In 2015, Musk helped found and fund OpenAI, the artificial intelligence research lab-turned-industry leader. He resigned as co-chair of the company’s nonprofit board of directors in 2018, citing conflicts of interest with his own company, Tesla, which was investing heavily in AI.
Now, Musk alleges that OpenAI violated the terms under which he gave money to OpenAI, but no one seems to have written down those terms.
The Verge points out that the complaint hinges on the violation of a “Founding Agreement,” an alleged oral contract that Musk feels was formed in the course of business discussions. If a court finds that a contract was formed – and courts aren’t usually friendly to oral contracts – Musk is requesting that the court compel OpenAI to revert to its original nonprofit mission, including making research data publicly available, instead of the profit-motivated one that’s turned it into an $80 billion juggernaut.
There’s one other thing that Musk-watchers should keep in mind: Musk currently runs an AI startup of his own, xAI, which has a chatbot called Grok. This means his business directly competes with OpenAI. Is it any wonder he’s resorting to litigation that could take OpenAI down a peg?
US CEOs too influential on China policy, says Rahm Emanuel
US CEOs are too cozy with Beijing, says US Ambassador to Japan Rahm Emanuel.
At the APEC summit last November in San Francisco, heads of state and diplomats from nations in the Asia-Pacific met to address a wide array of strategic interests and challenges. But no other meeting was as closely watched as that between US President Joe Biden and Chinese President Xi Jinping. As successful as that meeting may have been on a PR level (at least according to the delegations of each leader), one man present took special note of what happened afterward. US Ambassador to Japan, Rahm Emanuel, told Ian Bremmer about that summit during an exclusive interview in the latest episode of GZERO World, filmed at the Ambassador's residence in Tokyo, Japan.
"President Xi goes to have a meeting with American CEOs who give him a standing ovation, though he hasn't yet said anything," recounted Ambassador Emanuel. "The President of the United States goes to an event, and all the heads of state are there. That tells you about alliances, that tells you about the interests of China."
Bremmer then noted that it also tells you something about the interests of American CEOs, to which Emanuel responded: "I think the American CEOs are way too influential in American foreign policy in this region, way too influential."
Catch GZERO World with Ian Bremmer every week at gzeromedia.com/gzeroworld or on US public television. Check local listings.
UK AI Safety Summit brings government leaders and AI experts together
Marietje Schaake, International Policy Fellow, Stanford Human-Centered Artificial Intelligence, and former European Parliamentarian, co-hosts GZERO AI, our new weekly video series intended to help you keep up and make sense of the latest news on the AI revolution. In this episode, she takes you behind the scenes of the first-ever UK AI Safety Summit.
Last week, the AI Summit took place, and I'm sure you've read all the headlines, but I thought it would be fun to also take you behind the scenes a little bit. So I arrived early in the morning of the day the summit started. Everybody was made to go through security between 7 and 8 AM, so pretty early, and the program only started at 10:30. What that led to was a long reception over coffee, where old friends and colleagues met, new people were introduced, and all participants from business, government, civil society, and academia really started to mingle.
And maybe that was part of the success of the summit, which then started with a formal opening with remarkably global representation. There had been some discussion about whether it was appropriate to invite the Chinese government, but a Chinese minister did attend, as did ministers from India and Nigeria, underlining that the challenges governments have to deal with around artificial intelligence are global ones. And I think that was an important symbol that the UK government sought to underline. Now, there was a little bit of surprise in the opening when Secretary Raimondo of the United States announced the US would also initiate an AI Safety Institute, right after the UK government had announced its own. And so it did make me wonder, why not just work together globally? But I guess they each want their own institute.
And those were perhaps the more concrete, tangible outcomes of the conference. Other than that, it was more a statement of intent to look further into AI safety risks. Ahead of the conference, there had been a lot of discussion about whether the UK government was taking too narrow a focus on AI safety, and whether it had been leaning too much towards the effective altruism, existential risk camp. But in practice, the program gave a lot of room for discussions, and I thought that was really important, about the known and current-day risks that AI presents: for example, to civil rights, when we think about discrimination, or to human rights, when we think about the threats to democracy, both from disinformation that generative AI can put on steroids and from the real question of how to govern it at all when companies have so much power and there is such a lack of transparency. So civil society leaders who were worried that they were not sufficiently heard in the program will hopefully feel a little more reassured, because I spoke to a wide variety of civil society representatives who were a key part of the participants, alongside government, business, and academic leaders.
So, when I talked to some of the first generation of thinkers and researchers in the field of AI, for them it was a significant moment, because never had they thought that they would be part of a summit next to government leaders. I mean, for a long time they were mostly in their labs researching AI, and suddenly here they were being listened to at the podium alongside government representatives. So in a way, they were a little bit starstruck, and I thought that was funny, because it was probably the same the other way around, certainly for the Prime Minister, who really looked like a proud student when he was interviewing Elon Musk. And that was another surprising development: shortly after the press conference, his moment to shine in the media with the outcomes of the summit, Prime Minister Sunak decided to spend the airtime, and certainly the social media coverage, interviewing Elon Musk, who then predicted that AI would eradicate lots and lots of jobs. And remarkably, that was a topic that barely got mentioned at the summit, so maybe it was a good thing that it got part of the discussion after all, albeit in an unusual way.
Elon Musk's Starlink cutoff controversy
I think it's a fascinating question. And it gets to a point of what I call a technopolar world, not unipolar, not bipolar, not multipolar, technopolar. In other words, for all of our lives, we've talked about a world where nation states, where governments are the principal actors with sovereignty over outcomes that matter critically for national security. Now, here you have the Russians invading Ukraine, one of the biggest challenges to the geopolitical order since the Soviet Union collapsed in 1991. And yet, a core decision about whether or not Ukraine will be able to defend itself is being made not by the United States or NATO providing the military support, but by a technology company. Now, the Ukrainian government is being quite critical of some of the decisions that Elon Musk has made in restricting the use of Starlink for the Ukrainians.
I don't think that's fair criticism by itself. I think we need to recognize that Starlink's availability to the Ukrainians was absolutely essential in helping the government and the military leaders actually communicate with their soldiers on the front lines. And if it weren't for Starlink, and if it weren't for the role of many other technology companies, largely in the United States, it's not at all clear to me that Zelensky would still be in power today. Certainly the Ukrainians would have lost a lot more territory, and they'd be in a much worse position than they are. So I think that the Ukrainians still owe Elon a significant debt. But it also raises a much bigger question, which is: should an individual CEO, an individual centibillionaire, be making these decisions about outcomes of life and death for 44 million Ukrainians?
And there the answer is much more concerning. Because, of course, Elon and all of these technology companies are not treaty signatories with NATO. They don't have any obligation to do anything other than Netflix and chill. And yet they're absolutely indispensable for national security in these countries, as national security increasingly becomes a matter of not just what happens with bombs and rockets, but also what happens in the digital world, in cyberspace, in communications, in the collection of intelligence. As Elon and others become principal actors in a military-industrial-technological complex, accountability for those decisions is very deeply concerning if it rests only in the hands of those individuals. Now, I think it's a little easier with SpaceX, because SpaceX is, after all, a company that is overwhelmingly funded by the US government, by the Pentagon and by NASA. And so ultimately, either legally through regulation or informally through pressure on the basis of providing those contracts, there is certainly a level of influence that the US government would be able to have over SpaceX to ensure that Starlink is made available fully to the Ukrainians, as the US and its NATO allies see fit.
Just as the American government would take vigorous exception if SpaceX and Starlink were suddenly making their technologies available to American adversaries. Having said that, keep in mind that there is no other viable technology that is presently available. So, if it's not Starlink, it's nothing for the Ukrainians. And what about a country like Taiwan? There is growing concern that the status quo on Taiwan is eroding, from the United States, as Biden says that he would defend Taiwan and as the Americans put export controls on TSMC, the semiconductor company, and from the Chinese side, as the Chinese keep sending drones and aircraft to invade Taiwanese airspace. Well, if there were cyberattacks from mainland China into Taiwan, would Starlink be made available in Taiwan the way it has been in Ukraine, even though imperfectly in Ukraine? And the answer to that, I suspect, would be absolutely not, because it would prevent Elon Musk from doing effective business in mainland China, including with Tesla. Would the Chinese use that leverage against Elon in a way that the American government has not against SpaceX?
Absolutely they would. And so what does that mean? Does it mean that Taiwan just doesn't get the ability to defend itself? Or does the US government have to somehow, through force majeure, nationalize the technology and take it away from SpaceX, or force SpaceX to provide Starlink to Taiwan? Or does the US government have to build its own alternative, where it has direct ownership of such a company and technology? Look, the fact is this is a very, very messy piece of geopolitical power, where increasingly technology companies are acting as sovereigns. And until and unless those questions are answered, we are increasingly living in a technopolar world.
That's it for me. And I'll talk to you all real soon.