Biden, Microsoft, and the United Arab Emirates
Microsoft has quickly become the most important investor in artificial intelligence technology, holding a $13 billion stake in ChatGPT-maker OpenAI. It’s a peculiar deal with a revenue-sharing agreement that’s raised eyebrows from global regulators. But its latest billion-dollar investment is perhaps even more of an eyebrow-raiser.
The US tech giant announced last week that it would invest $1.5 billion in G42, a leading artificial intelligence holding company based in Abu Dhabi. The deal was “largely orchestrated” by the Biden administration, according to the New York Times, in an effort to beat back China and gain influence in the Persian Gulf.
“There’s no question the investment was made to try and box out Chinese investment” in artificial intelligence in the Middle East, said Alexis Serfaty, a director in Eurasia Group’s geo-technology practice.
Under the terms of the new deal, Microsoft will let G42 sell its generative AI services and, in exchange, G42 will use Microsoft’s Azure cloud services. G42 also agreed to stricter assurances with the US government to further cut ties with China and remove Chinese products and technology from its operations.
It’s not every day that the White House plays corporate dealmaker, but the administration hasn’t been shy about making AI — and the chips needed to power it — an economic and national security priority. Serfaty said the closest parallel he could think of was the proposed Trump administration deal to hand a stake in TikTok to the US software and cloud giant Oracle. (TikTok’s Chinese parent company ByteDance never sold a stake in its social media app to Oracle, but it did strike a deal to host its US user data on Oracle servers.) Plus, the US has recently given massive grants and favorable loans to global chip manufacturers — like TSMC and Samsung — for moving production to the US.
The Biden administration has imposed strict export controls on US-made chips going to China, especially powerful ones used to run artificial intelligence models. The goal: cut off China and hamper its ability to build powerful AI. Tech investments in the Persian Gulf have been something of a casualty of this Cold War over AI. G42 announced in December 2023 that it would cut ties with China in order to keep working with US industry.
“For better or worse, as a commercial company, we are in a position where we have to make a choice,” G42 CEO Peng Xiao told the Financial Times. “We cannot work with both sides. We can’t.”
Serfaty said that the deal signals that the US government is going to increasingly treat artificial intelligence like defense technology, and play a more hands-on role in its commercial affairs and investment.
“When it comes to emerging technology, you cannot be both in China’s camp and our camp,” Commerce Secretary Gina Raimondo told the Times.
Hard Numbers: Microsoft’s big Gulf investment, Amazon’s ambitions, Mammogram-plus, Adobe pays up, Educating Don Beyer
1.5 billion: Microsoft has announced a deal to invest $1.5 billion in G42, an artificial intelligence firm based in the United Arab Emirates that recently cut ties with Chinese suppliers that had raised US security concerns. Relations between Washington and Abu Dhabi have been strained over the UAE’s ties to Chinese tech companies. But this deal – which grants Microsoft a minority stake in the company – could signal a new era of relations with the US.
33: Amazon is talking about artificial intelligence – like, a lot. In his recently published annual letter to shareholders, Amazon CEO Andy Jassy mentioned AI 33 times. The company invested $4 billion in Anthropic, which makes the Claude chatbot, and will host Anthropic on Amazon Web Services. Jassy said the company wants to build AI models more so than applications (think GPT-4 instead of ChatGPT) and sell directly to enterprise clients.
40: Clinics are starting to offer an AI-assisted add-on to typical mammograms. Interested patients typically incur an out-of-pocket charge between $40 and $100 to have an AI model scan their breast screening for additional insights — even, possibly, early breast cancer detection.
3: Adobe is planning to compete with OpenAI’s Sora video model. To do so, it’s offering photographers and videographers $3 per minute to upload videos of people doing everyday activities like walking around or sitting down, or simple shots of hands, feet, or eyes to train its new generative AI model. It’s an expensive but cautious approach intended to build up a comprehensive database while staying on the right side of copyright law and avoiding potential imbroglios like the one OpenAI faces for using YouTube videos to train its models.
73: Congressman Don Beyer, a Democrat from Virginia, decided he wanted to return to school to learn more about AI. So, that’s what he did. The 73-year-old car dealership mogul-turned-politician recently enrolled in a master’s degree program in machine learning at George Mason University. He’s even learning to code, which he says is helping him better think about all kinds of problems in Washington.
Congress keeps it old school
Last June, the House of Representatives banned staff use of ChatGPT — the free version at least. Now, it’s telling staffers that use of Microsoft’s Copilot, a tool built on the same large language model as ChatGPT, is also prohibited.
“The Microsoft Copilot application has been deemed by the Office of Cybersecurity to be a risk to users due to the threat of leaking House data to non-House approved cloud services,” House Chief Administrative Officer Catherine Szpindor wrote in a guidance distributed to congressional offices. In response, Microsoft said it’s working on a government-specific version of the product with greater data security, set to release later this year.
The Departments of Energy, Veterans Affairs, and Agriculture have also taken steps to ban generative AI tools in recent months. So has the Social Security Administration. Governments need to be able to make sure that allowing such systems into their workplaces, interacting with sensitive or even classified data, won’t lead to that information leaking to a broader commercial or consumer user base.
Hard Numbers: Amazon’s AI ambitions, what to use ChatGPT for, energy crisis, Enter Stargate
2.75 billion: Amazon invested an additional $2.75 billion in the AI startup Anthropic, which makes the popular chatbot Claude, bringing its total investment to around $4 billion; Google also holds a $2 billion stake in the company. Big Tech giants like Amazon, Google, and Microsoft, with its $13 billion deal with OpenAI, have chosen investments and strategic partnerships instead of buying startups outright. Amazon also announced it’ll spend $150 billion on data centers over the next 15 years to support its AI ambitions.
2: 20% of US adults say they’ve used ChatGPT for work, up from 12% just six months ago, according to a new survey by Pew Research Center. But only 2% of Americans surveyed said they’ve used the chatbot to gather information about the country’s upcoming elections—a good sign for people worrying about the immediate impact of AI tools that have a tendency to make stuff up.
4: The electricity used by data centers, cryptocurrency, and artificial intelligence represented nearly 2% of global energy use in 2022, according to the International Energy Agency. That number could double to 4% by 2026 if current trends continue.
100 billion: Microsoft and OpenAI are reportedly teaming up to build data centers along with a supercomputer, nicknamed “Stargate,” to power their artificial intelligence systems. The project, which still has yet to be greenlit, could cost a staggering $100 billion.
An inflection point for Microsoft
Microsoft made headlines last week, hiring Mustafa Suleyman to lead its internal AI group. Suleyman is a big name in the world of artificial intelligence, namely because he co-founded the influential British research lab DeepMind that was acquired by Google in 2014 for over $500 million. But in hiring Suleyman, Microsoft also kinda, maybe, sorta acquired his current AI startup, called Inflection AI.
Microsoft didn’t just hire Suleyman and co-founder Karén Simonyan, but it hired “most of the staff” of the $4 billion startup. It then paid the remaining husk of Inflection $650 million to license its technology, which Inflection is using to pay off its remaining investors. It’s as close to an acquisition as you can get without actually buying a company. And there's a good reason for this: The current antitrust environment is tough for tech. The government has a watchful eye on mergers, so Big Tech has often opted against buying startups outright: We’ve seen Microsoft invest $13 billion in OpenAI, while Amazon and Google have each poured billions into Anthropic.
But the government has broad authority over mergers, even if they’re partial or unconventional in nature, experts told GZERO recently. Put simply, we’d be surprised if this acqui-hire of sorts is enough to deter the government’s antitrust enforcers, who are already sniffing around Microsoft’s investment in and power over OpenAI.
Microsoft's big-name hire
What a splash! Microsoft announced earlier today that it has hired one of the most prominent figures in the AI revolution: Mustafa Suleyman. Suleyman co-founded the British AI research lab DeepMind, which Google acquired for £400 million in 2014 (~$656 million).
Suleyman will run a new division called Microsoft AI, overseeing its Copilot and Bing products, among others. Microsoft has become a major player in generative AI through its $13 billion investment in ChatGPT-maker OpenAI, whose deep-learning language models now fuel Microsoft's own AI offerings. Suleyman will focus on advancing consumer products — in other words, getting you to use this cutting-edge tech.
OpenAI’s Altman incident under investigation
Two investigations may soon shed light on one of the biggest mysteries in Silicon Valley: Why was Sam Altman fired from OpenAI?
To recap, the OpenAI board fired Altman in November, saying he was not “consistently candid in his communications,” but it failed to provide specifics (the big mystery). OpenAI’s staff and lead investor, Microsoft, immediately protested the ouster and successfully campaigned for Altman’s reinstatement – and for fresh faces on the nonprofit board.
The US Securities and Exchange Commission is now investigating whether OpenAI misled its investors in firing Altman. Meanwhile, the law firm WilmerHale is conducting an internal investigation of the Altman firing and will soon present its findings to the current board of directors, which commissioned the review.
Altman’s alleged deceit may have something to do with his plans to raise trillions of dollars for a chip venture, something that’s come to light in the months since this debacle. We have our ear to the ground for where the investigations are headed, and what it could mean for the giant of genAI.
2024 is the ‘Voldemort’ of election years, says Ian Bremmer
Critical elections are occurring across the globe this year, with a record number of people — roughly half the global population — set to head to the polls in dozens of countries.
During a Global Stage panel at the Munich Security Conference, Eurasia Group Founder and President Ian Bremmer described 2024 as the “Voldemort of election years.”
“Voldemort is the name that should not be spoken in the ‘Harry Potter’ series … This is the year that people have been very concerned about but have kind of hoped that they could push off,” says Bremmer. This is not just because there are so many elections occurring amid historic levels of distrust in key institutions, but also because the United States — the most powerful country in the world — is also “one of the most politically dysfunctional,” he explains.
Bremmer says the 2024 US presidential election is “maximally distrust-laden,” adding that this is “driving a level of concern that borders on panic from American allies all over the world.”
The conversation was part of the Global Stage series, produced by GZERO in partnership with Microsoft. These discussions convene heads of state, business leaders, and technology experts from around the world for critical debate about the geopolitical and technological trends shaping our world.