Does AI’s power problem have a nuclear solution?
Sam Altman, the co-founder and CEO of OpenAI, has broad ambitions to solve all of the problems of AI, from algorithms to high-tech chips. But there’s one more problem on his plate: energy. Altman is backing a series of companies that hope to find a way to power the revolutionary tech, literally.
One of the startups Altman invested in is called Oklo, which is building a nuclear power plant in Idaho that could eventually power the energy-guzzling data centers that AI depends on, though there is no clear public timeline for the project. Google and Microsoft have also partnered with nuclear power firms for their energy needs.
Nuclear energy comes with risks, of course, and Oklo has had trouble with regulators, which have rejected its applications in the past over a lack of safety and security information. But going nuclear — if companies like Oklo can get it right — is also a cleaner alternative to more carbon-intensive energy sources.

Hard Numbers: Understanding the universe, Opening up OpenAI, Bioweapon warning, Independent review, AI media billions
100 million: AI is helping researchers better map outer space. One recent simulation led by a University College London researcher mapped 100 million galaxies across just a quarter of the southern hemisphere's sky. This is part of a wider effort to understand dark energy, the mysterious force causing the expansion of the universe.
30,000: The law firm WilmerHale, which completed its investigation of Sam Altman’s brief December ouster from OpenAI, examined 30,000 documents as part of its review. The contents of the report haven’t been made public, but new board chairman Bret Taylor said the review found the prior board acted in good faith but didn’t anticipate the reaction to removing Altman, who is now rejoining the board. The SEC, meanwhile, is still investigating whether OpenAI deceived investors, but it’s unclear whether WilmerHale will give its findings to the agency.
90: More than 90 scientists have pledged not to use AI to develop bioweapons as part of an agreement forged somewhat in response to congressional remarks given by Anthropic CEO Dario Amodei last year. Amodei said that while the current generation of AI technology couldn’t handle such a task, that capability may be only two or three years away.
100: More than 100 AI researchers have signed an open letter asking the leading companies to allow independent investigators access to their models to ensure that risk assessment is thorough. “Generative AI companies should avoid repeating the mistakes of social media platforms, many of which have effectively banned types of research aimed at holding them accountable,” the letter said.
8 billion: The media company Thomson Reuters says it has an $8 billion “war chest” to spend on AI-related acquisitions. In addition to publishing the Reuters newswire, the company sells access to services like Westlaw, a popular legal research platform. It’s also committed to spending at least $100 million developing in-house AI technology to integrate into its news and data offerings.
Sam Altman’s wish on a $7 trillion star
Sam Altman, CEO of OpenAI, needs more chips. He needs a lot more chips. The only thing stopping his $100 billion startup — if you can still call it a startup — may be the current supply of powerful chips.
The semiconductor fabrication process is notoriously slow and expensive, and the global supply chain runs through a few big, highly specialized firms. Only a small number of companies actually design chips for generative AI — AMD, Intel, and Nvidia. And they’re pricey: Nvidia, which is set to take 85% of the market next year by one estimate, sells its H100 chips for about $40,000 a pop.
Naturally, Altman wants to make his own chips, but to make that dream a reality, he’s asking for an obscene amount of money.
How much does Altman want to raise? According to the Wall Street Journal, Altman is deep in talks with investors with the goal of raising $5 trillion to $7 trillion for a new chip venture.
“The dollar amount he’s reportedly trying to raise — $7 trillion — eclipses not just the semiconductor investments made by governments, including the United States’ $39 billion investment in chip manufacturing, but also the size of the entire semiconductor industry,” says Hanna Dohmen, a research analyst at Georgetown University's Center for Security and Emerging Technology. “It cannot be overstated how massive this sum of money is.”
Eurasia Group’s Director of Geotechnology Alexis Serfaty calls the sum “preposterously high and also seemingly arbitrary,” and says while it helps that OpenAI would be a built-in customer for this new chipmaker, the semiconductor industry is a difficult one with a propensity for demand gluts and supply chokepoints at every turn. Also, it would require strong leadership. “There are only so many people in the world with the expertise and experience to run an advanced fab, let alone the 300 [facilities] that $7 trillion would buy,” he adds.
Money can buy a lot — but it might not be able to solve the problems that every chipmaker already faces.
Who’s going to give him all that money? Altman has reportedly met with Masayoshi Son, CEO of the influential Japanese investment company SoftBank, and officials from Taiwan Semiconductor Manufacturing Company, one of the world’s largest chip fabrication companies, about investing in his new venture. Altman reportedly wants to “raise the money from Middle East investors and have TSMC build and run” new chip fabrication plants.
But the real eyebrow-raising potential investor isn’t in East Asia; it’s in the Middle East. In recent weeks, Altman has reportedly met with Sheikh Tahnoun bin Zayed al Nahyan, the United Arab Emirates’ security chief, to discuss the venture. OpenAI already struck a deal in October with the Emirati technology company G42 to bring AI solutions to the Middle Eastern market, laying the foundation for additional business support from the wealthy nation.
This is going to cause geopolitical headaches, right? Almost definitely. Washington is extremely touchy about foreign investment in US companies and even more hesitant when it comes to scarce critical infrastructure such as semiconductors.
“While the US government is eager to bring chip manufacturing to the United States, it would likely be reluctant to do so with the involvement of the UAE government given existing concerns about Emirati companies’ relations with Chinese counterparts,” says Dohmen, who notes that, under US law, companies need licenses to even export certain semiconductors to the UAE.
America’s number one concern is China. Not only has the Biden administration invested heavily in the US chip industry, but it has launched a no-holds-barred campaign to prevent China from getting its hands on chips or even cloud-based AI. Over the past few years, the Biden administration has enacted stringent export controls that bar the sale to China of semiconductor technology made anywhere in the world with US parts, fearing Beijing will use AI to supercharge its military. Dohmen adds that lawmakers are worried that G42 is already “dealing with blacklisted Chinese firms.”
Simply put, Serfaty says, “Altman’s partnerships with foreign governments could conflict with this US national security strategy.”
Could the US take action against this new venture? Yes. The US government has taken the extraordinary step of blocking foreign investment in chip companies before. In 2018, the Trump administration blocked the sale of US-based Qualcomm to the then-Singapore-based Broadcom, citing national security concerns. (Broadcom has since moved its headquarters to the US.) That administration also blocked the sale of Lattice Semiconductor to a US private equity firm backed by Chinese capital.
Altman could be inviting antitrust scrutiny, as well. If he controls both the country’s most important generative AI company and the chip supply chain it relies upon, he’ll raise eyebrows with any antitrust regime — let alone the current one overseen by the FTC’s Lina Khan and the DOJ’s Jonathan Kanter, both eager to take on Big Tech. The government is already starting to look into Microsoft’s $13 billion investment in OpenAI.
In short, all eyes are on OpenAI. The ChatGPT maker and its once-embattled, now-emboldened chief have their sights set on global AI domination. Whether it’s $7 trillion or far less, they’re due to make a real attempt to solve the chip problem that appears to stand in the way of true unbridled success.
Sam Altman’s chip ambitions
The chipmaking process is notoriously difficult and expensive. AI developers like OpenAI depend on powerful chips from firms like NVIDIA and AMD. Fabrication often runs through Taiwan Semiconductor or South Korea-based Samsung, the two biggest chip manufacturers by market share.
With this new venture, known by the code name Tigris, Altman wants to add another major player in the chipmaking process, which has been prone to bottlenecking in recent years. The global supply chain crisis coincided with a global chip shortage, leading to low supplies of appliances, computers, cars, and video game systems. Altman is in talks to raise funds from global players including Japan’s SoftBank and the UAE’s G42, promising to make its network of fabs global in scope.
Generative AI developers need the most powerful chips on the market, and as many of them as they can get.
Different views: Altman's optimism vs. IMF's caution
Much of the buzz in Davos this year has been around artificial intelligence and the attendance of precocious talents like OpenAI’s Sam Altman, who has helped pioneer the biggest technological breakthrough since the personal computer. The World Economic Forum’s Chief Economists Outlook suggested near unanimity in the belief that productivity gains from AI will become economically significant in the next five years in high-income economies. And Altman himself has said he is motivated to “create tech-driven prosperity.”
But there are less rosy predictions around AI. The International Monetary Fund has warned that 40% of jobs worldwide could be adversely impacted and overall inequality could worsen. In the current feverish climate, such warnings have been dismissed.
“Contemporaneous accounts of tech revolutions are always wrong,” Altman said on a panel this week with Microsoft’s Satya Nadella.
Many AI proponents talked about three-phase adoption: first, using the technology to assist workers; second, watching it run on autopilot to assess its accuracy; and third, letting it go and trusting it to work. Altman said the three-phase approach should make AI less scary. “This is much more of a tool than I expected. It’ll get better, but it’s not yet replacing jobs. It is this incredible tool for productivity … it lets people do their jobs better.”
In an earlier panel, he was asked about his ouster from OpenAI and subsequent reinstatement and said he was “super-confused” at being fired. But he balked at dwelling on the “soap opera,” preferring to discuss the prospects of AI. He likened the progression of ChatGPT to that of the iPhone, in that the iPhone 1 from 2007 was a very rudimentary device compared to the current iPhone 15. “Eventually we will have a good one,” he said.
Altman said much of the fear-mongering has been overdone. “There was a two-week freak-out with GPT-4 – that it will change everything. Now it’s like ‘Why is it so slow?’ … GPT-4 is a big deal in some sense, but it did not change the world. We are making a tool that is impressive, but humans will go on doing human things.”
AI takes center stage at Davos
Artificial intelligence is a hot topic in Davos, Switzerland, this week, as government officials and industry leaders gather for the 54th edition of the World Economic Forum summit. There are more than 30 scheduled events about AI concerning jobs, healthcare, ethics, chips, and access.
Among the most "sought-after" attendees are AI executives, including OpenAI's Sam Altman, Inflection AI's Mustafa Suleyman, Google DeepMind's Lila Ibrahim, Cohere's Aidan Gomez, and Mistral AI's Florian Bressand. Altman, who will speak about the benefits and risks of AI on Thursday, gave a recent podcast interview with Microsoft founder Bill Gates, sharing his thoughts on AI regulation.
Altman said that he’s interested in the idea of a “global regulatory body that looks at those super-powerful systems” – ones far more powerful than current models like GPT-4 – and suggested that the International Atomic Energy Agency, which oversees nuclear technology, might be a good model. “This needs a global agency of some sort because of the potential for global impact.”
The world of AI in 2024
2. Labor tensions: The acceleration of AI will continue to reshape industries, automating jobs and displacing workers. That will lead to widespread tension in various sectors of the economy. Union leaders could make AI the centerpiece of their strikes, and you might hear a lot of talk about “reskilling” workers on the lips of lawmakers heading into the 2024 election. This time it’s sure to work …
3. Copyright clarity: We don’t really know how AI models are trained, but we know they’re at least partially trained on unlicensed copyrighted material. Clarity is coming in Europe: The forthcoming AI Act mandates some transparency about training data. But in the US, where regulation is sparse, the courts are considering a big legal question about whether using copyrighted material as training data violates the law. At issue is whether the output is “transformative enough.” The answer to this legal question has extremely high stakes. Look for authors and artists to keep suing. But also look for companies, under pressure from lawmakers, to start opening up about how their systems are trained, whether copyrighted material is used, and why they think the stuff their models spit out does not constitute copyright infringement. We at GZERO aren’t holding our breath for writers' royalties (but we’d sure take ’em).
4. A big new law in Europe: The European Union’s AI Act is set to become law in the spring of 2024. Of course, lawmakers could falter before hitting the finish line, but an agreement this month made that unlikely. What’s ahead: The EU just held the first of 11 sessions to hammer out the details of the law, which will lead to a “four-column document” by February, reconciling proposals from the three EU legislative bodies. Only after that will country representatives vote to finalize the act. But this landmark law won’t have teeth in 2024 even if everything goes to plan because there’s a 12-month grace period for companies to comply. It’s all hurry up and wait.
5. The hype cycle continues: The major investment that poured into AI in 2023 won’t prove a flash in the pan. With hints of lower interest rates, and still-palpable interest in AI from tech investors hungry for massive returns, expect the billion-dollar valuations, IPOs, mergers and acquisitions, and the big-moneyed investment from top tech firms in startups all to accelerate.
6. Congress does something: The US Congress does more bickering than lawmaking today. But there’s real political will to not get left behind on AI regulation. Lawmakers have been regularly discussing AI, grilling its corporate leaders, and brainstorming ideas for governance. They’ve proposed removing red tape for chipmakers, mandating disclosures for AI-generated political ads, and even considered a “light-touch” law making AI developers self-certify for safety. It’s not necessarily likely that the US will pass something sprawling like the EU’s AI Act, but Congress will likely pass something about AI in the coming year. More than 50 different AI-related bills have been introduced since the 118th Congress began last year, but none have passed through either house of Congress.
7. Antitrust comes for AI: Regulators are circling. The US government sued Google for allegedly abusing its monopolies in search and advertising technology, Amazon for hurting competition on its e-commerce platform, and Meta for buying dominant market power through its Instagram and WhatsApp acquisitions. That’s the hallmark of current FTC Chair Lina Khan and Justice Department antitrust chief Jonathan Kanter, who have been set on enforcing antitrust law against Big Tech. And that fervor is likely to hit AI in 2024. There’s lots of political will to use antitrust law in the UK and Europe, which means scrutiny will soon come to AI. In fact, it’s already here. The FTC and the UK’s Competition and Markets Authority are reportedly probing Microsoft’s investment into OpenAI – it’s not a full-fledged investigation yet, but in 2024 antitrust regulators will be watching AI very closely.
8. Election problems: In 2024, an unprecedented number of countries – some 40-plus – will head to the polls, and many will have their eyes on places like the United States and India for the use of AI in disinformation campaigns ahead of Election Day. There is concern about deepfake technology fueling confusion or contributing to an already-challenging misinformation problem. We’ve already seen deepfake songs impersonating Indian Prime Minister Narendra Modi and videos portraying US President Joe Biden. But what we haven’t seen yet is AI disrupting an election. Will 2024 be the year that AI-generated words, videos, images, and music play a surprising role in elections?
9. New companies you’ve never heard of: By the end of 2024, the top companies in AI may be the same as today: Anthropic, Google, Meta, Microsoft, and OpenAI. But chances are there will be a startup that you've never heard of on the list. Why? Not only is innovation an everyday reality in AI, but investors are excited to fund these projects to reap potential rewards. In the first half of 2023, AI's share of total startup funding in the US more than doubled from 11% to 26% compared to the same period in 2022. That includes household names and challengers you might have already heard of, such as OpenAI ($29 billion) and Anthropic ($5 billion), which had big funding rounds this year. But there are 15 new AI "unicorns" (billion-dollar companies) that could break into the mainstream, including the enterprise AI firm Cohere ($2.2 billion) and the research lab Imbue ($1 billion). Even in a high-interest rate environment, AI startups have fetched big valuations despite still-paltry revenue estimates — at a time when “easy money” has vanished from the broader tech sector. Expecting stasis would be foolish.
10. The real reason Sam Altman was fired: Expect to learn why OpenAI really fired Sam Altman in 2024. It’s perhaps the great mystery in AI, but it can’t remain a secret forever. If anyone knows the answer, please let us know.
2023: The Year of AI
Art: Courtesy of Midjourney
The Trends
1. Chatbot mania: OpenAI brought AI to the masses with ChatGPT. Though it debuted in late 2022, it truly hit its stride this year, especially when it started charging $20 a month in February for access to its latest and greatest version, which was then upgraded with GPT-4 in March. Google also released Bard, Microsoft launched Bing Chat, and the startup Anthropic introduced us to Claude. Each chatbot has its strength: While ChatGPT is strong on creative writing and inductive reasoning, Bing is best used as a replacement for internet search engines, and Bard’s latest upgrade – to its new language model Gemini – strives for commonsense reasoning and logic. Anthropic's Claude rivals ChatGPT for complex tasks like organizing huge chunks of text. For now, ChatGPT is top dog, but the younger pups are nipping at its heels.
2. Regulators ready their lassos: Following years of debate, the European Union finally reached an agreement in December on the scope of its landmark AI Act, the first major regulation for AI models. Next door, the United Kingdom has proceeded with a hands-off approach, more concerned with courting AI firms than reining them in. Rishi Sunak’s Bletchley summit, which produced a voluntary agreement on AI safety, was a political winner for the PM. The US, by contrast, falls somewhere between the UK and Europe in its approach: Months after President Joe Biden secured voluntary commitments from major AI firms to stave off the worst risks from AI, he issued an executive order to start codifying those protections. There’s no forceful regulation on the books yet — but the wheels are finally in motion.
3. The chip race heats up: AI models are nothing without the semiconductors, aka chips, that power them. Making them, however, is difficult and expensive, and there’s always some kind of holdup. The most powerful AI relies on the most powerful graphics chips, like those produced by NVIDIA and AMD. Recently, OpenAI had to halt new signups for the paid version of ChatGPT for a month because it didn’t have enough graphics chips to accommodate new users. The US, fearful of China catching up technologically and using AI for military purposes, has placed strict export controls on the flow of US-made chips, rules that were tightened this fall. For now, the US maintains its major advantage in the chip wars.
The Moments
4. Puffer pontiff: Who says the pope can’t sport a bit of bling? In March, a photo of Pope Francis wearing a long white Balenciaga puffer coat (which sells for $4,350), complete with an oversized crucifix necklace, went mega-viral. It was an outfit more befitting a rap god than the bishop of Rome, and the fake image of the athleisure pope became a seminal example of the ways generative AI can fool people. It was ultimately harmless, but deepfake technology is getting better and better, and experts have long warned that it could cause chaos in fragile political environments, especially around elections. On Dec. 14, the pope, perhaps bothered by the uproar about his fantastical drip, called for an international treaty to ensure the ethical deployment of AI technologies, warning that it could disrupt democracy or enhance already deadly weapons of war.
5. The open letter: In March, a group of AI scientists and researchers called for a six-month pause on all AI development. “Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable,” their letter said. It was signed by technology luminaries and corporate leaders like Apple co-founder Steve Wozniak, MIT professor Max Tegmark, investor Ian Hogarth (who now leads the UK’s AI task force), and Elon Musk (who notably then launched his own AI company). Of course, development wasn’t stymied, but the letter did send a message that there are real and present dangers to the unmitigated development of artificial intelligence. They could have just watched “The Terminator” if you ask me.
6. The Hollywood strike: Actors and writers hit the picket lines for months this year, putting many of our favorite shows on ice. The strike was inspired in no small part by threats from the major studios to use AI to replace union labor. At issue for the Screen Actors Guild was the use of AI to digitally replicate union talent without compensation, while the Writers Guild was more concerned with the use of AI writing tools to shrink writers’ rooms and automate their work. Instead of banning the use of AI, however, both guilds struck deals with the studios that effectively ensure they don’t lose work or money because of the advent of AI. Digital replicas are okay, for example, if the actor is properly compensated.
7. OpenAI’s blowup: What in the world happened at OpenAI? The company’s nonprofit board of directors in late November suddenly and inexplicably fired Sam Altman, the face of the company and CEO of its for-profit arm, for being dishonest with them. But the board never really explained itself. After a weekend of pressure from Altman, OpenAI’s lead investor Microsoft, and 700 of the 770 employees at OpenAI who threatened to quit and work instead at Microsoft, the board reinstated Altman and some of its members resigned. There are still big questions about what happened, but for a brief moment, the most unstoppable company in tech seemed extremely fragile.
The People
8. Sam Altman: Altman is the face of AI. He helms OpenAI, the company that makes the GPT series of large language models, the chatbot ChatGPT, and the image generator DALL-E. But he has also been the AI whisperer for regulators in the US and around the world. Altman played a hands-on role in calling for regulation – as long as it was the kind he likes, such as government licensing for AI developers – and that’s been effective in helping shape global governance of this emerging technology. Of course, Altman was fired and then reinstated (see above), and that was a never-before-seen drama in Silicon Valley. But the ordeal was so surreal and so shocking because Altman isn’t just the head of the most important company in AI; he’s the poster boy for the entire technology.
9. Jensen Huang: OpenAI might be the most important software company in AI, but NVIDIA rules hardware. Under Huang’s guidance, NVIDIA has gone from a little-known company making graphics cards for computer gamers to one of the most critical semiconductor firms in the world. NVIDIA’s graphics chips, or GPUs, are necessary for high-powered computing operations like training and running AI systems. Sure, there’s competition — the chipmaker AMD has grand ambitions to compete directly on AI-ready graphics chips. But it’s leading an industry with very little supply and a ton of demand. That’s one of the reasons why Huang led NVIDIA to a trillion-dollar valuation this year. He’s not as public a figure as Altman, but his work has proven invaluable this year.
10. Geoffrey Hinton: Known as the “godfather of AI,” Hinton distinguished himself over a long career as one of the most prolific and accomplished researchers of artificial neural networks, a set of technologies that powers machine learning. He even won the Turing Award in 2018 — the most esteemed prize in computer science. But this year, Hinton made headlines in May after quitting his job at Google and citing the risks of unfettered development of AI. A whistleblower of sorts, Hinton delivers an extra-potent message, because in many respects he made the breakthroughs that led to present-day AI.