GZERO AI
OpenAI logo seen in illustration.
Despite the recent Sam Altman ouster and reinstatement, OpenAI’s long-planned sale of private company stock is set to proceed. Before the turmoil, investors — led by venture capital firms such as Thrive Capital, Sequoia Capital, and Khosla Ventures — were hoping for an $86 billion valuation. That sum would make it one of the most valuable private tech companies in the world, ahead of payments company Stripe ($50 billion) and Fortnite maker Epic Games ($32 billion), though it still falls short of the largest pre-IPO firms like TikTok parent ByteDance ($223.5 billion) and SpaceX (which expects a $150 billion valuation in an upcoming stock sale).
OpenAI was most recently valued at $29 billion in April, after Microsoft invested $10 billion in the company.
With the stock sale set to move forward, the biggest question is whether investor enthusiasm for the shares is at all tempered by the turmoil at the company. Or has the ordeal left power so consolidated with Altman and Microsoft that those rooting for OpenAI to fulfill its free-market potential are so enamored by the prospect of investing that they blow past the $86 billion mark toward atmospheric heights?
At the height of the Altman firing saga, investors were reportedly considering writing down the value of their OpenAI shares to $0 — that’s how dire the outlook was. Now, investor Vinod Khosla says OpenAI is “the same or better off than it was last Thursday” and should warrant the $86 billion price tag. Bloomberg columnist Matt Levine says OpenAI’s true valuation is anyone’s guess but cautions that even though, for now, Altman and Microsoft triumphed over the will of the parent nonprofit’s board, those not-for-profit motives make OpenAI an eternally risky investment.
President Vladimir Putin on Friday warned that the West should not be allowed to develop a monopoly in the sphere of artificial intelligence and said that a much more ambitious Russian strategy for the development of AI would be approved shortly.
Putin’s speech was both a statement of intent and a critique of the West’s dominance of modern technology. “Monopolistic dominance of such foreign technology in Russia is unacceptable, dangerous, and inadmissible,” Putin said, calling foreign powers’ “monopoly and domination” of AI “unacceptable and dangerous.”
Russia is lagging in the AI race. By one count of “significant machine learning systems” cited by Stanford University’s Institute for Human-Centered AI, the US leads the world with 16 such systems, followed by the UK with eight, and China with three. Russia, meanwhile, has just one.
Russia has its own AI chatbots hoping to rival OpenAI’s ChatGPT, such as GigaChat from the state-owned financial services company Sberbank. But Moscow has meddled in the affairs of its private technology firms, including Yandex, known as the “Google of Russia” for its namesake search engine. Yandex, now owned by a Dutch holding company, is in the process of divesting its Russian assets after clashing with Moscow’s censors. With Yandex largely left out of Moscow’s AI planning due to deep-seated distrust, Russia has funneled its AI ambitions through state-owned firms like Sberbank and made limited progress in jumpstarting its domestic AI development.
Moscow may be serious about funding AI development, but that would require Putin to loosen his chokehold on Russian industry – which is about as likely as him sharing eggnog with Zelensky this Christmas.
Taylor Owen, professor at the Max Bell School of Public Policy at McGill University and director of its Centre for Media, Technology & Democracy, co-hosts GZERO AI, our new weekly video series intended to help you keep up and make sense of the latest news on the AI revolution. In this episode of the series, Taylor Owen takes a look at the OpenAI-Sam Altman drama.
Hi, I'm Taylor Owen. This is GZERO AI. So if you're watching this video, then like me, you've probably been glued to your screen over the past week, watching the psychodrama play out at OpenAI, a company literally at the center of the current AI moment we're in.
Sam Altman, the CEO of OpenAI, was kicked out of his company by his own board of directors. Less than a week later, he was back as CEO, and all but one of those board members was gone. All of this would be amusing, and it certainly was in a glib sort of way, if the consequences weren't so profound. I've been thinking a lot about how to make sense of all this, and I keep coming back to a profound sense of deja vu.
First, though, a quick recap. We don't know all of the details, but it really does seem that at the core of this conflict was a tension between two different views of what OpenAI was and will be in the future. Remember, OpenAI was founded in 2015 as a nonprofit, and as a nonprofit because it chose a mission of building technologies to benefit all of humanity over a private corporate mission of increasing value for shareholders. When it started running out of money a couple of years later, though, it embedded a for-profit entity within this nonprofit structure so that it could capitalize on the commercial value of the products the nonprofit was building. This is where the tension lay: between the incentives of a for-profit engine and the values and mission of a nonprofit board structure.
All of this can seem really new. OpenAI was building legitimately groundbreaking technologies, technologies that could transform our world. But I think the problem, and the wider problem here, is not a new one. This is where I was getting deja vu. Back in the early days of Web 2.0, there was also a huge amount of excitement over a new disruptive technology. In this case, the power of social media. In some ways, events like the Arab Spring were very similar to the emergence of ChatGPT: a seismic event that demonstrated to broader society the power of an emerging technology.
Now, I've spent the last 15 years studying the emergence of social media, and in particular how we as societies can balance the immense benefits and upside of these technologies with the clear downside risks as they emerge. I actually think we got a lot of that balance wrong. It's times like this, when a new technology emerges, that we need to think carefully about what lessons we can learn from the past. I want to highlight three.
First, we need to be really clear-eyed about who has power in the technological infrastructure we're deploying. In the case of OpenAI, it seems very clear that the profit incentives won out over the broader social mandate. Power is also about who controls infrastructure. In this case, Microsoft played a big role. It controlled the compute infrastructure, and it wielded this power to come out on top in this turmoil.
Second, we need to bring the public into this discussion. Ultimately, a technology will only be successful if it has legitimate citizen buy-in, if it has a social license. What are citizens supposed to think when they hear the very people building these technologies disagreeing over their consequences? Ilya Sutskever, for example, said just a month ago, "If you value intelligence over all human qualities, you're going to have a bad time," when talking about the future of AI. This kind of comment, coming from the very people building the technologies, just exacerbates an already deep insecurity many people feel about the future. Citizens need to be allowed, enabled, and empowered to weigh in on the conversation about the technologies that are being built on their behalf.
Finally, we simply need to get the governance right this time. We didn't last time. For over 20 years, we've largely left the social web unregulated, and it's had disastrous consequences. This means not being confused by technical or systemic complexity masking lobbying efforts. It means applying existing laws and regulations first ... In the case of AI, things like copyright, online safety rules, data privacy rules, competition policy ... before we get too bogged down in big, large-scale AI governance initiatives. We just can't let the perfect be the enemy of the good. We need to iterate, experiment, and countries need to learn from each other in how they step into this complex new world of AI governance.
Unfortunately, I worry we're repeating some of the same mistakes of the past. Once again, we're moving fast and we're breaking things. If the new board of OpenAI is any indication of how they're thinking about governance, and how the AI world in general values and thinks about governance, there's even more to worry about. Three white men calling the shots at a tech company that could very well transform our world. We've been here before, and it doesn't end well. Our failure to adequately regulate social media had huge consequences. While the upside of AI is undeniable, it's looking like we're making many of the same mistakes, only this time the consequences could be even more dire.
I'm Taylor Owen, and thanks for watching.
There is no more disruptive or more remarkable technology than AI, but let’s face it, it is incredibly hard to keep up with the latest developments. Even more importantly, it’s almost impossible to understand what the latest AI innovations actually mean. How will AI affect your job? What do you need to know? Who will regulate it? How will it disrupt work, the economy, politics, war?
That's where our new weekly GZERO AI newsletter comes in to help. GZERO AI will give you the first key insights you need to know, putting perspective on the hype and context on the AI doomers and dreamers. Featuring the world-class analysis that is the hallmark of GZERO and its founder, Ian Bremmer, himself a leading voice in the AI space, GZERO AI is the essential weekly read of the AI revolution.
Our goal is to deliver understanding as well as news, to turn information into perspective and data into insights. GZERO AI will feature some of the world’s most important voices on technology, such as our weekly data columnist Azeem Azhar and our video columnists Marietje Schaake and Taylor Owen. GZERO AI is your essential tool for understanding the technology that...is understanding you!
Sign up now for GZERO AI (along with GZERO's other newsletters).
“Like asking the butcher how to test his meat”: Q&A on the OpenAI fiasco and the need for regulation
AI-generated art courtesy of Midjourney
The near-collapse of OpenAI, the world’s foremost artificial intelligence company, shocked the world earlier this month. Its nonprofit board of directors fired its high-profile and influential CEO, Sam Altman, on Friday, Nov. 17, for not being “consistently candid” with them. But the board never explained its rationale. Altman campaigned to get his job back and was joined in his pressure campaign by OpenAI lead investor Microsoft and 700 of OpenAI’s 770 employees. Days later, multiple board members resigned, new ones were installed, and Altman returned to his post.
To learn more about what the blowup means for global regulation, we spoke to Marietje Schaake, a former member of the European Parliament who serves as the international policy director of the Cyber Policy Center at Stanford University and as president of the Cyber Peace Institute. Schaake is also a host of the GZERO AI video series.
The interview has been edited for clarity and length.
GZERO: What are you taking away from the OpenAI debacle?
Schaake: This incident makes it crystal clear that companies alone are not the legitimate or most fit stakeholders to govern powerful AI. The confrontation between the board and the executive leadership at OpenAI seems to have at least included disagreement about the impact of next-generation models on society. To weigh what is and is not an acceptable risk, there needs to be public research and scrutiny, based on public policy. I am hoping the soap opera we watched at OpenAI underlines the need for democratic governance, not corporate governance.
Was there any element that was particularly concerning to you?
The governance processes seem underdeveloped in light of the stakes. And there are probably many other parts of OpenAI that lack the maturity to deal with the many impacts their products will have around the world. I am even more concerned than I was two weeks ago.
Microsoft exerted its power by pressuring OpenAI's nonprofit board to partially resign and reinstate Altman. Should we be concerned about Microsoft's influence in the AI industry?
I do not like the fact that with the implosion of OpenAI's governance, the entire notion of giving less power to investors may now lose support. For Microsoft to throw around the weight of its financial resources is not surprising, but it is also hardly reassuring. Profit motives all too often clash with the public interest, and the competition between companies investing in AI is almost as fierce as that between the developers of AI applications. The drive to outgame competitors rather than to consider multiple stakeholders and factors in society is a perverse one. But instead of looking at the various companies in the ecosystem, we need to look to government to assert itself, and to develop a mechanism of independent oversight.
Sam Altman has been an incredibly visible ambassador for this technology in the US and on the world stage. How would you describe the role he played over the past year with regard to shaping global regulation of AI?
Altman has become the face of the industry, for better and worse. He has made conflicting statements on how he sees regulation as impacting the company. In the same week, he encouraged Congress to adopt regulation and threatened that OpenAI would leave the EU because of the EU AI Act, which is itself regulation. It is a reminder for anyone who needs it that a brilliant businessman should not be the one in charge of deciding on regulation. This anecdote also shows we need a more sophisticated debate about regulation. Just claiming to be in favor or against means little; what matters is the specific objectives of a given piece of regulation, the trade-offs, and the enforcement.
In your view, has his lobbying been successful? Was his message more successful with certain regulators as opposed to others? Did politicians listen to him?
He cleverly presented himself as an ally to regulators when he appeared before Congress. That is a lesson he may well have learned from Microsoft. In that sense, Altman got a much friendlier reception than Mark Zuckerberg ever got. It seems members of Congress listened and even asked him for advice on how AI should be regulated. It is like asking the butcher how to test his meat. I hope politicians stop asking CEOs for advice and instead feel empowered to consult many more experts and people impacted by the rollout of AI, to serve the public interest, and to prevent harms and protect rights, competition, and national security.
Given what you know now, do you think Altman will continue being the poster boy for AI and an active player in shaping AI regulation?
There are already different camps with regard to what success or danger looks like around AI. There will surely be tribes that see Altman as having come out stronger from this episode. Others will underline the very cynical dealings we saw on display. We should not forget that there is a lot of detail we do not even know about what went down.
I feel like everyone is the meme of Michael Jackson eating popcorn, fascinated by this bizarre series of events, desperately trying to understand what's going on. What are you hoping to learn next? What answers do the people at the center of this ordeal owe to the public?
Actually, we should not be distracted by the entertainment aspect of this soap of a confrontation, complete with cliffhangers and plot twists. Instead, if the board, which had a mandate emphasizing the public good, has concerns about OpenAI’s new models, it should speak out. Even if the steps taken appeared hasty and haphazard, we must assume there were reasons behind its concerns.
If you were back in the European Parliament, how would you be responding?
I would work on regulation, before, during, and after this drama. In other words, I would not have changed my activities because of it.
What final message would you like to leave us with?
Maybe just to repeat that this saga underlines the key problems of a lack of transparency, of democratic rules, and of independent oversight over these companies. If anyone needed a refresher on why those are urgently needed, we can thank the OpenAI board and Sam Altman for sounding the alarm bell once more.
Hard Numbers: Delayed chip exports, Three-day workweek, Tim Cook’s view on regulation, Concern vs. excitement, Security pact
Illustration of the NVIDIA logo.
1.9%: NVIDIA is building new computer chips to sell to China that are compliant with updated US export regulations. But the California-based company recently announced a delay in the release of those chips until Q1 2024, citing technical problems. In response, NVIDIA’s high-flying stock, which took the company’s valuation north of $1 trillion this year, fell 1.9% on Friday.
3: Microsoft co-founder Bill Gates doesn’t think AI is going to take everyone’s job, but he does think it could lead to a three-day workweek. “I don't think AI's impact will be as dramatic as the Industrial Revolution,” Gates told Trevor Noah on the comedian’s podcast, “but it certainly will be as big as the introduction of the PC.”
18: Apple CEO Tim Cook thinks that generative AI needs “rules of the road and some regulation,” which he expects will come in the next 18 months. “I think most governments are a little behind the curve today,” Cook said on a podcast with the pop singer Dua Lipa. “I think the US, the UK, the EU, and several countries in Asia are quickly coming up to speed.”
52%: Some 52% of Americans are more concerned than excited about the use of AI, according to a Pew Research Center survey. Ten percent are more excited than concerned, and 36% have mixed feelings.
18: A group of 18 countries, headlined by the US and UK, announced on Sunday that they had signed a pact to ensure AI systems are safe from cybersecurity threats. The commitments are voluntary but offer guidelines to companies developing AI systems at a time when governments are still in the early stages of crafting regulation to rein in the emerging technology.

An illustration of AI atop a computer motherboard.
Europe has spent two years trying to adopt comprehensive AI regulation. The AI Act, first introduced by the European Commission in 2021, aspires to regulate AI models based on different risk categories.
The proposed law would ban dangerous models outright, such as those that might manipulate humans, and mandate strict oversight and transparency for powerful models that carry the risk of harm. For lower-risk models, the AI Act would require simple disclosures. The makers of generative AI models, like the one powering ChatGPT, would have to submit to safety checks and publish summaries of the copyrighted material they’re trained on. In May, the European Parliament approved the legislation, but the three bodies of the European legislature are still hammering out the final text.
Bump in the road: Last week, France, Germany, and Italy dealt the AI Act a setback by reaching an agreement that supports “mandatory self-regulation through codes of conduct” for AI developers building so-called foundation models. These are models trained on massive sets of data that can be used for a wide range of applications, including OpenAI’s GPT-4, the large language model that powers ChatGPT. This surprise deal represents a desire to bolster European AI firms at the expense of the effort to hold them legally accountable for their products.
The view of these countries, three of the most powerful in the EU, is that the application of AI should be regulated, not the technology itself, which is a departure from the EU’s existing plan to regulate foundation models. While the tri-country proposal would require developers to publish information about safety tests, it doesn’t impose penalties for withholding that information, though it suggests that sanctions could be introduced.
A group of tech companies, including Apple, Ericsson, Google, and SAP, signed a letter backing the proposal: “Let's not regulate [AI] out of existence before they get a chance to scale, or force them to leave,” the group wrote.
But it angered European lawmakers who favor the AI Act. “This is a declaration of war,” one member of the European Parliament told Politico, which suggested that this “power grab” could even end progress on the AI Act altogether. A fifth round of European trilogue discussions is set for Dec. 6, 2023.
EU regulators have grown hungry to regulate AI, a counter to the more laissez-faire approach of the United Kingdom under Prime Minister Rishi Sunak, whose recent Bletchley Declaration, signed by countries including the US and China, was widely considered nonbinding and light-touch. Now, the three largest economies in Europe, France, Germany, and Italy, have brought that thinking to EU negotiations — and they need to be appeased. Europe’s Big Three not only carry political weight but can also form a blocking minority in the Council of the European Union if there is a vote, says Nick Reiners, senior geotechnology analyst at Eurasia Group.
Reiners says this wrench thrown by the three makes it unlikely that the AI Act’s text will be agreed upon by the original target date of Dec. 6. But there is still strong political will on both sides, he says, to reach a compromise before next June’s European Parliament elections.