The OpenAI-Sam Altman drama: Why should you care?
Taylor Owen, professor at the Max Bell School of Public Policy at McGill University and director of its Centre for Media, Technology & Democracy, co-hosts GZERO AI, our new weekly video series intended to help you keep up and make sense of the latest news on the AI revolution. In this episode of the series, Taylor Owen takes a look at the OpenAI-Sam Altman drama.
Hi, I'm Taylor Owen. This is GZERO AI. So if you're watching this video, then like me, you've probably been glued to your screen over the past week, watching the psychodrama play out at OpenAI, a company at the very center of the current AI moment we're in.
Sam Altman, the CEO of OpenAI, was kicked out of his company by his own board of directors. Less than a week later, he was back as CEO, and all but one of those board members was gone. All of this would be amusing, and it certainly was in a glib sort of way, if the consequences weren't so profound. I've been thinking a lot about how to make sense of all this, and I keep coming back to a deep sense of deja vu.
First, though, a quick recap. We don't know all of the details, but it really does seem to be the case that at the core of this conflict was a tension between two different views of what OpenAI was and will be in the future. Remember, OpenAI was founded in 2015 as a nonprofit, and as a nonprofit because it chose a mission of building technologies to benefit all of humanity over a private corporate mission of increasing value for shareholders. When they started running out of money a couple of years later, though, they embedded a for-profit entity within this nonprofit structure so that they could capitalize on the commercial value of the products the nonprofit was building. This is where the tension lay: between the incentives of a for-profit engine and the values and mission of a nonprofit board structure.
All of this can seem really new. OpenAI was building legitimately groundbreaking technologies, technologies that could transform our world. But I think the wider problem here is not a new one. This is where I was getting deja vu. Back in the early days of Web 2.0, there was also a huge amount of excitement over a new disruptive technology. In this case, the power of social media. In some ways, events like the Arab Spring were very similar to the emergence of ChatGPT: a seismic event that demonstrated to broader society the power of an emerging technology.
Now, I've spent the last 15 years studying the emergence of social media, and in particular how we as societies can balance the immense benefits and upside of these technologies with the clear downside risks as they emerge. I actually think we got a lot of that balance wrong. It's times like this, when a new technology emerges, that we need to think carefully about what lessons we can learn from the past. I want to highlight three.
First, we need to be really clear-eyed about who has power in the technological infrastructure we're deploying. In the case of OpenAI, it seems very clear that the profit incentives won out over the broader social mandate. Power is also, though, about who controls infrastructure. In this case, Microsoft played a big role. They controlled the compute infrastructure, and they wielded this power to come out on top in this turmoil.
Second, we need to bring the public into this discussion. Ultimately, a technology will only be successful if it has legitimate citizen buy-in, if it has a social license. What are citizens supposed to think when they hear the very people building these technologies disagreeing over their consequences? Ilya Sutskever, for example, said just a month ago, "If you value intelligence over all human qualities, you're going to have a bad time," when talking about the future of AI. This kind of comment, coming from the very people who are building the technologies, just exacerbates an already deep insecurity many people feel about the future. Citizens need to be enabled and empowered to weigh in on the conversation about the technologies that are being built on their behalf.
Finally, we simply need to get the governance right this time. We didn't last time. For over 20 years, we've largely left the social web unregulated, and it's had disastrous consequences. This means not being confused by technical or systemic complexity masking lobbying efforts. It means applying existing laws and regulations first ... In the case of AI, things like copyright, online safety rules, data privacy rules, competition policy ... before we get too bogged down in big, large-scale AI governance initiatives. We just can't let the perfect be the enemy of the good. We need to iterate, experiment, and countries need to learn from each other in how they step into this complex new world of AI governance.
Unfortunately, I worry we're repeating some of the same mistakes of the past. Once again, we're moving fast and we're breaking things. If the new board of OpenAI is any indication of how they're thinking about governance, and of how the AI world in general values and thinks about governance, there's even more to worry about. Three white men calling the shots at a tech company that could very well transform our world. We've been here before, and it doesn't end well. Our failure to adequately regulate social media had huge consequences. While the upside of AI is undeniable, it's looking like we're making many of the same mistakes, only this time the consequences could be even more dire.
I'm Taylor Owen, and thanks for watching.
“Like asking the butcher how to test his meat”: Q&A on the OpenAI fiasco and the need for regulation
AI-generated art courtesy of Midjourney
The near-collapse of OpenAI, the world’s foremost artificial intelligence company, shocked the world earlier this month. Its nonprofit board of directors fired its high-profile and influential CEO, Sam Altman, on Friday, Nov. 17, for not being “consistently candid” with them. But the board never explained its rationale. Altman campaigned to get his job back and was joined in his pressure campaign by OpenAI lead investor Microsoft and 700 of OpenAI’s 770 employees. Days later, multiple board members resigned, new ones were installed, and Altman returned to his post.
To learn more about what the blowup means for global regulation, we spoke to Marietje Schaake, a former member of the European Parliament who serves as the international policy director of the Cyber Policy Center at Stanford University and as president of the Cyber Peace Institute. Schaake is also a host of the GZERO AI video series.
The interview has been edited for clarity and length.
GZERO: What are you taking away from the OpenAI debacle?
Schaake: This incident makes it crystal clear that companies alone are not the legitimate or most fit stakeholder to govern powerful AI. The confrontation between the board and the executive leadership at OpenAI seems to have at least included disagreement about the impact of next-generation models on society. To weigh what is and is not an acceptable risk, there needs to be public research and scrutiny, based on public policy. I am hoping the soap opera we watched at OpenAI underlines the need for democratic governance, not corporate governance.
Was there any element that was particularly concerning to you?
The governance processes seem underdeveloped in light of the stakes. And there are probably many other parts of OpenAI that lack the maturity to deal with the many impacts their products will have around the world. I am even more concerned than I was two weeks ago.
Microsoft exerted its power by pressuring OpenAI's nonprofit board to partially resign and reinstate Altman. Should we be concerned about Microsoft's influence in the AI industry?
I do not like the fact that, with the implosion of OpenAI's governance, the entire notion of giving less power to investors may now lose support. For Microsoft to throw around the weight of its financial resources is not surprising, but also hardly reassuring. Profit motives all too often clash with the public interest, and the competition between companies investing in AI is almost as fierce as that between the developers of AI applications. The drive to outgame competitors rather than to consider multiple stakeholders and factors in society is a perverse one. But instead of looking at the various companies in the ecosystem, we need to look to government to assert itself, and to develop a mechanism of independent oversight.
Sam Altman has been an incredibly visible ambassador for this technology in the US and on the world stage. How would you describe the role he played over the past year with regard to shaping global regulation of AI?
Altman has become the face of the industry, for better and worse. He has made conflicting statements on how he sees regulation impacting the company. In the same week, he encouraged Congress to adopt regulation and threatened that OpenAI would leave the EU because of the EU AI Act – regulation. It is a reminder, for anyone who needs it, that a brilliant businessman should not be the one in charge of deciding on regulation. This anecdote also shows we need a more sophisticated debate about regulation. Just claiming to be in favor or against means little; what matters is the specific objectives of a given piece of regulation, the trade-offs, and the enforcement.
In your view, has his lobbying been successful? Was his message more successful with certain regulators as opposed to others? Did politicians listen to him?
He cleverly presented himself as an ally to regulators, when he appeared before Congress. That is a lesson he may well have learned from Microsoft. In that sense, Altman got a much more friendly reception than Mark Zuckerberg ever got. It seems members of Congress listened and even asked him for advice on how AI should be regulated. It is like asking the butcher how to test his meat. I hope politicians stop asking CEOs for advice and rather feel empowered to consider many more experts and people impacted by the rollout of AI, to serve the public interest, and to prevent harms, protect rights, competition, and national security.
Given what you know now, do you think Altman will continue being the posterboy for AI and an active player in shaping AI regulation?
There are already different camps with regard to what success or danger looks like around AI. There will surely be tribes that see Altman as having come out stronger from this episode. Others will underline the very cynical dealings we saw on display. We should not forget that there is a lot of detail we do not even know about what went down.
I feel like everyone is the meme of Michael Jackson eating popcorn, fascinated by this bizarre series of events, desperately trying to understand what's going on. What are you hoping to learn next? What answers do the people at the center of this ordeal owe to the public?
Actually, we should not be distracted by the entertainment aspect of this soap opera of a confrontation, complete with cliffhangers and plot twists. Instead, if the board, which had a mandate emphasizing the public good, has concerns about OpenAI’s new models, they should speak out. Even if the steps taken appeared hasty and haphazard, we must assume there were reasons behind their concerns.
If you were back in the European Parliament, how would you be responding?
I would work on regulation, before, during, and after this drama. In other words, I would not have changed my activities because of it.
What final message would you like to leave us with?
Maybe just to repeat that this saga underlines the key problems of a lack of transparency, of democratic rules, and of independent oversight over these companies. If anyone needed a refresher of why those are urgently needed, we can thank the OpenAI board and Sam Altman for sounding the alarm bell once more.
Sam Altman, who has just been ousted as CEO of OpenAI, is seen here testifying before a Senate Judiciary Privacy, Technology & the Law Subcommittee back in May 2023.
A chaotic shakeup at OpenAI
OpenAI’s board of directors fired Sam Altman as CEO on Friday — a shock decision with ramifications for the entire AI industry. After Altman and allies campaigned throughout the weekend to get him reinstated, the board affirmed its decision and brought in former Twitch CEO Emmett Shear to lead the company responsible for ChatGPT. Trouble is, there may be no one left to lead.
Microsoft, which has invested $13 billion in the for-profit arm of OpenAI but did not hold a seat on its nonprofit board of directors, was blindsided by Altman’s ouster. Microsoft CEO Satya Nadella announced that Altman and fellow OpenAI co-founder Greg Brockman are joining Microsoft to “lead a new advanced AI research team.” Nadella may not have been able to control OpenAI’s board, but he can poach its best talent for his own gain.
Microsoft may soon have most of OpenAI’s team. More than 700 of the 770 employees of OpenAI have signed an open letter threatening to quit unless the board resigns and Altman is reinstated. They all have open offers to join Microsoft. That includes Mira Murati, the former chief technology officer, whom the board briefly appointed interim CEO on Friday before bringing in Shear, an outsider. A group of investors, including Thrive Capital, is still pressuring the board to reinstate Altman, and Nadella said that, despite his offer to absorb Altman and his loyalists, he would welcome this outcome.
OpenAI’s board has still not explained why Altman was fired. It merely said Altman was “not consistently candid in his communications with the board” and that the four members had lost confidence in him. In effect, the board was exercising its duty to put the organization's mission above profit motive or even employee demands: OpenAI has a unique setup where a small nonprofit board controls a massive for-profit venture. While it was within its rights to terminate Altman, it has not adequately explained to anyone why it did so.
The schism at OpenAI is of seismic importance: Before Friday, OpenAI was an unstoppable industry leader with the most visible CEO in the industry, and it was backed by one of the world’s largest and most powerful companies. What would OpenAI be without most of its staff? Did Microsoft waste its $13 billion investment or functionally acquire OpenAI for free? Why does OpenAI co-founder and board member Ilya Sutskever now “deeply regret” firing Altman?
Further, this messy breakup has direct consequences for the technology. OpenAI was on the cutting edge of AI development, and its products are used by hundreds of millions of people worldwide. With the company behind it now in disarray, we have to ask: What is the future of ChatGPT? Will a rival outpace it, or will Microsoft build its own version?
Gerald Butts, vice chairman of Eurasia Group, said that the future of AI is somewhat insulated from this dramatic game of musical chairs. “AI development is independent of the personalities involved at this stage,” Butts said. “Overall, this whole episode is a nothing burger.”
Sam Altman, CEO of OpenAI, attends the Asia-Pacific Economic Cooperation summit in San Francisco, California, on Nov. 16, 2023, just a day before being fired by his board of directors.
Ask ChatGPT: What will Sam Altman achieve for Microsoft?
On Friday, the tech world was abuzz with the news that Sam Altman, the 38-year-old co-founder of OpenAI, had been pink-slipped by the firm’s board of directors after a hastily called Google Meet. OpenAI’s other co-founder, Greg Brockman, also decided to leave the company after the board demoted him in the same meeting. By late Sunday, they both had new jobs.
According to insiders, Altman had been moving “too fast” in the development of new AI technology. Board members were reportedly concerned about OpenAI’s recent developer conference and the announcement of a means for anyone to create their own versions of ChatGPT. Ilya Sutskever, a key researcher and board member who was also one of the co-founders of OpenAI, was reportedly concerned about the dangers posed by OpenAI’s technology and believed Altman was downplaying that risk. The board was also apparently uncomfortable with Altman’s attempt to raise $100 billion from investors in the Middle East and SoftBank founder Masayoshi Son to establish a new microchip development company.
Altman’s firing was only possible because of the unique corporate structure of OpenAI. Despite being a co-founder, Altman had no equity in the company. OpenAI is controlled by the board of its 501(c)(3) charity, OpenAI Inc., which was established via a charter to “ensure that safe artificial general intelligence is developed and benefits all of humanity.” That charter takes “precedence over any obligation to generate a profit.”
Altman did not take it lying down. On Saturday night, he tweeted “i love the openai team so much.” Hundreds of employees, including interim CEO Mira Murati and COO Brad Lightcap, liked or reposted the tweet within the hour. Over the weekend, investors also rallied behind Altman, including Thrive Capital, Tiger Global, Khosla Ventures, and Sequoia Capital. A plan to sell as much as $1 billion in employee stock now hangs in the balance; Thrive Capital was set to lead that tender offer and to value OpenAI at $86 billion.
Despite the pressure, OpenAI’s board chose not to reinstate Altman – it refused to meet his demands for a new board and governance structure – and announced Sunday evening that Emmett Shear, former chief executive of Twitch, will replace him as CEO. Shear faces a tough job, given that so many OpenAI staffers had threatened to quit unless Altman returned.
But some of them may have a landing pad: Microsoft CEO Satya Nadella posted late Sunday on X that Altman, Brockman, and their team will be joining Microsoft to lead a “new advanced AI research team.”