GZERO AI

“Like asking the butcher how to test his meat”: Q&A on the OpenAI fiasco and the need for regulation

AI-generated art courtesy of Midjourney

The near-collapse of OpenAI, the world’s foremost artificial intelligence company, shocked the world earlier this month. Its nonprofit board of directors fired its high-profile and influential CEO, Sam Altman, on Friday, Nov. 17, for not being “consistently candid” with them. But the board never explained its rationale. Altman campaigned to get his job back and was joined in his pressure campaign by OpenAI lead investor Microsoft and 700 of OpenAI’s 770 employees. Days later, multiple board members resigned, new ones were installed, and Altman returned to his post.

To learn more about what the blowup means for global regulation, we spoke to Marietje Schaake, a former member of the European Parliament who serves as the international policy director of the Cyber Policy Center at Stanford University and as president of the Cyber Peace Institute. Schaake is also a host of the GZERO AI video series.

The interview has been edited for clarity and length.


GZERO: What are you taking away from the OpenAI debacle?

Schaake: This incident makes it crystal clear that companies alone are not the legitimate or best-suited stakeholders to govern powerful AI. The confrontation between the board and the executive leadership at OpenAI seems to have included, at least in part, disagreement about the impact of next-generation models on society. To weigh what is and is not an acceptable risk, there needs to be public research and scrutiny, grounded in public policy. I am hoping the soap opera we watched at OpenAI underlines the need for democratic governance, not corporate governance.

Was there any element that was particularly concerning to you?

The governance processes seem underdeveloped in light of the stakes. And there are probably many other parts of OpenAI that lack the maturity to deal with the many impacts their products will have around the world. I am even more concerned than I was two weeks ago.

Microsoft exerted its power by pressuring OpenAI's nonprofit board to partially resign and reinstate Altman. Should we be concerned about Microsoft's influence in the AI industry?

I do not like the fact that with the implosion of OpenAI's governance, the entire notion of giving less power to investors may now lose support. For Microsoft to throw around the weight of its financial resources is not surprising, but it is also hardly reassuring. Profit motives all too often clash with the public interest, and the competition between companies investing in AI is almost as fierce as that between the developers of AI applications. The drive to outgame competitors rather than to consider the many stakeholders and factors in society is a perverse one. But instead of looking at the various companies in the ecosystem, we need to look to government to assert itself, and to develop a mechanism of independent oversight.

Sam Altman has been an incredibly visible ambassador for this technology in the US and on the world stage. How would you describe the role he played over the past year with regard to shaping global regulation of AI?

Altman has become the face of the industry, for better and for worse. He has made conflicting statements about how he sees regulation affecting the company. In the same week, he encouraged Congress to adopt regulation and threatened that OpenAI would leave the EU because of the EU AI Act – regulation. It is a reminder, for anyone who needs it, that a brilliant businessman should not be the one deciding on regulation. This episode also shows we need a more sophisticated debate about regulation. Just claiming to be for or against it means little; what matters is the specific objectives of a given piece of regulation, the trade-offs, and the enforcement.

In your view, has his lobbying been successful? Was his message more successful with certain regulators as opposed to others? Did politicians listen to him?

He cleverly presented himself as an ally to regulators when he appeared before Congress. That is a lesson he may well have learned from Microsoft. In that sense, Altman got a much friendlier reception than Mark Zuckerberg ever did. It seems members of Congress listened and even asked him for advice on how AI should be regulated. It is like asking the butcher how to test his meat. I hope politicians stop asking CEOs for advice and instead feel empowered to consult the many other experts and people affected by the rollout of AI, in order to serve the public interest, prevent harms, and protect rights, competition, and national security.

Given what you know now, do you think Altman will continue being the poster boy for AI and an active player in shaping AI regulation?

There are already different camps with regard to what success or danger looks like around AI. There will surely be tribes that see Altman as having come out stronger from this episode. Others will underline the very cynical dealings we saw on display. We should not forget that there is a lot of detail we do not even know about what went down.

I feel like everyone is the meme of Michael Jackson eating popcorn, fascinated by this bizarre series of events, desperately trying to understand what's going on. What are you hoping to learn next? What answers do the people at the center of this ordeal owe to the public?

Actually, we should not be distracted by the entertainment aspect of this soap opera of a confrontation, complete with cliffhangers and plot twists. Instead, if the board, which had a mandate emphasizing the public good, has concerns about OpenAI’s new models, it should speak out. Even if the steps it took appeared hasty and haphazard, we must assume there were reasons behind its concerns.

If you were back in the European Parliament, how would you be responding?

I would work on regulation, before, during, and after this drama. In other words, I would not have changed my activities because of it.

What final message would you like to leave us with?

Maybe just to repeat that this saga underlines the key problems of a lack of transparency, of democratic rules, and of independent oversight over these companies. If anyone needed a refresher of why those are urgently needed, we can thank the OpenAI board and Sam Altman for sounding the alarm bell once more.
