Should we regulate generative AI with open or closed models?


Marietje Schaake, International Policy Fellow at the Stanford Institute for Human-Centered Artificial Intelligence and former European Parliamentarian, co-hosts GZERO AI, our new weekly video series intended to help you keep up with and make sense of the latest news on the AI revolution. Fresh from a workshop hosted by the Institute for Advanced Study in Princeton, where the discussion centered on whether generative AI models should be open to the public or closed and held by a select few, she shares in this episode her insights into the workings, effectiveness, and drawbacks of each approach.

We just finished a half-week workshop that dealt with the billion-dollar question of how best to regulate generative AI. And often this discussion tends to get quite tribal, between those who say, “Well, open models are the best route to safety because they foster transparency and learning for a larger community, which also means scrutiny of things that might go wrong,” and those who say, “No, actually, closed and proprietary models, which can be scrutinized by the handful of companies able to produce them, are safer, because then malign actors may not get their hands on the most advanced technology.”

And one of the key takeaways that I have from this workshop, which was kindly hosted by the Institute for Advanced Study in Princeton, is actually that the question of open versus closed models, and also the question of whether or not to regulate, is much more of a gradient. There is a big spectrum of considerations, from models that are all the way open and what that means for safety and security, to models that are all the way closed and what that means for opportunities for oversight, as well as the whole discussion about whether or not to regulate and what good regulation looks like. So, one discussion that we had, for example, was how we can assess the most advanced or frontier models in a research phase with independent, government-mandated oversight, and then decide more deliberately when these new models are safe enough to be put out into the market, or the wild.

That way, there is much less of the cutthroat market dynamic that leads companies to rush out their latest models out of concern that a competitor might be faster, and there is oversight built in that really considers, first and foremost, what is important for society and for the most vulnerable, on anything from national security to election integrity to, for example, nondiscrimination principles, which are already under enormous pressure thanks to AI.

So, a lot of great material to continue working on. We will hopefully publish something that I can share soon, but these were my takeaways from an intense two and a half days of AI discussions.
