Should we regulate generative AI with open or closed models?


Marietje Schaake, International Policy Fellow at Stanford Human-Centered Artificial Intelligence and former European Parliamentarian, co-hosts GZERO AI, our new weekly video series intended to help you keep up with and make sense of the latest news on the AI revolution. Fresh from a workshop hosted by the Institute for Advanced Study in Princeton, where the discussion centered on whether generative AI models should be open to the public or closed to all but a select few, in this episode she shares insights into the potential workings, effectiveness, and drawbacks of each approach.

We just finished a two-and-a-half-day workshop that dealt with the billion-dollar question of how best to regulate generative AI. This discussion often turns quite tribal, pitting those who say, “Well, open models are the best route to safety because they foster transparency and learning for a larger community, which also means scrutiny of things that might go wrong,” against those who say, “No, actually, closed and proprietary models, scrutinized by the handful of companies able to produce them, are safer, because then malign actors may not get their hands on the most advanced technology.”

One of my key takeaways from this workshop, which was kindly hosted by the Institute for Advanced Study in Princeton, is that the question of open versus closed models, and indeed the question of whether or not to regulate at all, is much more of a gradient. There is a big spectrum of considerations, from models that are all the way open, and what that means for safety and security, to models that are all the way closed, and what that means for opportunities for oversight, as well as the whole discussion about whether or not to regulate and what good regulation looks like.

So, one discussion we had, for example, was how to assess the most advanced, or frontier, models in a research phase with independent, government-mandated oversight, and then decide more deliberately when these new models are safe enough to be put out into the market, or the wild.

That way there would be much less of the cutthroat market dynamics that lead companies to push out their latest models for fear that a competitor might be faster, and there would be built-in oversight that considers, first and foremost, what matters for society and for the most vulnerable, from national security to election integrity to, for example, nondiscrimination principles, which are already under enormous pressure thanks to AI.

So, a lot of great takeaways to continue working on. We will hopefully publish something that I can share soon, but these were my takeaways from an intense two and a half days of AI discussions.
