GZERO AI Video
GZERO AI is our weekly video series intended to help you keep up and make sense of the latest news on the AI revolution.
Marietje Schaake, International Policy Fellow at Stanford Human-Centered Artificial Intelligence and former European Parliamentarian, co-hosts GZERO AI, our new weekly video series intended to help you keep up with and make sense of the latest news on the AI revolution. Fresh from a workshop hosted by the Institute for Advanced Study in Princeton, where the discussion centered on whether generative AI models should be open to the public or restricted to a select few, she shares insights in this episode into the potential workings, effectiveness, and drawbacks of each approach.
We just finished a half-week workshop that dealt with the billion-dollar question of how to best regulate generative AI. Often this discussion tends to get quite tribal, between those who say, “Well, open models are the best route to safety because they foster transparency and learning for a larger community, which also means scrutiny for things that might go wrong,” and those who say, “No, actually, closed and proprietary models, which can be scrutinized by the handful of companies able to produce them, are safer, because then malign actors may not get their hands on the most advanced technology.”
One of my key takeaways from this workshop, which was kindly hosted by the Institute for Advanced Study in Princeton, is that the question of open versus closed models, and indeed the question of whether or not to regulate at all, is much more of a gradient. There is a big spectrum of considerations between models that are all the way open, and what that means for safety and security, and models that are all the way closed, and what that means for opportunities for oversight, as well as the whole discussion about whether or not to regulate and what good regulation looks like. One discussion we had, for example, was how we can assess the most advanced or frontier models in a research phase with independent, government-mandated oversight, and then decide more deliberately when these new models are safe enough to be put out into the market or the wild.
That way there would be much less of the cutthroat market dynamic that leads companies to push out their latest models for fear that a competitor might be faster. Instead, there would be built-in oversight that considers, first and foremost, what matters for society and for the most vulnerable: anything from national security to election integrity to, for example, nondiscrimination principles, which are already under enormous pressure thanks to AI.
So, a lot of great takeaways to continue working on. We will hopefully publish something that I can share soon, but these were my takeaways from an intense two and a half days of AI discussions.