AI's rapid rise
In a remarkable shift, AI has catapulted to the forefront of global conversations in the span of just one year. From political leaders to multilateral organizations, the dialogue has swiftly moved from mere curiosity to deep-seated concern. Ian Bremmer, founder and president of GZERO Media and Eurasia Group, says AI transcends traditional geopolitical boundaries. Notably, control over AI rests not with governments but predominantly with technology corporations.
This unconventional dynamic prompts a recalibration of governance strategies. Unlike past challenges that could be addressed in isolation, AI's complexity necessitates collaboration with its creators—engineers, scientists, technologists, and corporate leaders. The emergence of a new era, where technology companies hold significant sway, has redefined the political landscape. The journey to understand and govern AI is a collaborative endeavor that promises both learning and transformation.
Watch the full conversation: Governing AI Before It’s Too Late
Watch GZERO World with Ian Bremmer every week at gzeromedia.com/gzeroworld or on US public television. Check local listings.
AI governance: Cultivating responsibility
Mustafa Suleyman, a prominent voice in the AI landscape and CEO and co-founder of Inflection AI, contends that effective regulation goes beyond legal frameworks: it requires a culture of self-regulation and regulators who genuinely understand the technology. Today's AI leaders exhibit a unique blend of optimism and caution, recognizing both the transformative promise and the pitfalls of AI technologies. Suleyman underscores how different this moment is from the era of social media dominance.
This time, AI leaders have been proactive in raising concerns and questions about the technology's impact. Balancing innovation's pace with prudent safeguards is the goal, acknowledging that through collective efforts, the benefits of AI can far outweigh its drawbacks. Suleyman highlights that advanced AI models are increasingly controllable and capable of producing desired, safe outputs. He encourages external oversight and welcomes regulation as a proactive and thoughtful measure. The message is clear: the path to harnessing AI's power lies in fostering a culture of responsible development and collaborative regulatory action.
Insights on AI governance and global stability
Ian Bremmer and Mustafa Suleyman, CEO and co-founder of Inflection AI, delve into the realm of AI governance and its vital role in shaping our rapidly evolving world. Just as macro-prudential policies govern global finance, society now finds itself in need of techno-prudential policies to ensure that artificial intelligence flourishes without compromising global stability. AI presents multi-faceted challenges, including disinformation, technology proliferation, and the urgent need to strike a balance between innovation and risk management.
A vision for inclusive AI governance
Casting a spotlight on the intricate landscape of AI governance, Ian Bremmer, president and founder of GZERO Media and Eurasia Group, and Mustafa Suleyman, CEO and co-founder of Inflection AI, articulate the pressing need for collaboration among governments, advanced industrial players, corporations, and a diverse spectrum of stakeholders in the AI domain. The exponential pace of this technological evolution demands a united front, and the stakes have never been higher: there is urgency in getting AI governance right, and the perils of getting it wrong could be catastrophic. While tech giants acknowledge this necessity, they remain engrossed in their own domains, underscoring the need for collective action.
Mustafa vividly illustrates the competitive dynamics among AI developers vying for supremacy, stressing that cooperation between corporations and governments is pivotal. Ian emphasizes the existing techno-polar world and the importance of inclusivity in shaping AI's trajectory. The discussion makes clear that the way forward is not confined to legislative channels; it must also involve non-governmental organizations, academics, critics, and civil society. Mustafa argues that diversity and inclusivity breed resilience. The duo makes a compelling case for stakeholder collaboration, drawing a parallel between their own alignment and a potential accord between major tech leaders and governments.
Podcast: Artificial intelligence new rules: Ian Bremmer and Mustafa Suleyman explain the AI power paradox
Listen: Dive into the world of artificial intelligence in our new GZERO World podcast episode. Ian Bremmer, founder of Eurasia Group and GZERO Media, teams up with Mustafa Suleyman, CEO of Inflection AI, to discuss their groundbreaking article, “The AI Power Paradox,” recently published in Foreign Affairs magazine. Uncover the explosive growth and potential risks of generative AI, and explore Ian and Mustafa’s five proposed principles for effective AI governance. Join host Evan Solomon as he delves into the crucial conversation about regulating AI before it spirals out of control, without stifling innovation along the way. Tune in for insights on technology, politics, and securing our global future.
The AI power paradox: Rules for AI's power
Ian Bremmer's Quick Take: Hi everybody, Ian Bremmer here, and I have a piece to share with you that I've just completed with my good friend Mustafa Suleyman, co-founder of DeepMind and now Inflection AI, in Foreign Affairs.
The piece is called “The AI Power Paradox: Can states learn to govern artificial intelligence before it’s too late?” It's about the biggest change in power and governance that I've experienced in such a short period of time in my lifetime.
It's how to deal with artificial intelligence. Just a year ago, there wasn't a head of state I would meet with that was asking anything about AI, and now it's in every single meeting. And in part, that's because of how explosive this technology has suddenly become, in terms of its staggering upside. I mean, anyone with a smartphone now has access to, you know, global levels of intelligence in an education field, in a health care field, in a managerial field. Not just access to information and communication, but access to intelligence and the ability to take action with it. That is a game changer for globalization and for productivity of a sort we've never seen before in such a short period of time, all across the world. And yet those same technologies can be used in very disruptive ways by bad actors all over the world, and not just governments, but organizations and individual people. And that's a big piece of why these leaders are concerned.
They're concerned about whether we can still run an election that's free and fair, one that people will believe in. Will we be able to limit the ability of bad actors to develop and distribute malware or bioweapons? Will white-collar workers still have jobs, still have productive things to do? And they're concerned because the top issues policymakers care about are also affected very dramatically by AI, whether it's the war with Russia and Russia's ability to be a disruptive actor, or US-China relations and to what extent that continues to be a reasonably stable and constructive interdependent relationship.
And also the United States and other advanced industrial democracies, can they persist as functional democracies given the proliferation of AI? So everyone's worried about it. Everyone has urgency. Very few people know what to do. So a few big takeaways from us in this piece.
The big concept that we think should infuse AI governance is techno-prudentialism. That's a big, long word, but it's aligned with macro-prudentialism and with the way that global finance has been governed: the idea that you need to identify and limit risks to global stability in AI without choking off innovation and the opportunities that come from it. That's the way the Financial Stability Board works, and the Bank for International Settlements, and the IMF. Despite all of the conflict between the United States, China, and the Europeans, they all work together in those institutions. They do it because global finance is too important to allow it to break. It fundamentally needs to be, and is, global. It has the potential to cause systemic contagion and collapse, and everyone wants to work against and mitigate that.
So techno-prudentialism would be applying that to the AI space. With that as a backdrop, we see five principles that should direct AI governance, principles you want to keep in mind when you're thinking about governing AI. Number one, the precautionary principle: do no harm. It's obvious in the medical field, and it needs to be obvious in the AI field. AI is incredibly suffused with opportunities for global growth, but also enormously dangerous. Caution has to be in place, because tinkering with these institutions and creating capabilities for regulation can be incredibly dangerous, and can also cut off incredible innovation. So that level of humility, as we think about governing a completely new set of technologies that will change very, very quickly, should be number one.
Number two, agile. Because these technologies are changing so quickly, the institutions and the architecture that you create need to themselves be very flexible. They need to be able to adapt and course correct as AI itself evolves and improves. Usually we put architecture together that's meant to be so strong and stable that nothing can break it, and that also means it usually can't change very much, whether you're talking about the Security Council of the United Nations, or NATO, or the European Union. That's not the way you need to think about AI governance.
Number three, inclusive. It needs to be a hybrid system. Technology companies are the dominant actors in artificial intelligence; they exert fundamental sovereignty in what I call a techno-polar order. And we believe that any institutions that govern AI will have to have both technology companies and governments at the table. That doesn't mean tech companies get equal votes, but they're going to have to be directly and mutually involved in governance, because the governments don't have the expertise. They don't understand what these algorithms do, and they're not driving the future.
Number four, impermeable. These institutions have to be global. You can't have slippage when you're talking about technologies that are incredibly dangerous if individual actors get their hands on them and can use them for whatever purposes they want. They can't be fragmented institutions, and they can't be institutions that allow some percentage of AI companies and developers to opt out. It will have to be easy in and very hard out for the architecture that's created.
And then finally, number five, targeted. This is not one-size-fits-all. AI ends up impacting every sector of the global economy, and very different types of institutions will need to be created for different needs. So those are the principles of AI governance. What kind of institutions do we need? The first is like what we have through the United Nations on climate change, the Intergovernmental Panel on Climate Change. We need that for artificial intelligence: all of the actors in one space, sharing the same set of facts about the models, the data, the training that is being done, and the algorithms that are being developed and deployed, which we don't have right now. Everyone's touching different pieces of the elephant. So, an intergovernmental panel on artificial intelligence.
A second would be a geotechnology stability board. This is the group of both national and technology actors that together can react when dangerous disruptions occur: weaponization by cyber criminals, or state-sponsored actors, or lone wolves, as will inevitably happen. Those responses will need to be global, because everyone has a huge stake in not allowing these technologies to suddenly undermine governance on the planet. And finally, we're going to need some form of US-China collaboration, which looks like the hardest piece to put together right now, because of course we don't even talk on defense matters at a high level at all.
And the politics are pushing in a very different direction. But with the Americans and the Soviets, we knew that we both had access to weapons of mass destruction, and even though we hated each other, we knew we had to talk about what our capabilities were and what capabilities we thought were too dangerous to develop, so we didn't blow each other up. That kind of communication needs to happen between the US and China and its top technology actors, especially because not only will some of these technologies be existentially threatening, but lots of them will very quickly be in the hands of actors that developed countries, with a lot at stake in maintaining the existing system, will not want to see wielding them. And, you know, it's not that we believe you can set all of that up today, but rather that you want the principals of governments and corporations to be talking about it now, so that when the first crises start emerging, they will already be prepared in this direction. They will have a toolkit that they can take out and start working with.
So that's the view of the piece. I suspect we'll be talking about it an awful lot over the coming weeks and months. I hope you find it interesting and worthwhile. We've got a link to the piece that we'll be sending on; have a look at it. Talk to you soon. Bye.
Governing AI Before It’s Too Late
The explosion of generative AI we’ve seen since November 2022 has been a game changer in both technology and politics, capable of bringing enormous growth and productivity but also the potential for great peril. How can AI be regulated and governed before it’s too late? That’s the topic of a new collaboration between our own Ian Bremmer, founder and president of Eurasia Group and GZERO Media, and Mustafa Suleyman, CEO and co-founder of Inflection AI.
Together they penned an article for the September issue of Foreign Affairs magazine that details a plan to create a global framework around these fast-moving and evolving technologies. The two describe the need for five basic principles of governance, and for new global organizations that can monitor and mitigate risk without stifling growth.
In this special report for GZERO Media, Bremmer and Suleyman join GZERO’s publisher Evan Solomon to take a deep and critical look at where AI is today, where it is going, and how to prevent it from becoming ungovernable.
Making rules for AI … before it’s too late
Ian Bremmer, the founder and president of Eurasia Group and GZERO Media, has joined forces with tech entrepreneur Mustafa Suleyman, the CEO and co-founder of Inflection AI, to take on one of the big questions of this moment in human history: “Can states learn to govern artificial intelligence before it’s too late?”
Their answer is yes. In the next edition of “Foreign Affairs,” already available online here, they’ve offered a series of ideas and principles they say can help world leaders meet this challenge.
Here’s a summary of their argument.
Artificial intelligence will open our lives and societies to impossible-to-imagine scientific advances, unprecedented access to technology for billions of people, toxic misinformation that disrupts democracies, and real economic upheaval.
In the process, it will trigger a seismic shift in the structure and balance of global power.
That’s why AI’s creators have become crucial geopolitical actors. Their sovereignty over AI further entrenches an emerging “techno-polar” international order, one in which governments must compete with tech companies for control over these fast-emerging technologies and their continuous development.
Governments around the world are (slowly) awakening to this challenge, but their attempts to use existing laws and rules to govern AI won’t work, because the complexity of these technologies and the speed of their advance will make it nearly impossible for policymakers and regulators to keep pace.
Policymakers now have a short time in which to build a new governance model to manage this historic transition. If they don’t move quickly, they may never catch up.
If global AI governance is to succeed, the international system must move past traditional ideas of sovereignty by welcoming tech companies to the planning table. More importantly, AI’s unique features must be reflected in its governance.
There are five key principles to follow. AI governance must be:
- Precautionary: First, rules should do no harm.
- Agile: Rules must be flexible enough to evolve as quickly as the technologies do.
- Inclusive: The rule-making process must include all actors, not just governments, that are needed to intelligently regulate AI.
- Impermeable: Because a single bad actor or breakaway algorithm can create a universal threat, rules must be comprehensive.
- Targeted: Rules must be carefully targeted, rather than one-size-fits-all, because AI is a general-purpose technology that poses multidimensional threats.
Building on these principles, a strong “techno-prudential” model, something akin to the macroprudential role played by global financial institutions like the International Monetary Fund in overseeing risks to the financial system, would mitigate the societal risks posed by AI. It would also ease tensions between China and the United States by reducing the extent to which AI remains an arena, and a tool, of geopolitical competition.
The techno-prudential mandate proposed here would see at least three overlapping governance regimes for different aspects of AI.
The first, like the UN’s Intergovernmental Panel on Climate Change, would be a global scientific body that can objectively advise governments and international bodies on basic definitional questions about AI.
The second would resemble the monitoring and verification approaches of arms control regimes to prevent an all-out arms race between countries like the US and China.
The third, a Geotechnology Stability Board, would, as financial authorities do with monetary policy today, manage the disruptive forces of AI.
Creating several institutions with different competencies to address different aspects of AI will give us the best chance of limiting AI risk without blocking the innovation that can change all our lives for the better.
So, GZERO reader, what do you think? Can and should this path be followed? Share your thoughts with us here.