Making rules for AI … before it’s too late

Ian Bremmer, the founder and president of Eurasia Group and GZERO Media, has joined forces with tech entrepreneur Mustafa Suleyman, the CEO and co-founder of Inflection AI, to take on one of the big questions of this moment in human history: “Can states learn to govern artificial intelligence before it’s too late?”

Their answer is yes. In the next edition of “Foreign Affairs,” already available online here, they’ve offered a series of ideas and principles they say can help world leaders meet this challenge.

Here’s a summary of their argument.

Artificial intelligence will open our lives and societies to impossible-to-imagine scientific advances, unprecedented access to technology for billions of people, toxic misinformation that disrupts democracies, and real economic upheaval.

In the process, it will trigger a seismic shift in the structure and balance of global power.

That’s why AI’s creators have become crucial geopolitical actors. Their sovereignty over AI further entrenches an emerging “techno-polar” international order, one in which governments must compete with tech companies for control over these fast-emerging technologies and their continuous development.

Governments around the world are (slowly) awakening to this challenge, but their attempts to use existing laws and rules to govern AI won’t work, because the complexity of these technologies and the speed of their advance will make it nearly impossible for policymakers and regulators to keep pace.

Policymakers now have a short time in which to build a new governance model to manage this historic transition. If they don’t move quickly, they may never catch up.

If global AI governance is to succeed, the international system must move past traditional ideas of sovereignty by welcoming tech companies to the planning table. More importantly, AI’s unique features must be reflected in its governance.

There are five key principles to follow. AI governance must be:

  1. Precautionary: First, rules should do no harm.
  2. Agile: Rules must be flexible enough to evolve as quickly as the technologies do.
  3. Inclusive: The rule-making process must include all the actors needed to intelligently regulate AI, not just governments.
  4. Impermeable: Because a single bad actor or breakaway algorithm can create a universal threat, rules must be comprehensive.
  5. Targeted: Rules must be carefully targeted, rather than one-size-fits-all, because AI is a general-purpose technology that poses multidimensional threats.

Building on these principles, a strong “techno-prudential” model (something akin to the macroprudential role that global financial institutions like the International Monetary Fund play in overseeing risks to the financial system) would mitigate the societal risks posed by AI. It would also ease tensions between China and the United States by reducing the extent to which AI remains both an arena and a tool of geopolitical competition.

The techno-prudential mandate proposed here would require at least three overlapping governance regimes, each addressing a different aspect of AI.

The first, like the UN’s Intergovernmental Panel on Climate Change, would be a global scientific body that can objectively advise governments and international bodies on basic definitional questions about AI.

The second would resemble the monitoring and verification approaches of arms control regimes to prevent an all-out arms race between countries like the US and China.

The third, a Geotechnology Stability Board, would manage the disruptive forces of AI, much as financial authorities manage monetary policy today.

Creating several institutions with different competencies to address different aspects of AI will give us the best chance of limiting AI risk without blocking the innovation that can change all our lives for the better.

So, GZERO reader, what do you think? Can and should this path be followed? Share your thoughts with us here.
