Can we govern AI before it’s too late?

That’s the question I set out to answer in my latest Foreign Affairs deep dive, penned with Mustafa Suleyman, co-founder and CEO of Inflection AI and one of the world’s leading minds on artificial intelligence.

Just a year ago, there wasn’t a single world leader I’d meet who would bring up AI. Today, there isn’t a single world leader who doesn’t. In this short time, the explosive debut of generative AI systems like ChatGPT and Midjourney signaled the beginning of a new technological revolution that will remake politics, economies, and societies. For better and for worse.

As governments are starting to recognize, realizing AI’s astonishing upside while containing its disruptive – and destructive – potential may be the greatest governance challenge humanity has ever faced. If governments don’t get it right soon, it’s possible they never will.

Why AI needs to be governed

First, a disclaimer: I’m an AI enthusiast. I believe AI will drive nothing less than a new globalization that will give billions of people access to world-leading intelligence, facilitate impossible-to-imagine scientific advances, and unleash extraordinary innovation, opportunity, and growth. Importantly, we’re heading in this direction without policy intervention: The fundamental technologies are proven, the money is available, and the incentives are aligned for full-steam-ahead progress.

At the same time, artificial intelligence has the potential to cause unprecedented social, economic, political, and geopolitical disruption that upends our lives in lasting and irreversible ways.

In the nearest term, AI will be used to generate and spread toxic misinformation, eroding social trust and democracy; to surveil, manipulate, and subdue citizens, undermining individual and collective freedom; and to create powerful digital or physical weapons that threaten human lives. In the longer run, AI could also destroy millions of jobs, worsening existing inequalities and creating new ones; entrench discriminatory patterns and distort decision-making by amplifying bad information feedback loops; or spark unintended and uncontrollable military escalations that lead to war. Farther out on the horizon lie both the promise of artificial general intelligence (AGI) – the still-uncertain point at which AI exceeds human performance at any given task – and the existential (albeit speculative) peril that an AGI could become self-directed, self-replicating, and self-improving beyond human control.

Experts disagree on which of these risks are more important or urgent. Some lie awake at night fearing the prospect of a superpowerful AGI turning humans into slaves. To me, the real catastrophic threat is humans using ever more powerful and available AI tools for malicious or unintended purposes. But it doesn’t really matter: Given how little we know about what AI might be able to do in the future – what kinds of threats it could pose, how severe and irreversible its damages could be – we should prepare for the worst while hoping for (and working toward) the best.

What makes AI so hard to govern

AI can’t be governed like any previous technology because it’s unlike any previous technology. It doesn’t just pose policy challenges; its unique features also make solving those challenges progressively harder. That is the AI power paradox.

For starters, the pace of AI progress is hyper-evolutionary. Take Moore’s Law, which has successfully predicted the doubling of computing power every two years. The new wave of AI makes that rate of progress seem quaint. The amount of computation used to train the most powerful AI models has increased by a factor of 10 every year for the last 10 years. Processing that once took weeks now happens in seconds. Yesterday’s cutting-edge capabilities are running on smaller, cheaper, and more accessible systems today.
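
To see why Moore’s Law looks quaint by comparison, consider a back-of-the-envelope calculation (a rough, illustrative sketch using only the two growth rates cited above, not precise measurements):

```python
# Rough comparison of two growth rates over a 10-year span (illustrative only).
# Moore's Law: computing power doubles roughly every two years.
# AI training compute (per the trend above): grows ~10x per year.

years = 10

moores_law_factor = 2 ** (years / 2)  # 5 doublings -> ~32x
ai_compute_factor = 10 ** years       # ten 10x jumps -> ~10,000,000,000x

print(f"Moore's Law over {years} years: ~{moores_law_factor:,.0f}x")
print(f"AI training compute over {years} years: ~{ai_compute_factor:,.0f}x")
```

Over a decade, Moore’s Law compounds to roughly a 32-fold increase; a tenfold annual increase compounds to a factor of ten billion.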

As their enormous benefits become self-evident, AI systems will only grow bigger, cheaper, and more ubiquitous. And with each new order of magnitude, unexpected capabilities will emerge. Few predicted that training on raw text would enable large language models to produce coherent, novel, and even creative sentences. Fewer still expected language models to be able to compose music or solve scientific problems, as some now can. Soon, AI developers will likely succeed in creating systems capable of quasi-autonomy (i.e., able to achieve concrete goals with minimal human oversight) and self-improvement – a critical juncture that should give everyone pause.

Then there’s the ease of AI proliferation. As with any software, AI algorithms are much easier and cheaper to copy and share (or steal) than physical assets. Although the most powerful models still require sophisticated hardware to work, midrange versions can run on computers that can be rented for a few dollars an hour. Soon, such models will run on smartphones. No technology this powerful has ever become so accessible to so many, so quickly. All this plays out on a global field: Once released, AI models can and will be everywhere. All it takes is one malign or “breakout” model to wreak worldwide havoc.

AI also differs from older technologies in that almost all of it can be characterized as “general purpose” and “dual use” (i.e., having both military and civilian applications). An AI application built to diagnose diseases might be able to create – and weaponize – a new one. The boundaries between the safely civilian and the militarily destructive are inherently blurred. This makes AI more than just software development as usual; it is an entirely new means of projecting power.

As such, its advancement is being propelled by irresistible incentives. Whether for its repressive capabilities, economic potential, or military advantage, AI supremacy is a strategic objective of every government and company with the resources to compete. At the end of the Cold War, powerful countries might have cooperated to arrest a potentially destabilizing technological arms race. But today’s tense geopolitical environment makes such cooperation much harder. From the vantage point of the world’s two superpowers, the United States and China, the risk that the other side will gain an edge in AI is greater than any theoretical risk the technology might pose to society or to their own domestic political authority. This zero-sum dynamic means that Beijing and Washington are focused on accelerating AI development, rather than slowing it down.

But even if the world’s powers were inclined to contain AI, there’s no guarantee they’d be able to, because, like most of the digital world, every aspect of AI is presently controlled by the private sector. I call this arrangement “technopolar,” with technology companies effectively exerting sovereignty over the rules that apply to their digital fiefdoms at the expense of governments. The handful of large tech firms that currently control AI may retain their advantage for the foreseeable future – or they may be eclipsed by a raft of smaller players as low barriers to entry, open-source development, and near-zero marginal costs lead to uncontrolled proliferation of AI. Either way, AI’s trajectory will be largely determined not by governments but by private businesses and individual technologists who have little incentive to self-regulate.

Any one of these features would strain traditional governance models; all of them together render these models inadequate and make the challenge of governing AI unlike anything governments have faced before.

The “technoprudential” imperative

For AI governance to work, it must be tailored to the specific nature of the technology and the unique challenges it poses. But because the evolution, uses, and risks of AI are inherently unpredictable, AI governance can’t be fully specified at the outset. Instead, it must be as innovative, adaptive, and evolutionary as the technology it seeks to govern.

Our proposal? “Technoprudentialism.” That’s a big word, but essentially it means governing AI in much the same way we govern global finance. The idea is that we need a system to identify and mitigate risks to global stability posed by AI before they occur, without choking off innovation and the opportunities that flow from it, and without getting bogged down in everyday politics and geopolitics. In practice, technoprudentialism requires the creation of multiple complementary governance regimes – each with different mandates, levers, and participants – to address the various aspects of AI that could threaten geopolitical stability, guided by common principles that reflect AI’s unique features.

Mustafa and I argue that AI governance needs to be precautionary, agile, inclusive, impermeable, and targeted. Built atop these principles should be a minimum of three AI governance regimes: an Intergovernmental Panel on Artificial Intelligence for establishing facts and advising governments on the risks posed by AI, an arms control-style mechanism for preventing an all-out arms race between them, and a Geotechnology Stability Board for managing the disruptive forces of a technology unlike anything the world has seen.

The 21st century will throw up few challenges as daunting or opportunities as promising as those presented by AI. Whether our future is defined by the former or the latter depends on what policymakers do next.
