The AI power paradox: Rules for AI's power
Ian Bremmer's Quick Take: Hi everybody, Ian Bremmer here, and a piece to share with you that I've just completed with Mustafa Suleyman, my good friend, the co-founder of DeepMind and now Inflection AI, in Foreign Affairs.
The piece is called "The AI Power Paradox: Can states learn to govern artificial intelligence before it's too late?" It's about the biggest change in power and governance in a very short period of time that I've experienced in my lifetime.
It's how to deal with artificial intelligence. Just a year ago, there wasn't a head of state I would meet with that was asking anything about AI, and now it's in every single meeting. And in part, that's because of how explosive this technology has suddenly become in terms of its staggering upside. I mean, when anyone with a smartphone has access to, you know, global levels of intelligence in education, in health care, in management. Not just access to information and communication, but access to intelligence and the ability to take action with it. That is a game changer for globalization and for productivity of a sort that we've never seen before in such a short period of time, all across the world. And yet those same technologies can be used in very disruptive ways by bad actors all over the world, and not just governments, but organizations and individual people. And that's a big piece of why these leaders are concerned.
They're concerned about whether we can still run elections that are free and fair and that people will believe in. Whether we'll be able to limit the ability of bad actors to develop and distribute malware or bioweapons. And whether intellectual workers, white collar workers, will still have jobs, have productive things to do. But also because the top issues that policymakers care about are themselves affected very dramatically by AI, whether it's how you think about the war with Russia and Russia's ability to be a disruptive actor, or US-China relations and to what extent that continues to be a reasonably stable and constructive interdependent relationship.
And also the United States and other advanced industrial democracies, can they persist as functional democracies given the proliferation of AI? So everyone's worried about it. Everyone has urgency. Very few people know what to do. So a few big takeaways from us in this piece.
The big concept that we think should infuse AI governance is techno-prudentialism. That's a big long word, but it's aligned with macro-prudentialism, the way that global finance has been governed. The idea is that you need to identify and limit risks to global stability in AI without choking off innovation and the opportunities that come from it. And that's the way the Financial Stability Board works, the Bank for International Settlements, the IMF. Despite all of the conflict between the United States and China and the Europeans, they all work together in those institutions. They do it because global finance is too important to allow it to break. It fundamentally needs to be, and is, global. It has the potential to cause systemic contagion and collapse, and everyone wants to work against and mitigate that.
So techno-prudentialism would be applying that to the AI space. With that as a backdrop, we see five principles that should direct AI governance. When you're thinking about governing AI, you want to keep these principles in mind. Number one, the precautionary principle: do no harm. Obvious in the medical field, and it needs to be obvious in the AI field. AI is incredibly suffused with opportunities for global growth, but it's also enormously dangerous. Caution has to be in place, because tinkering with these systems, creating capabilities for regulation, can be incredibly dangerous and can also cut off incredible innovation. So that level of humility, as we think about governing a completely new set of technologies that will change very, very quickly, should be number one.
Number two, agile. Because these technologies are changing so quickly, the institutions and the architecture that you create need to themselves be very flexible. They need to be able to adapt and course correct as AI itself evolves and improves. Usually we put architecture together and it's meant to be as strong and stable as humanly possible, so that nothing can break it. And that also means it usually can't change very much, whether you're talking about the Security Council of the United Nations, or NATO, or the European Union. That's not the way you need to think about AI governance.
Inclusive. It needs to be a hybrid system. Technology companies are the dominant actors in artificial intelligence. They exert fundamental sovereignty. What I call a techno-polar order. And we believe that any institutions that govern AI will have to have both technology companies and governments at the table. That doesn't mean tech companies get equal votes, but they're going to have to be directly and mutually involved in governance because the governments don't have the expertise. They don't understand what these algorithms do, and they're not driving the future.
Impermeable. They have to be global. You can't have slippage when you're talking about technologies that if individual actors have their hands on it and can use it for whatever purposes, that it's incredibly dangerous. They can't be fragmented institutions. They can't be institutions that allow some percentage of AI companies and developers to not be a part of it. They'll have to be easy in and very hard out for the architecture that's created.
And then finally, targeted. This is not one-size-fits-all. AI ends up impacting every sector of the global economy, and very different types of institutions will need to be created for different needs. So those are the principles of AI governance. What kind of institutions do we need? The first, like we have through the United Nations on climate change, the Intergovernmental Panel on Climate Change. We need that for artificial intelligence. We need that so that, with the kinds of models, the data, the training being done, the algorithms being developed and deployed, you have all of the actors in one space sharing the same set of facts, which we don't have right now. Everyone's touching a different piece of the elephant. So, an intergovernmental panel on artificial intelligence.
A second would be a geotechnology stability board. This is a group of both national and technology actors that together can react when dangerous disruptions occur. Weaponization by cyber criminals, or state-sponsored actors, or lone wolves, as will inevitably happen. Those responses will need to be global, because everyone has a huge stake in not allowing these technologies to suddenly undermine governance on the planet. And finally, we're going to need to have some form of US-China collaboration. That looks like the hardest piece to put together right now, because of course we don't even talk about defense matters at a high level at all.
And the politics are moving in a very different direction. But with the Americans and the Soviets, we knew that we had access to these weapons of mass destruction. Even though we hated each other, we knew we had to talk about it, so we didn't blow each other up: about what our capabilities were, and what capabilities we thought were too dangerous to be allowed to develop. That kind of communication needs to happen between the US and China and their top technology actors, especially because not only will some of these technologies be existentially threatening, but lots of them will also very quickly be in the hands of actors that countries with a lot at stake in maintaining the existing system will not want to see empowered. And, you know, it's not that we believe you can set that up today, but rather that you want the principals of governments and corporations to be talking about it now, so that when the first crises start emerging, they will already be prepared in this direction. They will have a toolkit that they will then be able to take out and start working with.
So that's the view of the piece. I suspect we'll be talking about it an awful lot over the course of the coming weeks and months. I hope you find it interesting and worthwhile, and we've got a link to the piece that we'll be sending on. Have a look at it. Talk to you soon. Bye.