Ian Bremmer's Quick Take: Hi, everybody. Ian Bremmer here. A happy Monday to you. And a Quick Take today on artificial intelligence and how we think about it geopolitically. I'm an enthusiast, I want to be clear. I am more excited about AI as a new development to drive growth and productivity for 8 billion of us on this planet than anything I have seen since the internet, maybe since we invented the semiconductor. It's extraordinary how much it will apply human ingenuity faster and more broadly to challenges that exist today and to those we don't even know about yet. But I'm not concerned about the upside, in the sense that a huge amount of money is being devoted to those companies, and the people who run them are working as fast as they humanly can to get better, to unlock those opportunities, and to beat their competitors there. What worries me is the harder part: we're not spending the resources on the consequences of artificial intelligence that will be most upsetting for populations, the ones that will require some level of government and other intervention, or else. And I see four of them.
First is disinformation. We know that AI bots can be very confident, and they're also frequently very wrong. And if you can no longer discern an AI bot from a human being in text, and very soon in audio and in video, then you can no longer discern truth from falsehood. That is not good news for democracies. It's actually good news for authoritarian countries, which deploy artificial intelligence for their own political stability and benefit. But in a country like the United States, Canada, Japan, or across Europe, it's much more deeply corrosive. And I think that unless we are able to put very clear labeling and restrictions on what is AI and what is not, we're going to be in very serious trouble in terms of the erosion of our institutions, much faster than anything we've seen through social media, cable news, or any of the other challenges we've had in the information space.
Secondly, and relatedly, is proliferation: proliferation of AI technologies by bad actors, or by tinkerers who lack the knowledge and are indifferent to the chaos they may sow. Today we are in an environment where about a hundred human beings have both the knowledge and the technology to deploy a smallpox virus. Don't do that, right? But very soon, with AI, those numbers are going way up. And not just in terms of creating new dangerous viruses or lethal autonomous drones, but also in the ability to write malware and deploy it to take money from people, to destroy institutions, or to undermine an election. Putting all of these things in the hands not just of a small number of governments, but of any individual with a laptop and a little bit of programming skill, is going to make it a lot harder to respond effectively. We saw some of this with the offensive cyber threat, which of course then created big security industries to respond, and lots of costs. That's what we're going to see with AI, but in every field.
Then you have the displacement risk. A lot of people have talked about this: a whole bunch of people who no longer have productive jobs because AI replaces them. I'm not particularly worried about this in the macro setting, in the sense that I believe the number of new jobs created, many of which we can't even imagine right now, plus the number of existing jobs that become much more productive because they use AI effectively, will outweigh the jobs lost to artificial intelligence. But these things are going to happen at the same time. And unless you have policies in place that help retrain, and also economically take care of, the people who are displaced in the near term, those people get angrier. They become much more supportive of anti-establishment politicians and come to feel that their existing political leaders are illegitimate. We've seen this through free trade and the hollowing out of middle classes. We've seen it through automation and robotics. It's going to be a lot faster and a lot broader with AI.
And then finally, the one I worry about the most, and the one that doesn't get enough attention, is the replacement risk: the fact that so many human beings will replace relationships they have with other human beings with AI. They may be doing this knowingly or unknowingly. But certainly I see how much, even with early-stage AI bots, people are developing actual relationships with these things, particularly young people. And we as humans need communities, families, and parents who care about us and take care of us in order to become socially adaptable animals.
And when that's happening through artificial intelligence that not only doesn't care about us, but also doesn't have human beings as its principal interest (the principal interest is the business model, and the human beings are very much subsidiary and not necessarily aligned), that creates a lot of dysfunction. I fear that a level of dehumanization could come very, very quickly, especially for young people, through addictions and antisocial relationships with AI, which we'll then try to fix through AI bots that can do therapy. That is a direction we really don't want to head in on this planet. We would be doing real-time experimentation on human beings. We never do that with a new GMO food. We never do that with a new vaccine, even when we're facing a pandemic. We shouldn't be doing it with our brains, with our persons, with our souls. And I hope that gets addressed real fast.
So anyway, that's a little bit from me on the geopolitics of AI, something I'm writing about and thinking about a lot these days. I hope everyone's well, and I'll talk to you all real soon.