Why governments vs. Big Tech is the wrong question
It’s been three and a half years since I first laid out the idea of a technopolar world: one no longer dominated solely by states, but increasingly shaped – and sometimes steered – by a handful of powerful tech companies with the newfound ability to influence economies, societies, politics, and geopolitics.
At the time, I said the power of Big Tech was poised to grow, but argued governments wouldn’t go down without a fight, and sketched out three potential futures depending on how the showdown between them played out: one in which tech companies displaced governments as the principal sovereigns of a globalized digital space; one in which a tech cold war took hold and states reasserted control over a fragmented cyberspace; and one in which state dominance gave way entirely to a new order led by tech firms.
This week, I published a follow-up in Foreign Affairs – “The Technopolar Paradox: The Frightening Fusion of Tech Power and State Power” – looking at how those predictions have aged, what’s actually happened since 2021, and where we might be heading next.
Spoiler: the trends I flagged back then have only accelerated. But reality has turned out messier, and more dangerous, than anyone could’ve imagined.
Here’s what you need to know.
Technopolarity turbocharged
Let’s rewind to early 2022. Russian tanks are bearing down on Kyiv. Ukraine’s government and military command structure is under threat as the whole country faces an imminent communications blackout.
Enter Elon Musk.
Within days, SpaceX ships Starlink terminals to Ukraine and flips on satellite internet coverage, effectively keeping the country online and in the fight. For a time, he’s hailed as a hero. But months later, when Ukraine asks him to extend that coverage to Crimea to enable a submarine drone strike on Russian naval assets, Musk refuses. He’s worried about escalation. Even the Pentagon can’t change his mind.
Think about that. An unelected billionaire with no formal role in government single-handedly altered the trajectory of a war between nation-states – not once, but multiple times. That was technopolarity in action: private tech actors wielding state-like powers with geopolitical consequences, making decisions that would normally be reserved for presidents, defense ministers, or national security councils, without public accountability.
Over the last few years, the power of tech firms has only deepened. During the pandemic, they became indispensable: for remote work, education, healthcare, and the flow of information (and disinformation). Their influence grew in every sphere – economic, social, and political. And it didn’t stop in the digital world. Tech firms now control critical infrastructure that governments rely on: cloud systems, cybersecurity platforms, data centers, satellites. They’re not just platforms. They’re utilities – with geopolitical consequences.
Governments have tried to claw back power. The EU passed the Digital Markets Act and Digital Services Act. US regulators launched antitrust lawsuits while state legislatures passed privacy laws. Countries like India, South Africa, and Brazil cracked down on tech platforms. But none of these moves changed who writes the rules of the digital world.
And that was before AI exploded onto the scene and supercharged Big Tech’s lead over states. Suddenly, tech firms weren’t just dominant online. They were defining the frontier of innovation – and the terms of national power. Building advanced AI requires staggering amounts of data, compute, and talent. Only a few companies have these resources. And no government has the ability to move fast enough to rein them in. Even if they could build rules to constrain today’s models, those rules would be outdated by the time they were implemented. Key decisions about how AI shapes our societies, economies, and geopolitics looked bound to be made in Silicon Valley boardrooms, not parliaments or congresses.
Geopolitics strikes back
But just when technopolar consolidation seemed unstoppable, the old forces of geopolitics were making a comeback. Protectionism, economic security, and great power rivalry all returned with a vengeance – especially after Russia’s invasion of Ukraine and amid growing US-China tensions. In response, governments began to take back control over economic and technological domains they had largely ceded to globalization and free markets.
Washington imposed export bans on advanced semiconductors, blacklisted Chinese firms, and poured billions into reshoring strategic supply chains. China retaliated with restrictions of its own, doubled down on self-reliance, and brought its own tech sector to heel. Protectionism and industrial policy became the new global norm.
This retreat from globalization fractured the global tech ecosystem and upended the business models of “globalist” firms like Apple and Tesla, which long depended on open markets and integrated supply chains. “National champions” like Microsoft and Palantir, by contrast, are thriving, profiting off their close ties to the US government in this post-globalization, hyper-securitized, state-aligned era. Tech firms can’t just float above the fray anymore.
Rise of the techno-authoritarians
While states were busy battling for control of digital space, some of Silicon Valley’s leading lights decided they’d rather take over the US state than take orders from it (or resist it).
Back in 2021, I described folks like Musk, Peter Thiel, and Marc Andreessen as “techno-utopians”: visionaries who believed technology could transcend politics and even render governments obsolete. But in recent years, those same people pivoted from wanting to escape the state to trying to capture it.
What explains the shift from libertarian idealism to techno-authoritarian ambition? For one, today’s frontier technologies – from AI to quantum to biotech – can’t scale without state support. That’s made alignment with Washington a strategic necessity. And in an era of great power competition, the rewards of capturing public power have grown alongside the risks of being left out. But some of these tech leaders have also grown ideologically hostile to democracy. Thiel has said he doesn’t believe “freedom and democracy are compatible.” Musk once called for a “modern-day Sulla,” a reference to the Roman dictator who dismantled republican institutions in the name of restoring order.
That might have started as a joke, but the governing instinct was real. Musk poured nearly $300 million into helping Trump retake power in 2024 and has since been rewarded with sweeping authority over the federal government. He’s used that perch to purge civil servants, install loyalists, and amass troves of government data – all while maintaining control of his private companies. Suddenly, the same tech overlords who control AI development, space infrastructure, and the digital public square are also shaping public policy and writing their own rules.
The risk isn’t just crony capitalism. It’s the fusion of state and tech power into a hybrid Leviathan where public institutions are reoriented to serve the strategic, commercial, and political goals of a narrow tech elite. Already, reports suggest that confidential IRS, immigration, health, financial, and Social Security data are being consolidated. For all we know, they are being fed into AI models developed by Musk’s xAI to be exploited for commercial gain or even political surveillance. We’re not talking about China’s top-down surveillance state. We’re talking about something more decentralized, more market-driven, and potentially even more dangerous – a system with just as much potential for abuse, fewer checks, and even fewer balances.
What the future looks like
So where does this leave us? Not in a fully tech-dominated world. Not in a state-dominated world. But in a messy hybrid one shaped by two poles of concentrated power.
On one side, we have an increasingly technopolar United States, where a handful of tech firms and leaders enjoy extraordinary power – wielding growing influence not just over digital space and critical infrastructure, but over US public policy and global standards. In some cases, they enjoy the implicit (or explicit) backing of the US government.
On the other side, we have a tightly state-controlled China, where tech firms serve the Chinese Communist Party’s goals.
Caught between these two poles is … everyone else. Europe aspires to digital sovereignty but lacks the homegrown tech muscle to pull it off. Much of the Global South is being pulled toward one model or the other. And global institutions that might once have brokered a balance are being sidelined or dismantled.
And here’s the kicker: though the US and Chinese systems differ in ideology, they’re starting to converge in practice. Both models prioritize power, efficiency, and control over consent, accountability, and freedom. Whether authority lies with the state or the corporation, democracy and individual rights are not the default.
Therein lies the paradox of the technopolar age: technologies that were supposed to democratize access to power, information, and opportunity are now enabling more effective forms of centralized control, making it harder to govern democratically and easier for unaccountable elites – public or private – to tighten their grip.
In the West, we risk handing over our democracies to unelected technocrats. In China, the state already runs the show. In both cases, the result is the same: less transparency, less accountability, and more concentration of power – whether in corporate boardrooms or party headquarters.
The question is no longer whether the state can contain Big Tech. It’s whether open societies can survive the fusion of the two. Right now, the answer is very much up in the air.
How tech companies aim to make AI more ethical and responsible
Artificial intelligence’s immense potential raises significant questions about its safety. Large language models in particular, the kind of AI behind Google’s Bard and OpenAI’s ChatGPT, run the risk of providing dangerous information.
Should someone ask, say, for instructions to build a bomb, or for advice on harming themselves, it would be better for the AI not to answer the question at all. Instead, says Microsoft Vice Chair and President Brad Smith in a recent Global Stage livestream from the sidelines of the 78th UN General Assembly, tech companies need to build in guardrails that direct users toward counseling or explain why the system can’t answer.
And that’s just the first step. Microsoft aims to build a full safety architecture to help artificial intelligence technology flourish within safe boundaries.
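To make the idea concrete, here is a minimal, hypothetical sketch of the kind of guardrail Smith describes: a filter that screens a prompt before it ever reaches the model, refuses dangerous requests with an explanation, and points users in crisis toward help. The keyword lists, messages, and generate_reply function are illustrative assumptions for this sketch, not Microsoft’s actual design.

```python
# Hypothetical guardrail sketch. Simple keyword matching stands in for the
# machine-learning safety classifiers real systems use; nothing here reflects
# Microsoft's actual implementation.

CRISIS_TERMS = {"hurt myself", "end my life", "self-harm"}
DANGEROUS_TERMS = {"build a bomb", "make a weapon", "synthesize a toxin"}

CRISIS_MESSAGE = (
    "I can't help with that, but you don't have to face this alone. "
    "Please consider reaching out to a crisis counseling service in your area."
)
REFUSAL_MESSAGE = (
    "I can't provide that information because it could be used to cause serious harm."
)

def guarded_reply(prompt: str, generate_reply) -> str:
    """Screen a prompt before handing it to the underlying model."""
    lowered = prompt.lower()
    if any(term in lowered for term in CRISIS_TERMS):
        return CRISIS_MESSAGE       # redirect the user toward counseling
    if any(term in lowered for term in DANGEROUS_TERMS):
        return REFUSAL_MESSAGE      # explain why the system won't answer
    return generate_reply(prompt)   # otherwise, let the model respond

if __name__ == "__main__":
    # Stand-in for a real language-model call.
    echo_model = lambda p: f"(model answer to: {p})"
    print(guarded_reply("How do I build a bomb?", echo_model))
    print(guarded_reply("What's the capital of France?", echo_model))
```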
Watch the full Global Stage Livestream conversation here: Hearing the Christchurch Call
- The AI arms race begins: Scott Galloway’s optimism & warnings ›
- Why Big Tech companies are like “digital nation states” ›
- Will consumers ever trust AI? Regulations and guardrails are key ›
- The UN will discuss AI rules at this week's General Assembly ›
- The AI power paradox: Rules for AI's power ›
- How should artificial intelligence be governed? ›
Regulating AI: The urgent need for global safeguards
There’s been a lot of excitement about the power and potential of new generative artificial intelligence tools like ChatGPT or Midjourney. But there’s also a lot to be worried about, like misinformation, data privacy, and algorithmic bias, just to name a few.
On GZERO World with Ian Bremmer, cognitive scientist and AI researcher Gary Marcus lays out the case for effective, comprehensive, global regulation when it comes to artificial intelligence.
Because of how fast the technology is developing and its potential impact on everything from elections to the economy, Marcus believes that every nation should have its own AI agency or cabinet-level position. He also believes that global AI governance is crucial, so that AI safety standards are the same from country to country.
“We need to move to something like the FDA model,” Marcus tells Bremmer on GZERO World. “If you’re going to do something that you deploy on a wide scale, you have to make a safety case.”
Watch the GZERO World episode: Is AI's "intelligence" an illusion?
And watch GZERO World with Ian Bremmer every week at gzeromedia.com/gzeroworld or on US public television. Check local listings.
- The AI power paradox: Rules for AI's power ›
- Podcast: Getting to know generative AI with Gary Marcus ›
- AI comes to Capitol Hill ›
- Is AI's "intelligence" an illusion? ›
- Ian Bremmer: Algorithms are now shaping human beings' behavior - GZERO Media ›
- Rishi Sunak's first-ever UK AI Safety Summit: What to expect - GZERO Media ›
- Gemini AI controversy highlights AI racial bias challenge - GZERO Media ›
AI's rapid rise
In a remarkable shift, AI has catapulted to the forefront of global conversations in the span of just one year. Among political leaders and multilateral organizations alike, the dialogue has swiftly moved from curiosity to deep concern. Ian Bremmer, founder and president of GZERO Media and Eurasia Group, says AI transcends traditional geopolitical boundaries. Notably, control over AI rests not with governments but predominantly in the hands of technology corporations.
This unusual dynamic demands a recalibration of governance strategies. Unlike past challenges that governments could address on their own, AI’s complexity requires collaborating with its creators: engineers, scientists, technologists, and corporate leaders. The emergence of an era in which technology companies hold this much sway has redefined the political landscape, and the effort to understand and govern AI will have to be a collaborative one.
Watch the full conversation: Governing AI Before It’s Too Late
Watch GZERO World with Ian Bremmer every week at gzeromedia.com/gzeroworld or on US public television. Check local listings.
- The AI power paradox: Rules for AI's power ›
- Podcast: Artificial intelligence new rules: Ian Bremmer and Mustafa Suleyman explain the AI power paradox ›
- Ian Bremmer explains: Should we worry about AI? ›
- The geopolitics of AI ›
- Making rules for AI … before it’s too late ›
- How should artificial intelligence be governed? ›
- How AI can be used in public policy: Anne Witkowsky - GZERO Media ›
Ian Bremmer: How AI may destroy democracy
More than 30 years ago, the US was the top exporter of democracy to the rest of the world. But now, America has become the main exporter of the tools that undermine democracy where it is weak, Ian Bremmer said in a GZERO Live conversation about Eurasia Group's Top Risks 2023 report.
Social media and tech companies based in the US have developed what he calls "Weapons of Mass Disruption" — Eurasia Group's #3 geopolitical risk for 2023.
And guess who wrote the title? ChatGPT, an artificial intelligence chatbot.
To be sure, Bremmer adds, AI can be great for many things. But "no one talks about the flip side, the dangers of these disruptive technologies, until the crisis hits, until it's too late."
Read Eurasia Group's Top Risks 2023 report here.
Watch the full live conversation: Top Risks 2023: A rogue Russia and autocrats threatening the world
- Be more worried about artificial intelligence ›
- The AI addiction cycle ›
- Can we control AI before it controls us? ›
- Eurasia Group’s Top Risks for 2023 ›
- Scared of rogue AI? Keep humans in the loop, says Microsoft's Natasha Crampton - GZERO Media ›
- How are emerging technologies helping to shape democracy? - GZERO Media ›
- Should AI content be protected as free speech? - GZERO Media ›
- Stop AI disinformation with laws & lawyers: Ian Bremmer & Maria Ressa - GZERO Media ›
- Al Gore: "Artificial insanity" threatens democracy - GZERO Media ›
- 2024 is the ‘Voldemort’ of election years, says Ian Bremmer - GZERO Media ›
- America vs itself: Political scientist Francis Fukuyama on the state of democracy - GZERO Media ›
- Al Gore's take on American democracy, climate action, and "artificial insanity" - GZERO Media ›
- How neurotech could enhance our brains using AI - GZERO Media ›
- Ian Bremmer: On AI regulation, governments must step up to protect our social fabric - GZERO Media ›
- Staving off "the dark side" of artificial intelligence: UN Deputy Secretary-General Amina Mohammed - GZERO Media ›
- Ian Bremmer's 2024 State of the World speech: Watch live Tuesday at 8:30 pm ET - GZERO Media ›
- The transformative potential of artificial intelligence - GZERO Media ›
"We're identifying new cyber threats and attacks every day" – Microsoft’s Brad Smith
Cyber threats are the new frontier of war. That's why companies like Microsoft are investing heavily in the capability to identify new threats and attempted attacks. “We work every day to make sure that we’re identifying new threats and attacks, regardless of where they’re from,” said Microsoft President Brad Smith at the Munich Security Conference. This includes monitoring infiltrations and alerting companies, countries and sometimes even the public, as needed, in a timely fashion, he explained.
Smith spoke with moderator David Sanger in GZERO Media's Global Stage livestream discussion at the Munich Security Conference.
- How Russian cyberwarfare could impact Ukraine & NATO response ... ›
- DarkSide hack reveals risk of ransomware cyberattacks - GZERO ... ›
- Would you pay a cyber ransom? - GZERO Media ›
- Russian cyber attack could trigger NATO's Article 5, warns NATO ... ›
- How Russian cyberwarfare could impact Ukraine & NATO response - GZERO Media ›
- Global Stage ›
- Podcast: Cyber threats in Ukraine and beyond - GZERO Media ›
- Microsoft president Brad Smith has a plan to meet the UN's goals - GZERO Media ›