“Blood and glass” and the power of Big Tech

A little more than ten years after the start of the Arab Spring — a popular pro-democracy revolution helped along by Facebook and Twitter — the world's largest social media platforms this week banned the US president for inciting deadly violence in the United States.

If ever there were an illustration of the simultaneous promise, peril, and, above all, power of social media to shape our lives and politics, this is it.

Not surprisingly, the Trump ban — and the decision by Apple, Amazon, and Google to expel other right-wing platforms where Trump supporters had plotted violence — has raised a host of thorny questions about how to define free speech, how to regulate tech companies, and what comes next at a delicate and dangerous moment in the "world's oldest democracy." Let's decode some of it.

This isn't, legally speaking, a "free speech" debate. The Bill of Rights in the US Constitution offers no inalienable right to post on Twitter or Facebook, much less to be published, say, by Simon & Schuster. What's more, free speech laws generally stop short of permitting incitement to violence, the primary reason for the tech companies' recent actions.

But it is about the staggering and seemingly arbitrary power of technology companies to shape what is, in practice, the main public square of the 21st century.

Whether or not you agree with the tech companies' decisions here, we don't know much about how those decisions were reached, or by whom. Well beyond Trump's supporters, critics as wide-ranging as German Chancellor Angela Merkel, Russian dissident Alexey Navalny, and the left-wing American Civil Liberties Union pointed out the dangers of arbitrary tech censorship, which could just as easily silence people with far less power and recourse to fight back than the US president.

Part of the reason that this is even an issue is that the tech companies have gotten so big in the first place. If Facebook had 200,000 users rather than 2 billion, it wouldn't matter much. So implicit in all of this is the question, again, of whether and how to regulate tech companies, and whether to reduce their power to control speech and markets in ways that may harm society.

Three regulation models. Globally, there are basically three main approaches to tech regulation at the moment. In China, tech companies — some of the world's largest — are privately run but expected to act as the loyal arms of an authoritarian state, advancing its interests at home and abroad (sometimes even with help from Silicon Valley). In the EU, where by contrast there are very few tech firms of global scale, governments set strict rules on privacy, speech, competition, and transparency which companies must follow in order to gain access to a lucrative market of 500 million relatively high-income people.

Lastly, the US — cradle of what are still the world's most influential tech giants — has taken a hands-off approach: tech companies have until now been left largely to regulate themselves, and enjoy certain protections against liability for material posted on their sites. That light touch is what helped them become giants in the first place.

Where does the US go now? In recent years both mainstream US political parties have warmed to the idea of stronger regulation of tech companies, though for different reasons. Republicans allege liberal bias in Silicon Valley, while Democrats are primarily worried about policing hate speech and protecting privacy.

Last week's events have supercharged both sides' concerns: Republicans are crying foul over the "deplatforming" of their supporters, while top Democrats see those actions as too little, too late. "It took blood and glass in the halls of Congress" for tech firms to act, said Democratic Senator Richard Blumenthal, a leading voice on tech regulatory issues.

Of course, as a result of last week's Georgia Senate runoff, it is now Democrats who will assume (razor-thin) control over Congress along with the White House, putting them in a position to start advancing their vision of what better tech regulation should look like.
