Regulate AI: Sure, but how?

U.S. President Joe Biden, Governor of California Gavin Newsom and other officials attend a panel on Artificial Intelligence, in San Francisco, California.
REUTERS/Kevin Lamarque

AI revolutionaries like OpenAI CEO Sam Altman want government regulation, and they want it now – before things get out of hand.

The challenges are many, but they include AI-generated disinformation, harmful biases that wind up baked into AI algorithms, the problem of copyright infringement when AI uses other people’s work as inputs for its own “original” content, and, yes, the “Frankenstein” risk of AI-powered computers or weapons somehow rebelling against their human masters.

But how to regulate AI is a big question. Broadly, there are three main schools of thought on this. Not surprisingly, they correspond to the world’s three largest economic poles — China, the EU, and the US, each of which has its own unique political and economic circumstances.

China, an authoritarian state making an aggressive push to be a global AI leader, has adopted strict regulations meant both to boost the trust and transparency of Chinese-built AI and to give the government ample leeway to police companies and content.

The EU, which is the world’s largest developed market but has few heavyweight tech firms of its own, is taking a “customer-first” approach that strictly polices privacy and transparency while regulating the industry based on categories of risk. That is, an AI judge in a trial deserves much tighter regulation than a program that simply makes you uncannily good psychedelic oil paintings of capybaras.

The US is lagging. Washington wants to minimize the harms that AI can cause, but without stifling the innovative brio of its industry-leading tech giants. This is all the more important since those tech giants are on the front lines of Washington’s broader battle with China for global tech supremacy.

The bigger picture: This isn’t just about what happens in the US, EU, and China. It’s also a three-way race to develop regulatory models that the rest of the world adopts too. So far, Brussels and Beijing are in the lead. Your move, Washington.
