Did the US steal the UK’s AI thunder?

Art courtesy of Midjourney
World leaders gathered last week at Bletchley Park, the former headquarters of Britain’s codebreakers during World War II, to make sense of what is perhaps the most important emerging technology. British Prime Minister Rishi Sunak played host, attempting to position the United Kingdom at the forefront of regulating artificial intelligence.

By many accounts, his summit was a success. Sunak brought together an impressive group of world leaders — US Vice President Kamala Harris, UN Secretary-General António Guterres, European Commission President Ursula von der Leyen, and even China’s Vice Minister of Science and Technology Wu Zhaohui. Industry leaders such as Tesla chief Elon Musk, OpenAI CEO Sam Altman, and Microsoft President Brad Smith also attended.

The summit’s big achievement? The Bletchley Declaration, a 28-country commitment to develop and deploy AI in a way that’s “human-centric, trustworthy, and responsible.” In other words, it’s a promise to use the technology for good and not evil. Experts say Sunak’s ability to get China on board was particularly laudable, but the agreement itself is more a statement of intent than anything with teeth.

Sunak may have earned plaudits from his star-studded summit, but one of the key no-shows stole some of his thunder from abroad. On Oct. 30, President Joe Biden signed a sweeping executive order on AI – or as sweeping as an executive order can be. Biden cannot unilaterally make new law — that’s the job of Congress — but he can direct many of the government’s departments and agencies to act under existing statutes.

What’s in the US plan? Biden’s order is filled with requests for new studies, reports, and recommendations. It involves six departments, including Justice and Homeland Security, charged with tackling AI issues related to civil rights and critical infrastructure, respectively. It also reaches agencies like the National Institute of Standards and Technology, which it tasks with developing watermarking standards for generative AI.

Invoking the Defense Production Act, Biden ordered AI companies working on advanced models to notify the federal government and clue it in on their ongoing safety testing. Many top AI companies had already agreed to this over the summer, but the executive order codified those previously voluntary transparency requirements.

Dev Saxena, director of Eurasia Group’s geo-technology practice, called the order “extremely comprehensive” and “ambitious,” noting that it could influence regulators around the world. “Given US leadership in this emerging technology, and the first-mover role it could play in global governance, principles, and tactics used in the executive order could spill over globally,” Saxena said.

Adam Conner, vice president for technology policy at the Center for American Progress, a liberal think tank, called the order an “important first step” and was pleased it included “real accountability for federal government use of AI.” However, he lamented that it “stops short of prohibiting … really harmful uses of AI in things like federal law enforcement” and that it failed to go beyond setting minimum safety standards.

But some think it went too far. Brent Skorup, a senior research fellow at George Mason University’s Mercatus Center, a libertarian think tank, thinks the order contains some “very troubling assertions of government oversight” concerning AI. “Many modest-sized and open-source companies are going to have government scrutiny they likely never anticipated,” he said. However, like Conner, Skorup was heartened that it included “a prominent call to agencies to protect citizens’ privacy and civil liberties” when the government uses AI, which he said is “an issue that has gotten very little focus to date.”

While Sunak’s summit grabbed headlines last week, Biden’s order gave people something more to pore over. That could be a problem for Sunak, who is fighting for his political career and was counting on the AI summit to help position him as a global leader in AI.

“The prime minister’s conservative government is staring down some truly dire polling numbers and an election that has to be called by January 2025,” said Conner. “Sunak needed this AI summit to project success at home to help his political fortunes.”

“Downing Street gets due credit for pulling off a great event,” Saxena said, “but separately, the White House has laid out the most detailed set of policy prescriptions on AI, at the executive level, in US history.”
