Analysis
Could Mythos prompt Trump admin U-turn on AI regulation?
CEO and Co-Founder of Anthropic Dario Amodei speaks during the 56th annual World Economic Forum (WEF) meeting in Davos, Switzerland, on January 20, 2026.
REUTERS/Denis Balibouse
One month ago, the White House made its feelings about artificial intelligence regulation clear: it didn’t want it. In its legislative framework for AI regulation, published March 20, the Trump administration took an accelerationist stance toward the burgeoning technology, aiming to give US companies largely free rein as a way to ensure they outpace Chinese rivals in the global AI race.
That was before Anthropic announced on April 7 that it had created Mythos.
The model has sent shockwaves through the United States – and the world – for its extraordinary ability to identify software vulnerabilities. If it gets into the wrong hands, it could pose a massive cyber threat to critical infrastructure, such as banks or hospitals. To avoid such a disaster, Anthropic voluntarily decided to limit its release of the technology to a select group of companies, including tech giants like Apple and Google, and software firms like CrowdStrike and Palo Alto Networks, under an initiative dubbed “Project Glasswing.” Even so, the model has reportedly still reached some unauthorized users.
The release of Mythos appears to have rattled the Trump administration. Just three days after it came out, Treasury Secretary Scott Bessent, along with Federal Reserve Chair Jerome Powell, hastily convened a group of finance leaders to discuss the potential threats that the model could pose to the banking system. Then on April 17, the Trump administration met with Anthropic CEO Dario Amodei. After the confab, the White House struck a far more conciliatory tone with the AI leader than in previous weeks, possibly because it wants access to the model.
Mythos is not the only factor that could push the Trump administration to flip its position on AI regulation. The technology is increasingly becoming an electoral issue: only a minority of voters view AI positively, per recent polls, while a growing number fear it will take their jobs away. Voters are also souring on data centers, blaming these warehouses that power AI for rising energy costs.
“The position that we should not regulate AI at all is going to be politically untenable in the near future,” Council on Foreign Relations senior fellow Chris McGuire, who served on the National Security Council during the Biden administration, told GZERO.
It remains unclear whether Mythos is truly the game-changing model some believe it to be. After all, only a few people have seen it. Even the US’s leading cyber agency, the Cybersecurity and Infrastructure Security Agency (CISA), hasn’t gained access to it. What’s clear, though, is that AI is becoming increasingly powerful, and conversations around the technology in the United States are increasingly centering on its risks, rather than its benefits.
“The scary thing is that Mythos is not a peak, right?” said McGuire. “Four months from now, Anthropic will develop a model that is twice as powerful as Mythos. Eight months from now, it’ll be four times as powerful. A year from now, it’ll be eight times as powerful.” As such, voters will only become increasingly concerned about the risks that AI poses.
Why the White House reluctance? Washington’s resistance to AI regulation up until now – outside of labeling Anthropic a “supply chain risk,” a move widely seen as retribution for the firm’s refusal to let the Pentagon use Claude without restrictions – has everything to do with China. The White House wants the US to maintain its lead in the AI race over Beijing, with AI and crypto “czar” David Sacks warning that any regulations could hinder American firms.
This creates something of a conundrum for Washington, according to McGuire. The willingness of the US to regulate – or even hold back the release of new models like Mythos, akin to what the Food and Drug Administration does with new pharmaceutical products – is contingent on China being several months away from releasing its own version of the model.
“Do we keep racing as fast as possible? Or do we try to make sure that our products are safe?” McGuire said. “You don’t want to make that choice.”
For now, the US has a solid lead over China when it comes to large language models, with Beijing’s versions lagging behind by about seven months, according to Epoch AI. That has allowed companies like Anthropic and OpenAI to hold back on a full release of their new models, and to give other American firms time to boost their cyber defenses before facing any potential cyberattacks from China.
However, there was no obligation for Anthropic to hold back this “frontier model” (the most advanced general-purpose model at any given moment), despite its massive power. This, according to Cato Institute AI fellow Kevin Frazier, will likely change – meaning the federal government will start to mandate that firms take appropriate safety measures before releasing their models.
“How do we make sure that labs are living up to their own internal safeguards and their own internal measures of safety checks before they deploy a model?” Frazier questioned. “I think that’s regarded now, especially after Mythos, as a core part of a legislative framework.”
The White House’s AI framework does state that “appropriate agencies” should have the “technical capacity to understand frontier AI model capabilities and any associated national security considerations.” Whether that translates into delaying the release of future AI models remains to be seen. A spokesperson for CISA declined to comment for this article.
Louise Marie Hurel, a senior research fellow at the security think tank Royal United Services Institute, doesn’t expect the White House to enact a rule that would mandate such a delay.
“It seems unlikely that there would be a mandate at the federal level to limit model releases,” Hurel told GZERO. “If something like a requirement to limit releases became part of an EO or legislation, the current start-up ecosystem, AI labs, and investors would probably react badly.”