GZERO AI

The Feds vs. California: Inside the twin efforts to regulate AI in the US

Silicon Valley is home to the world’s most influential artificial intelligence companies. But the Golden State and Washington, DC, are currently split over how to regulate this emerging technology.

The federal approach is relatively hands-off. After Joe Biden’s administration persuaded leading AI companies to sign a voluntary pledge in July 2023 to mitigate risks posed by AI, it issued a sweeping executive order on artificial intelligence in October 2023. That order directed federal agencies and departments to begin writing rules and to explore how they can incorporate AI to improve their work. The administration also signed onto the UK’s Bletchley Declaration, a multi-country commitment to develop and deploy AI in a way that’s “human-centric, trustworthy, and responsible.” In April, the White House clarified that under the executive order, agencies have until December to “assess, test, and monitor” the impact of AI on their work, mitigate algorithmic discrimination, and provide transparency into how they’re using AI.


But perhaps its biggest win came on Aug. 29 when OpenAI and Anthropic voluntarily agreed to share their new models with the government so officials can safety-test them before they’re released to the public. The models will be shared with the US AI Safety Institute, housed under the Commerce Department’s National Institute of Standards and Technology, or NIST.

“We are happy to have reached an agreement with the US AI Safety Institute for pre-release testing of our future models,” OpenAI CEO Sam Altman wrote on X. “For many reasons, we think it’s important that this happens at the national level. US needs to continue to lead!”

Altman’s insistence that regulation should happen at the national level read as an implicit rebuke of California’s effort to regulate the company and its technology.

Brian Albrecht, the chief economist at the International Center for Law & Economics, was not surprised by the companies’ willingness to share their models with the government. “This is a very standard response to expected regulation,” Albrecht said. “And it’s always tough to know how voluntary any of this is.”

But Dean Ball, a research fellow at the libertarian think tank Mercatus Center, said he’s concerned about the opacity of these arrangements. “We do not know what level of access the federal government is being given, whether the federal government has the ability to request that model releases be delayed, and many other specific details,” Ball said. “This is not the way lawmaking is supposed to work in America; having private arrangements worked out between providers of transformative technology and the federal government is a troubling step in AI policy.”

Still, these appear to be relatively light-touch measures compared with California’s proposed approach to regulating artificial intelligence.

On Aug. 28, the state’s legislature passed the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, or SB 1047, which aims to establish “common sense safety standards” for powerful AI models. Written by California State Sen. Scott Wiener and supported by AI pioneers like Geoffrey Hinton and Yoshua Bengio, the bill has divided Silicon Valley companies. Albrecht said that what California has proposed is much closer to the European model of AI regulation — the EU’s AI Act, passed in March — while Washington hasn’t yet adopted a unified view on how the technology should be regulated.

Critics of the bill include OpenAI, California’s Chamber of Commerce, and even former Speaker of the House Nancy Pelosi. “While we want California to lead in AI in a way that protects consumers, data, intellectual property, and more, SB 1047 is more harmful than helpful in that pursuit,” Pelosi said in a recent statement. In a recent edition of GZERO AI, experts from the Electronic Frontier Foundation and the Atlantic Council expressed concerns about the bill’s so-called “kill switch” and how it could stifle open-source AI development.

Some industry players have been more open to the bill. Anthropic said the bill’s benefits likely outweigh its risks, and Tesla CEO Elon Musk, who has an AI startup of his own called xAI, said California should “probably” pass the bill.

It’s still unclear whether Gov. Gavin Newsom will sign the bill — he has until Sept. 30 to do so. He has not signaled his view on the legislation, but in May, he warned about the risk of overregulating AI.

“I don’t want to cede this space to other states or other countries,” Newsom said at an event in San Francisco. “If we overregulate, if we overindulge, if we chase the shiny object, we could put ourselves in a perilous position.”
