California wants to prevent an AI “catastrophe”


The Golden State may be close to passing AI safety regulation — and Silicon Valley isn’t pleased.

The proposed AI safety bill, SB 1047, also known as the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, aims to establish “common sense safety standards” for powerful AI models.

The bill would require companies developing high-powered AI models to implement safety measures, conduct rigorous testing, and provide assurances against "critical harms," such as the use of models to execute mass-casualty events or cyberattacks causing at least $500 million in damages. It empowers the California attorney general to take civil action against violators, though the rules would apply only to models that cost at least $100 million to train and exceed a certain computing threshold.

A group of prominent academics, including AI pioneers Geoffrey Hinton and Yoshua Bengio, published a letter last week to California’s political leaders supporting the bill. “There are fewer regulations on AI systems that could pose catastrophic risks than on sandwich shops or hairdressers,” they wrote, arguing that regulation is necessary not only to rein in the potential harms of AI but also to restore public confidence in the emerging technology.

Critics, including many in Silicon Valley, argue the bill is overly vague and could stifle innovation. In June, the influential startup incubator Y Combinator wrote a public letter outlining its concerns. It argued that liability should lie with those who abuse AI tools, not with their developers; that the threshold for inclusion under the law is arbitrary; and that a requirement that developers include a “kill switch” allowing them to turn off the model would be a “de facto ban on open-source AI development.”

Steven Tiell, a nonresident senior fellow with the Atlantic Council's GeoTech Center, thinks the bill is “a good start” but points to “some pitfalls.” He appreciates that it only applies to the largest models but has concerns about the bill’s approach to “full shutdown” capabilities – aka the kill switch.

“The way SB 1047 talks about the ability for a ‘full shutdown’ of a model – and derivative models – seems to assume foundation models would have some ability to control derivative models,” Tiell says. He warned this could “materially impact the commercial viability of foundation models across wide swaths of the industry.”

Hayley Tsukayama, associate director of legislative activism at the Electronic Frontier Foundation, acknowledges the tech industry’s concerns. “AI is changing rapidly, so it’s hard to know whether — even with the flexibility in the bill — the regulation it’s proposing will age well with the industry,” she says.

“The whole idea of open-source is that you’re making a tool for people to use as they see fit,” she says, emphasizing the burden on open-source developers. “And it’s both harder to make that assurance and also less likely that you’ll be able to deal with penalties in the bill because open-source projects are often less funded and less able to spend money on compliance.”

State Sen. Scott Wiener, the bill’s sponsor, told Bloomberg he’s heard industry criticisms and made adjustments to its language to clarify that open-source developers aren’t entirely liable for all the ways their models are adapted, but he stood by the bill’s intentions. “I’m a strong supporter of AI. I’m a strong supporter of open source. I’m not looking in any way to impede that innovation,” Wiener said. “But I think it’s important, as these developments happen, for people to be mindful of safety.” Spokespeople for Wiener did not respond to GZERO’s request for comment.

In the past few months, Utah and Colorado have passed their own AI laws, but both focus on consumer protection rather than liability for the technology’s catastrophic outcomes. California, home to many of the biggest companies in AI, has broader ambitions. But while California has led the nation, and even the federal government, on data privacy, it may need industry support to get its AI bill through the legislature and signed into law. California’s Senate passed the bill last month, and the Assembly is set to vote on it before the end of August.

California Gov. Gavin Newsom hasn’t signaled whether or not he’ll sign the bill should it pass both houses of the legislature, but in May, he publicly warned against over-regulating AI and ceding America’s advantage to rival nations: “If we over-regulate, if we overindulge, if we chase the shiny object, we could put ourselves in a perilous position.”
