The California state Senate gave final approval early Saturday morning to a major AI safety bill that sets new transparency requirements for large AI companies.
As state Sen. Scott Wiener, the author of SB 53, explained, “We will require large AI labs to be transparent about safety protocols, create whistleblower protections for (employees) at AI labs, and create a public cloud to expand computing access (CalCompute).”
The bill now heads to California Gov. Gavin Newsom, who can sign or veto it. He has not commented publicly on SB 53, but last year he vetoed a broader safety bill authored by Wiener while signing narrower laws targeting issues such as deepfakes.
At the time, Newsom said he “recognized the importance of protecting the public from the real threats posed by this technology,” but criticized Wiener’s previous bill for applying “strict standards” to any large-scale model, regardless of whether it was deployed “in a high-risk environment” or involved “the use of critical data.”
Wiener said the new bill was influenced by recommendations from a policy panel of AI experts that Newsom convened after his veto.
Politico also reports that SB 53 was recently amended so that companies developing “frontier” AI models with less than $500 million in annual revenue will only need to disclose high-level safety details, while larger companies must provide more detailed reports.
The bill has been opposed by a number of Silicon Valley companies, venture capital firms, and lobbying groups. In a recent letter to Newsom, OpenAI did not mention SB 53 specifically, but argued that, to avoid “duplication and conflict,” companies should be considered compliant with statewide safety regulations as long as they meet federal or European standards.
Andreessen Horowitz’s head of AI policy and chief legal officer recently argued that “many of today’s AI bills,” like the proposals in California and New York, pose such risks.
The a16z co-founders previously cited tech regulation as one of the factors behind their support of Donald Trump’s second bid for the White House. The Trump administration and its allies have since called for a 10-year ban on state AI regulation.
Meanwhile, Anthropic has come out in favor of SB 53.
“We have long said we would prefer a federal standard,” Jack Clark, co-founder of Anthropic, said in a post. “But in the absence of one, this creates a solid blueprint for AI governance that cannot be ignored.”