The California State Senate recently gave final approval to a new AI safety bill, SB 53, and sent it to Governor Gavin Newsom, who will either sign or veto it.
If this all sounds familiar, it’s because Newsom vetoed another AI safety bill written by State Sen. Scott Wiener last year. However, SB 53 is narrower than Wiener’s previous bill, SB 1047, focusing on large AI companies with annual revenues of more than $500 million.
On the latest episode of TechCrunch’s flagship podcast Equity, I had the opportunity to discuss SB 53 with my colleagues Max Zeff and Kirsten Korosec. Max believes Wiener’s new bill has a better shot of becoming law because of its focus on big companies, and because it has the backing of the AI company Anthropic.
Read a preview of our conversation about AI safety and state-level legislation below. (I edited the transcript to make us sound slightly smarter.)
Max: Why should people care about this AI safety bill that has passed through the California legislature? We’re living in an age where AI companies are becoming some of the most powerful companies in the world. This could be one of the few checks on their power.
This is much narrower than SB 1047, which got a lot of pushback last year. However, I think SB 53 still puts some meaningful regulations on AI labs. It requires them to publish model safety reports, and if they have an incident, it basically forces them to report it to the government. It also gives employees at these labs a channel to report safety concerns to the government — even though many of them have signed NDAs — without facing retaliation from their companies.

To me, this feels like a potentially meaningful check on the power of these companies.
Kirsten: It’s also important to think about the fact that this is happening in California, for reasons that matter at the state level. All the major AI companies have a significant footprint in this state, even if they aren’t headquartered here. It’s not that other states aren’t important — I don’t want to get emails from people in Colorado, etc. — but California really is the hub for AI activity, so it matters that this is California in particular.
But my question for you, Max: it seems like the bill is carved out with a lot of exceptions. It’s narrower, but is it more complicated than the previous bill?
Max: In a way, yes. I think the main carve-out in this bill is that it tries to avoid applying to really small startups. That was one of the main controversies around the last legislative effort from Sen. Scott Wiener, who wrote the bill — many people said it could hamper California’s booming economy.
This bill applies specifically to AI developers that have generated more than $500 million in revenue from their AI models. It really is aimed at the big companies, like Google DeepMind.
Anthony: As I understand it, if you’re a smaller startup, you still need to share some safety information, just not as much.
It’s also worth talking about the broader landscape of AI regulation, and the fact that one of the major changes between last year and this year is a new president. The federal administration has taken a much more hands-off stance — the position that companies should be able to do what they want — to the extent of actually trying to include language in funding bills saying that states can’t have their own AI regulations.

I don’t think that has happened yet, but they could potentially try to make it happen in the future. So this could become another front in the fight between the Trump administration and blue states.
Equity is TechCrunch’s flagship podcast produced by Teresa Loconsolo, posted every Wednesday and Friday.
Subscribe to Equity on Apple Podcasts, Overcast, Spotify, and all the casts. You can also follow Equity on X and Threads, at @EquityPod.