Anthropic backs California’s AI Safety Bill, SB 53

By admin | September 9, 2025


On Monday, Anthropic announced its official endorsement of SB 53, a California bill from Sen. Scott Wiener that would impose first-in-the-nation transparency requirements on the world’s largest AI model developers. Anthropic’s endorsement marks a rare and major win for SB 53, which comes as major tech groups such as the Consumer Technology Association (CTA) and the Chamber of Progress lobby against the bill.

“We believe that frontier AI safety is best handled at the federal level rather than a patchwork of state regulations, but powerful AI advancements won’t wait for consensus in Washington,” Anthropic said in a blog post. “The question isn’t whether AI governance is necessary. It’s whether we develop it thoughtfully today or reactively tomorrow. SB 53 provides a solid path to the former.”

If passed, SB 53 would require frontier AI model developers such as OpenAI, Anthropic, Google, and xAI to develop safety frameworks and to release public safety and security reports before deploying powerful AI models. The bill would also establish whistleblower protections for employees who raise safety concerns.

Senator Wiener’s bill focuses specifically on limiting AI models’ contribution to “catastrophic risks,” which the bill defines as events causing the deaths of at least 50 people or more than $1 billion in damages. SB 53 targets the extreme end of AI risk, such as models providing expert-level assistance in creating biological weapons or being used in cyberattacks, rather than nearer-term concerns such as AI deepfakes or sycophancy.

The California Senate approved an earlier version of SB 53, but the bill still needs a final vote before it can proceed to the governor’s desk. Governor Gavin Newsom has stayed silent on the bill so far, though he vetoed Senator Wiener’s last AI safety bill, SB 1047.

Bills regulating frontier AI model developers have faced significant backlash from both Silicon Valley and the Trump administration. Investors such as Andreessen Horowitz and Y Combinator led much of the pushback against SB 1047, and in recent months the Trump administration has repeatedly threatened to block states from passing AI regulation altogether.

One of the most common arguments against state AI safety bills is that the issue should be left to the federal government. Matt Perault, Andreessen Horowitz’s head of AI policy, and Jai Ramaswamy, the firm’s chief legal officer, published a blog post last week arguing that many of today’s state AI bills risk violating the Constitution’s Commerce Clause, which limits state laws that unduly burden interstate commerce.

But Anthropic co-founder Jack Clark argued in a post on X that the tech industry will build powerful AI systems in the coming years and cannot wait for the federal government to act.

“We have long said we would prefer a federal standard,” Clark said. “But in the absence of one, this creates a solid blueprint for AI governance that cannot be ignored.”

OpenAI’s chief global affairs officer, Chris Lehane, sent a letter to Governor Newsom in August.

Miles Brundage, OpenAI’s former head of policy research, said in a post on X that Lehane’s letter was “filled with misleading garbage about SB 53 and AI policy in general.” Notably, SB 53 aims to regulate only the world’s largest AI companies, specifically those generating more than $500 million in gross revenue.

Despite the criticism, policy experts say SB 53 takes a more modest approach than previous AI safety bills. Dean Ball, a senior fellow at the Foundation for American Innovation and a former White House AI policy advisor, said in a blog post in August that he believes SB 53 now has a good chance of becoming law. Ball, who was critical of SB 1047, said SB 53’s drafters have shown “respect for technical reality” as well as “a measure of legislative restraint.”

Senator Wiener has previously said that SB 53 was heavily influenced by an expert policy panel convened by Governor Newsom to advise California on AI regulation, co-led by Fei-Fei Li, a leading Stanford researcher and co-founder of World Labs.

Most AI labs already have some version of the internal safety policy that SB 53 would require: OpenAI, Google DeepMind, and Anthropic regularly publish safety reports for their models. But because these companies are bound by nothing other than their own voluntary commitments, they sometimes fall behind them. SB 53 aims to turn those requirements into state law, with financial consequences if an AI lab fails to comply.

Earlier in September, California lawmakers amended SB 53 to remove a section that would have required AI model developers to be audited by third parties. Tech companies have fought these kinds of third-party audits in other AI policy battles, arguing they are overly burdensome.


