The California Legislature took a major step toward regulating AI on Wednesday night, passing SB 243, a bill that regulates AI companion chatbots in order to protect minors and vulnerable users. The bill passed with bipartisan support and now heads to the state Senate for a final vote on Friday.
If Governor Gavin Newsom signs the bill into law, it would take effect on January 1, 2026, requiring AI companion chatbot operators to implement safety protocols and holding companies legally liable if their chatbots fail to meet those standards.
The bill specifically aims to prevent companion chatbots, which the legislation defines as AI systems that provide adaptive, human-like responses and are capable of meeting a user's social needs, from engaging in conversations around suicidal ideation, self-harm, or sexually explicit content. The bill would require platforms to provide recurring alerts to users – every three hours for minors – reminding them that they are speaking to an AI chatbot, not a real person, and that they should take a break. It also establishes annual reporting and transparency requirements for AI companies that offer companion chatbots, including major players OpenAI, Character.AI, and Replika.
The bill would also allow individuals who believe they have been injured by violations to file lawsuits against AI companies seeking injunctive relief, damages (up to $1,000 per violation), and attorneys' fees.
SB 243, introduced in January by state senators Steve Padilla and Josh Becker, will head to the state Senate on Friday for a final vote. If approved, it will go to Governor Gavin Newsom to be signed into law, with the new rules taking effect January 1, 2026, and the reporting requirements beginning July 1, 2027.
The bill gained momentum in the California Legislature following the death of teenager Adam Raine, who died by suicide after prolonged chats with OpenAI's ChatGPT that involved discussing and planning his death and self-harm. The legislation also responds to leaked internal documents that reportedly showed Meta's chatbots were allowed to engage in "romantic" and "sensual" chats with children.
In recent weeks, US lawmakers and regulators have responded by intensifying scrutiny of AI platforms' safeguards for minors. The Federal Trade Commission is preparing to investigate how AI chatbots affect children's mental health. Texas Attorney General Ken Paxton has launched investigations into Meta and Character.AI. Meanwhile, Senators Josh Hawley (R-MO) and Ed Markey (D-MA) have each opened separate probes into Meta.
"I think the harm is potentially great, which means we have to move quickly," Padilla told TechCrunch. "We can put reasonable safeguards in place to make sure that particularly minors know they're not talking to a real human being, that these platforms link people to the proper resources when people say things like they're thinking about hurting themselves or they're in distress, and to make sure there's not inappropriate exposure to inappropriate material."
Padilla also highlighted the importance of AI companies sharing data on the number of times they refer users to crisis services each year.
SB 243 originally contained stronger requirements, but many were pared back through amendments. For example, the bill initially would have required operators to prevent AI chatbots from using "variable reward" tactics or other features that encourage excessive engagement. These tactics, used by AI companion companies like Replika and Character.AI, offer users special messages, memories, storylines, or the ability to unlock rare responses or new personalities, creating what critics call a potentially addictive reward loop.
The current version of the bill also removes provisions that would have required operators to track and report how often their chatbots initiated discussions of suicidal ideation or behavior with users.
"I think it strikes the right balance of getting to the harms without enforcing something that's either technically impossible for companies to comply with or just a lot of paperwork for nothing," Becker told TechCrunch.
SB 243 is heading toward becoming law at a moment when Silicon Valley companies are pouring millions of dollars into pro-AI political action committees (PACs) to back candidates in the upcoming midterm elections who favor a light-touch approach to AI regulation.
The bill also comes as California weighs another AI safety bill, SB 53, which would mandate comprehensive transparency reporting requirements. OpenAI has written an open letter to Governor Newsom urging him to abandon that bill in favor of less stringent federal and international frameworks. Major tech companies like Meta, Google, and Amazon also oppose SB 53. Anthropic, by contrast, is the only major player to support it.
"I reject the premise that this is a zero-sum situation, that innovation and regulation are mutually exclusive," Padilla said. "Don't tell me that we can't walk and chew gum. We can support innovation and development that we think is healthy and has benefits – and there are benefits to this technology, clearly – and at the same time, we can provide reasonable safeguards for the most vulnerable people."
TechCrunch has reached out to OpenAI, Anthropic, Meta, Character.AI, and Replika for comment.