California has taken a major step toward regulating AI. SB 243 — a bill that regulates AI companion chatbots in order to protect minors and vulnerable users — passed both the State Assembly and the Senate with bipartisan support and is now headed to Governor Gavin Newsom's desk.
Newsom has until October 12 to either sign or veto the bill. If he signs it, the law would take effect on January 1, 2026, making California the first state to require AI chatbot operators to implement safety protocols for AI companions and to hold companies legally accountable if their chatbots fail to meet those standards.
The bill specifically aims to prevent companion chatbots, which the legislation defines as AI systems that provide adaptive, human-like responses and are capable of meeting a user's social needs, from engaging in conversations about suicidal ideation, self-harm, or sexually explicit content. It would require platforms to issue recurring alerts to users — every three hours for minors — reminding them that they are speaking with an AI chatbot, not a real person, and that they should take a break. It also establishes annual reporting and transparency requirements for AI companies that offer companion chatbots, including major players OpenAI, Character.AI, and Replika, which would take effect on July 1, 2027.
The bill also allows individuals who believe they have been harmed by violations to file lawsuits against AI companies, seeking injunctive relief, damages (up to $1,000 per violation), and attorneys' fees.
The bill gained momentum in the California Legislature after the death of teenager Adam Raine, who died by suicide following prolonged chats with OpenAI's ChatGPT that involved discussing and planning his death and self-harm. The legislation also responds to leaked internal documents that reportedly showed Meta's chatbots were permitted to engage in "romantic" and "sensual" chats with children.
In recent weeks, US lawmakers and regulators have responded with intensified scrutiny of AI platforms' protections for minors. The Federal Trade Commission is preparing to investigate how AI chatbots affect children's mental health. Texas Attorney General Ken Paxton has opened an investigation into Meta and Character.AI. Meanwhile, Senators Josh Hawley (R-MO) and Ed Markey (D-MA) have each launched separate probes into Meta.
"I think the harm is potentially great, which means we have to move quickly," Padilla told TechCrunch. "We can put reasonable safeguards in place to make sure that minors in particular know they're not talking to a real human being, that these platforms link people to the proper resources when people say things like they're thinking about hurting themselves or they're in distress, and to make sure there's no inappropriate exposure to inappropriate material."
Padilla also highlighted the importance of AI companies sharing data on how many times each year users are referred to crisis services.
SB 243 once carried stronger requirements, but many were watered down through amendments. For example, the bill originally would have required operators to prevent AI chatbots from using "variable reward" tactics or other features that encourage excessive engagement. These tactics, used by AI companion companies such as Replika and Character.AI, offer users special messages, memories, storylines, or the ability to unlock rare responses or new personalities, creating what critics call a potentially addictive reward loop.
The current version also removes provisions that would have required operators to track and report how often chatbots initiated discussions of suicidal ideation or actions with users.
"I think it strikes the right balance of getting at the harms without imposing something that's either technically impossible for companies to comply with or just a lot of paperwork for nothing," Becker told TechCrunch.
SB 243 heads to the governor's desk as Silicon Valley companies pour millions of dollars into pro-AI political action committees (PACs) to back candidates in the upcoming midterm elections who favor a light-touch approach to AI regulation.
The bill also comes as California weighs another AI safety bill, SB 53, which would mandate comprehensive transparency reporting requirements. OpenAI has written an open letter to Governor Newsom calling on him to abandon the bill in favor of less stringent federal and international frameworks. Major tech companies such as Meta, Google, and Amazon also oppose SB 53. By contrast, Anthropic is the only major player to have endorsed it.
"I reject the premise that this is a zero-sum situation, that innovation and regulation are mutually exclusive," Padilla said. "Don't tell me that we can't walk and chew gum. We can support innovation and development that we think is healthy and has benefits, and this technology clearly has benefits, while at the same time providing reasonable safeguards for the most vulnerable people."
"We are closely monitoring the legislative and regulatory landscape, and we welcome working with regulators and lawmakers as they begin to consider legislation for this emerging space," a Character.AI spokesperson said.
A Meta spokesperson declined to comment.
TechCrunch has reached out to OpenAI, Anthropic, and Replika for comment.