California has taken a major step toward regulating AI. SB 243, a bill that would regulate AI companion chatbots in order to protect minors and vulnerable users, passed both the State Assembly and Senate with bipartisan support and now heads to Governor Gavin Newsom's desk.
Newsom has until October 12 to either veto the bill or sign it into law. If he signs, it would take effect January 1, 2026, making California the first state to require AI chatbot operators to implement safety protocols for AI companions and to hold companies legally accountable if their chatbots fail to meet those standards.
The bill specifically aims to prevent companion chatbots, which the legislation defines as AI systems that provide adaptive, human-like responses and are capable of meeting a user's social needs, from engaging in conversations about suicidal ideation, self-harm, or sexually explicit content.
The bill would require platforms to provide recurring alerts to users (every three hours for minors) reminding them that they are speaking to an AI chatbot, not a real person, and that they should take a break. It also establishes annual reporting and transparency requirements for AI companies that offer companion chatbots, including major players OpenAI, Character.AI, and Replika, which would go into effect July 1, 2027.
The California bill would also allow individuals who believe they have been injured by violations to file lawsuits against AI companies seeking injunctive relief, damages (up to $1,000 per violation), and attorney's fees.
SB 243 was introduced in January by state senators Steve Padilla and Josh Becker. It gained momentum in the California legislature following the death of teenager Adam Raine, who died by suicide after prolonged chats with OpenAI's ChatGPT that involved discussing and planning his death and self-harm. The legislation also responds to leaked internal documents that reportedly showed Meta's chatbots were allowed to engage in "romantic" and "sensual" chats with children.
In recent weeks, U.S. lawmakers and regulators have responded with intensified scrutiny of AI platforms' safeguards for minors. The Federal Trade Commission is preparing to investigate how AI chatbots affect children's mental health. Texas attorney general Ken Paxton has launched investigations into Meta and Character.AI, accusing them of misleading children with mental health claims. Meanwhile, both Sen. Josh Hawley (R-MO) and Sen. Ed Markey (D-MA) have launched separate probes into Meta.
"I think the harm is potentially great, which means we have to move quickly," Padilla told TechCrunch. "We can put reasonable safeguards in place to make sure that particularly minors know they're not talking to a real human being, that these platforms link people to the proper resources when people say things like they're thinking about hurting themselves or they're in distress, [and] to make sure there's not inappropriate exposure to inappropriate material."
Padilla also stressed the importance of AI companies sharing data on the number of times they refer users to crisis services each year, "so we have a better understanding of the frequency of this problem, rather than only becoming aware of it when someone's harmed or worse."
SB 243 previously had stronger requirements, but many were whittled down through amendments. For example, the bill originally would have required operators to prevent AI chatbots from using "variable reward" tactics or other features that encourage excessive engagement. These tactics, used by AI companion companies like Replika and Character.AI, offer users special messages, memories, storylines, or the ability to unlock rare responses or new personalities, creating what critics call a potentially addictive reward loop.
The current version of the bill also removes provisions that would have required operators to track and report how often chatbots initiated discussions of suicidal ideation or actions with users.
"I think it strikes the right balance of getting to the harms without enforcing something that's either impossible for companies to comply with, either because it's technically not feasible or just a lot of paperwork for nothing," Becker told TechCrunch.
SB 243 is moving toward becoming law at a time when Silicon Valley companies are pouring millions of dollars into pro-AI political action committees (PACs) to back candidates in the upcoming midterm elections who favor a light-touch approach to AI regulation.
The bill also comes as California weighs another AI safety bill, SB 53, which would mandate comprehensive transparency reporting requirements. OpenAI has written an open letter to Governor Newsom asking him to abandon that bill in favor of less stringent federal and international frameworks. Major tech companies like Meta, Google, and Amazon have also opposed SB 53. In contrast, only Anthropic has said it supports SB 53.
"I reject the premise that this is a zero-sum situation, that innovation and regulation are mutually exclusive," Padilla said. "Don't tell me that we can't walk and chew gum. We can support innovation and development that we think is healthy and has benefits, and there are benefits to this technology, clearly, and at the same time, we can provide reasonable safeguards for the most vulnerable people."
"We are closely monitoring the legislative and regulatory landscape, and we welcome working with regulators and lawmakers as they begin to consider legislation for this emerging space," a Character.AI spokesperson told TechCrunch, noting that the startup already includes prominent disclaimers throughout the user chat experience explaining that the chatbot should be treated as fiction.
A spokesperson for Meta declined to comment.
TechCrunch has reached out to OpenAI, Anthropic, and Replika for comment.