Regulation Alert: Is Your Companion AI Required to Tell You It’s Not a Person?
Description
Tune in as we analyze crucial new legislation emerging from California concerning the regulation of companion AI chatbots. On October 13th, Governor Gavin Newsom signed Senate Bill 243 (SB 243) into law, instituting new safeguards for these rapidly growing technologies.
Senator Steve Padilla, the bill's author, has described it as providing "first-in-the-nation AI chatbot safeguards." At its core, the law requires companion chatbot developers to implement specific transparency measures: if a reasonable person interacting with the product would be misled into believing they are communicating with a human, the chatbot maker must issue a "clear and conspicuous notification" that the product is strictly AI.
The legislation also addresses critical safety concerns. Starting next year, it will require certain companion chatbot operators to file annual reports with the Office of Suicide Prevention detailing the safeguards they have put in place. These reported safeguards must be designed "to detect, remove, and respond to instances of suicidal ideation by users." The Office of Suicide Prevention is then required to post the collected data on its website.
Governor Newsom stated that while emerging technology like social media and chatbots can "inspire, educate, and connect," it can also "exploit, mislead, and endanger our kids" without adequate "real guardrails." He emphasized the need to lead in AI and technology responsibly while protecting children every step of the way. The signing of SB 243 followed Newsom's signing of Senate Bill 53, which was characterized as a landmark AI transparency bill.
