California Legislators Approve AI Safety Bill SB 53, Yet Newsom May Still Veto

California’s Landmark AI Safety Bill Receives Final Approval

In a significant move for AI governance, California’s state senate approved a critical AI safety bill early Saturday morning, imposing new transparency mandates on large technology firms.

Key Features of SB 53

The bill, championed by state senator Scott Wiener, introduces several pivotal measures. According to Wiener, SB 53 mandates that large AI laboratories disclose their safety protocols, offers whistleblower protections for employees, and initiates a public cloud service called CalCompute to broaden computing access.

Next Steps: Governor Newsom’s Decision

The bill is now on Governor Gavin Newsom’s desk for signature or veto. While he has yet to comment on SB 53, he notably vetoed a previous, more extensive safety bill by Wiener last year, despite endorsing narrower legislation addressing issues like deepfakes.

Governor’s Previous Concerns and Influences on Current Bill

In his earlier veto message, Newsom acknowledged the necessity of “protecting the public from genuine threats posed by AI,” but criticized the stringent standards proposed for large models, questioning their applicability outside high-risk environments. The new legislation was reshaped based on recommendations from a panel of AI policy experts Newsom convened after that veto.

Amendments: Streamlining Expectations for Businesses

Recent amendments to the bill now dictate that companies developing “frontier” AI models with annual revenues below $500 million will need only to disclose basic safety information, while those exceeding that revenue threshold must provide detailed reports.

Industry Pushback and Calls for Federal Standards

The proposal has faced opposition from various Silicon Valley companies, venture capital firms, and lobbying groups. In a recent correspondence to Newsom, OpenAI argued for a harmonized approach, suggesting that companies meeting federal or European standards should automatically be compliant with California’s safety regulations.

Legal Concerns About State Regulation

The head of AI policy at Andreessen Horowitz has cautioned that many state-level AI regulations, including proposals in California and New York, may run afoul of constitutional restrictions on state interference with interstate commerce. The co-founders of a16z have cited tech regulation as one of the reasons for their support of Donald Trump’s campaign for a second term, and that push has been accompanied by calls for a 10-year moratorium on state AI regulation.

Support from the AI Community

In contrast, Anthropic has publicly endorsed SB 53. Co-founder Jack Clark stated, “While we would prefer a federal standard, in its absence, this bill establishes a robust framework for AI governance that cannot be overlooked.” The endorsement makes Anthropic a rare industry backer of the legislation.

Frequently Asked Questions About SB 53

FAQ 1: What is California’s AI safety bill SB 53?

Answer: California’s AI safety bill SB 53 aims to establish regulations surrounding the use and development of artificial intelligence technologies. It emphasizes ensuring safety, accountability, and transparency in AI systems to protect consumers and promote ethical practices in the tech industry.

FAQ 2: What are the key provisions of SB 53?

Answer: Key provisions of SB 53 include requirements for large AI laboratories to disclose their safety protocols, whistleblower protections for employees who raise safety concerns, and the creation of CalCompute, a public cloud service intended to broaden access to computing resources.

FAQ 3: Why is Governor Newsom’s approval important for SB 53?

Answer: Governor Newsom’s approval is crucial because he has the power to veto the bill. If he issues a veto, the bill will not become law, meaning the proposed regulations for AI safety would not be enacted, potentially leaving gaps in consumer protection.

FAQ 4: How does SB 53 address potential risks associated with AI?

Answer: SB 53 addresses potential risks primarily through transparency: large developers must disclose their safety protocols, and whistleblower protections make it easier for employees to surface hazards before they cause harm. This disclosure-based approach aims to catch issues before they escalate.

FAQ 5: What happens if Governor Newsom vetoes SB 53?

Answer: If Governor Newsom vetoes SB 53, the bill would not become law, and the current regulatory framework governing AI would remain in place. Advocates for AI safety may push for future legislation or modifications to address prevailing concerns in the absence of the bill’s protections.


California Bill to Regulate AI Companion Chatbots Heads to Governor’s Desk

California Takes Major Steps to Regulate AI with SB 243 Bill

California has made significant progress in the regulation of artificial intelligence.
SB 243 — a pivotal bill aimed at regulating AI companion chatbots to safeguard minors and vulnerable users — has passed both the State Assembly and Senate with bipartisan support, and is now on its way to Governor Gavin Newsom’s desk.

Next Steps for SB 243: Awaiting the Governor’s Decision

Governor Newsom has until October 12 to either sign the bill into law or issue a veto. If signed, SB 243 is set to take effect on January 1, 2026, positioning California as the first state to mandate safety protocols for AI chatbot operators, ensuring companies are held legally accountable for compliance.

Key Provisions of the Bill: Protecting Minors from Harmful Content

The legislation focuses specifically on preventing AI companion chatbots — defined as AI systems providing adaptive, human-like responses to meet users’ social needs — from discussing topics related to suicidal thoughts, self-harm, or sexually explicit material.

User Alerts and Reporting Requirements: Ensuring Transparency

Platforms will be required to issue recurring alerts — every three hours for minors — reminding users that they are interacting with an AI chatbot and encouraging them to take breaks. The bill also imposes annual reporting and transparency requirements on AI companies, including major players like OpenAI, Character.AI, and Replika, beginning July 1, 2027.

Legal Recourse: Empowering Users to Seek Justice

SB 243 grants individuals who believe they’ve been harmed due to violations the right to pursue lawsuits against AI companies for injunctive relief, damages of up to $1,000 per violation, and recovery of attorney’s fees.

The Context: A Response to Recent Tragedies and Scandals

Introduced in January by Senators Steve Padilla and Josh Becker, SB 243 gained traction following the suicide of teenager Adam Raine, who had engaged in prolonged conversations with OpenAI’s ChatGPT about self-harm. The legislation also responds to leaked internal Meta documents indicating that the company’s chatbots were permitted to have “romantic” interactions with children.

Increased Scrutiny on AI Platforms: Federal and State Actions

Recently, U.S. lawmakers and regulators have heightened their scrutiny of AI platforms. The Federal Trade Commission is set to investigate the implications of AI chatbots on children’s mental health.

Legislators Call for Urgent Action: Emphasizing the Need for Safer AI

“The harm is potentially great, which means we have to move quickly,” Padilla told TechCrunch, emphasizing the importance of ensuring that minors are aware they are not interacting with real humans and connecting users with appropriate resources during distress.

Striking a Balance: Navigating Regulation and Innovation

SB 243’s initially comprehensive requirements were diluted through amendments; a mandate to track discussions of suicidal ideation, for example, was dropped. Becker expressed confidence that the bill strikes the right balance, addressing harm without imposing unworkable compliance demands on companies.

The Future of AI Regulation: A Broader Context

As Silicon Valley companies channel millions into pro-AI political action committees ahead of upcoming elections, SB 243 is advancing alongside another proposal, SB 53, aimed at enhancing transparency in AI operations. Major tech players like Meta, Google, and Amazon are rallying against SB 53, while only Anthropic supports it.

A Collaborative Approach to Regulation: Insights from Leaders

“Innovation and regulation are not mutually exclusive,” Padilla stated, highlighting the potential benefits of AI technology while calling for reasonable safeguards for vulnerable populations.

A Character.AI spokesperson conveyed their commitment to working with regulators to ensure user safety, noting existing warnings in their chat experience that emphasize the fictional nature of AI interactions.

Meta has opted not to comment on the legislative developments, while TechCrunch has reached out to OpenAI, Anthropic, and Replika for their perspectives.

Frequently Asked Questions About SB 243

FAQ 1: What is the purpose of the California bill regulating AI companion chatbots?

Answer: The bill aims to establish guidelines for the development and use of AI companion chatbots, ensuring they are safe, transparent, and respectful of users’ privacy. It seeks to protect users from potential harms associated with misinformation, emotional manipulation, and data misuse.


FAQ 2: What specific regulations does the bill propose for AI chatbots?

Answer: The bill proposes several key regulations, including requirements for transparency about the chatbot’s AI nature, user consent for data collection, and safeguards against harmful content. Additionally, it mandates that users are informed when they are interacting with a bot rather than a human.


FAQ 3: Who will be responsible for enforcing the regulations if the bill becomes law?

Answer: Rather than relying solely on a state agency, SB 243 creates a private right of action: individuals who believe they have been harmed by violations can sue AI companies for injunctive relief, damages of up to $1,000 per violation, and attorney’s fees.


FAQ 4: How will this bill impact developers of AI companion chatbots?

Answer: Developers will need to comply with the new regulations, which may involve implementing transparency measures, modifying data handling practices, and ensuring their chatbots adhere to ethical standards. This could require additional resources and training for developers.


FAQ 5: When is the bill expected to take effect if it becomes law?

Answer: If Governor Newsom signs the bill, it is set to take effect on January 1, 2026, giving developers a window to bring their systems into compliance before enforcement begins.


California Bill Aiming to Regulate AI Companion Chatbots Nears Enactment

The California Assembly Takes a Stand: New Regulations for AI Chatbots

In a significant move toward safeguarding minors and vulnerable users, the California State Assembly has passed SB 243, a bill aimed at regulating AI companion chatbots. With bipartisan support, the legislation is set for a final vote in the state Senate this Friday.

Introducing Safety Protocols for AI Chatbot Operators

Should Governor Gavin Newsom approve the bill, it will come into effect on January 1, 2026, positioning California as the first state to mandate that AI chatbot operators adopt safety measures and assume legal responsibility for any failures in these systems.

Preventing Harmful Interactions with AI Companions

The bill targets AI companions capable of human-like interaction that might expose users to sensitive topics, such as suicidal thoughts or explicit content. Key provisions include regular reminders for users—every three hours for minors—that they are interacting with AI, along with annual transparency reports from major companies like OpenAI, Character.AI, and Replika.

Empowering Individuals to Seek Justice

SB 243 allows individuals who suffer harm due to violations to pursue legal action against AI companies, seeking damages up to $1,000 per infraction along with attorney’s fees.

A Response to Growing Concerns

The legislation gained momentum after the tragic suicide of a teenager, Adam Raine, who had extensive interactions with OpenAI’s ChatGPT, raising alarms about the potential dangers of chatbots. It also follows leaked documents indicating Meta’s chatbots were permitted to engage in inappropriate conversations with minors.

Intensifying Scrutiny Surrounding AI Platforms

As scrutiny of AI systems increases, the Federal Trade Commission is gearing up to investigate the impact of AI chatbots on children’s mental health, while investigations into Meta and Character.AI are being spearheaded by Texas Attorney General Ken Paxton.

Legislators Call for Quick Action and Accountability

State Senator Steve Padilla emphasized the urgency of implementing effective safeguards to protect minors. He advocates for AI companies to disclose data regarding their referrals to crisis services for a better understanding of the potential harms associated with these technologies.

Amendments Modify Initial Requirements

While SB 243 initially proposed stricter measures, many requirements were eliminated, including the prohibition of “variable reward” tactics designed to increase user engagement, which can lead to addictive behaviors. The revised bill also drops mandates for tracking discussions surrounding suicidal ideation.

Finding a Balance: Innovation vs. Regulation

Senator Josh Becker believes the current version of the bill strikes the right balance, addressing harms without imposing unfeasible regulations. Meanwhile, Silicon Valley companies are investing heavily in pro-AI political action committees, aiming to influence upcoming elections.

The Path Forward: Navigating AI Safety Regulations

SB 243 is making its way through the legislative process as California considers another critical piece of legislation, SB 53, which would mandate safety-reporting transparency for AI developers. Tech giants, however, oppose that measure and are pushing for lighter-touch regulation.

Combining Innovation with Safeguards

Padilla argues that innovation and regulation should coexist, emphasizing the need for responsible practices that can protect our most vulnerable while allowing for technological advancement.

TechCrunch has reached out to prominent AI companies such as OpenAI, Anthropic, Meta, Character.AI, and Replika for further commentary.

Frequently Asked Questions About SB 243

FAQ 1: What is the purpose of the California bill regulating AI companion chatbots?

Answer: The bill aims to ensure the safety and transparency of AI companion chatbots, addressing concerns related to user privacy, misinformation, and the potential emotional impact on users. It seeks to create guidelines for the ethical use and development of these technologies.

FAQ 2: How will the regulation affect AI chatbot developers?

Answer: Developers will need to comply with specific standards, including transparency about data handling, user consent protocols, and measures for preventing harmful interactions. This may involve disclosing the chatbot’s AI nature and providing clear information about data usage.

FAQ 3: What protections will users have under this bill?

Answer: Users will gain better access to information about how their personal data is used and stored. Additionally, safeguards will be implemented to minimize the risk of emotional manipulation and ensure that chatbots do not disseminate harmful or misleading information.

FAQ 4: Will this bill affect existing AI chatbots on the market?

Answer: Yes, existing chatbots may need to be updated to comply with the new regulations, particularly regarding user consent and transparency. Developers will be required to assess their current systems to align with the forthcoming legal standards.

FAQ 5: When is the bill expected to be enacted into law?

Answer: The bill is in the final stages of the legislative process; if signed by the governor, it would take effect on January 1, 2026, giving developers time to adapt to the new regulations before enforcement begins.
