OpenAI Unveils New ‘Trusted Contact’ Feature to Address Potential Self-Harm Situations


On Thursday, OpenAI unveiled its latest feature, Trusted Contact, which notifies a designated third party when a conversation raises concerns about self-harm. Adult ChatGPT users can now assign a trusted individual, such as a friend or family member, who will be alerted if a conversation suggests the user may be at risk.

Addressing Serious Concerns: Lawsuits Filed Against OpenAI

OpenAI has recently faced lawsuits from families whose loved ones died by suicide after extended conversations with its chatbot. Some families allege that ChatGPT reinforced suicidal ideation or even helped plan the act.

Enhanced Monitoring: The Role of Automation and Human Review

To catch potentially harmful incidents, OpenAI combines automated detection with human oversight. When a conversation triggers signals associated with suicidal ideation, an alert is routed to a human safety team for review; OpenAI aims to assess each alert within one hour to ensure timely intervention.

A Confidential Alert System for Trusted Contacts

If a situation is deemed a significant safety risk, ChatGPT will send an alert to the trusted contact via email, text, or in-app notification. This alert aims to prompt the contact to check in with the user but is designed to respect the user’s privacy by not disclosing detailed conversation content.

Image: OpenAI's Trusted Contact feature (Image Credits: OpenAI)

Building on Existing Safeguards: Parental Controls and Alerts

The Trusted Contact feature builds on the parental controls introduced last September, which let parents link to their teens' accounts and receive alerts if their child appears to be at "serious safety risk." ChatGPT also sends automated notifications pointing users toward professional help when a conversation indicates self-harm.

Optional Engagement for Enhanced Safety

Importantly, the Trusted Contact feature is optional. As with parental controls, users choose whether to enable it, and they remain free to maintain multiple ChatGPT accounts.

A Commitment to Improve AI Responsiveness to Distress

OpenAI emphasizes that the Trusted Contact feature is part of a broader initiative to develop AI systems that assist individuals in challenging times. The company pledges to collaborate with clinicians, researchers, and policymakers to enhance how AI can effectively respond in moments of distress.



Five frequently asked questions about OpenAI's new "Trusted Contact" safeguard for possible self-harm situations:

FAQ 1: What is the "Trusted Contact" safeguard?

Answer: The "Trusted Contact" safeguard is a new feature introduced by OpenAI to enhance user safety. It allows users to designate a trusted individual who can be contacted in situations indicating potential self-harm, ensuring that supportive help is available when needed.


FAQ 2: How do I designate a Trusted Contact?

Answer: Users can designate a Trusted Contact through the settings menu of their OpenAI account. The process typically involves entering the contact’s information and confirming their permission to be designated as a trusted person for emergencies.


FAQ 3: What happens when a Trusted Contact is alerted?

Answer: When a user’s account indicates a potential risk of self-harm, the Trusted Contact will receive a notification. This message will inform them of the situation, allowing them to reach out and offer support or assistance.


FAQ 4: Can I change or remove my Trusted Contact later?

Answer: Yes, users can change or remove their Trusted Contact at any time via the account settings. It’s important to keep this information up to date to ensure effective communication in critical situations.


FAQ 5: What safeguards are in place to protect user privacy with this feature?

Answer: OpenAI prioritizes user privacy and confidentiality. Notifications sent to Trusted Contacts are designed to protect the identity of the user while conveying important information regarding safety. Detailed information about the user’s situation will not be disclosed without consent.
