OpenAI Unveils New ‘Trusted Contact’ Feature to Address Potential Self-Harm Situations

On Thursday, OpenAI introduced Trusted Contact, a feature that notifies a designated third party if self-harm comes up in a conversation. Adults using ChatGPT can now assign a trusted individual, such as a friend or family member, who will be alerted should a conversation raise concerns about self-harm.

Addressing Serious Concerns: Lawsuits Filed Against OpenAI

OpenAI has recently faced lawsuits from families of users who died by suicide after engaging with its chatbot. Some families allege that ChatGPT encouraged suicidal ideation or even assisted in planning the act.

Enhanced Monitoring: The Role of Automation and Human Review

To manage potentially harmful incidents, OpenAI employs a combination of automated systems and human oversight. Certain signals in a conversation flag possible suicidal ideation to the company's systems, and a human safety team then reviews each alert. OpenAI aims to assess these notifications within one hour.

A Confidential Alert System for Trusted Contacts

If a situation is deemed a significant safety risk, ChatGPT will send an alert to the trusted contact via email, text, or in-app notification. This alert aims to prompt the contact to check in with the user but is designed to respect the user’s privacy by not disclosing detailed conversation content.

[Image: OpenAI's Trusted Contact feature. Image Credits: OpenAI]

Building on Existing Safeguards: Parental Controls and Alerts

The Trusted Contact feature builds on the parental controls introduced last September, which allow parents to monitor their teens' accounts and receive alerts if their child appears to be at "serious safety risk." ChatGPT also sends automated notifications suggesting professional help when a conversation indicates self-harm.

Optional Engagement for Enhanced Safety

Importantly, the Trusted Contact feature is optional. Users can maintain multiple ChatGPT accounts, and both this feature and the parental controls are designed to give users flexibility in how they engage with these safeguards.

A Commitment to Improve AI Responsiveness to Distress

OpenAI emphasizes that the Trusted Contact feature is part of a broader initiative to develop AI systems that assist individuals in challenging times. The company pledges to collaborate with clinicians, researchers, and policymakers to enhance how AI can effectively respond in moments of distress.

Frequently Asked Questions About the "Trusted Contact" Safeguard

FAQ 1: What is the "Trusted Contact" safeguard?

Answer: The "Trusted Contact" safeguard is a new feature introduced by OpenAI to enhance user safety. It allows users to designate a trusted individual who can be contacted in situations indicating potential self-harm, ensuring that supportive help is available when needed.


FAQ 2: How do I designate a Trusted Contact?

Answer: Users can designate a Trusted Contact through the settings menu of their OpenAI account. The process typically involves entering the contact’s information and confirming their permission to be designated as a trusted person for emergencies.


FAQ 3: What happens when a Trusted Contact is alerted?

Answer: When a user’s account indicates a potential risk of self-harm, the Trusted Contact will receive a notification. This message will inform them of the situation, allowing them to reach out and offer support or assistance.


FAQ 4: Can I change or remove my Trusted Contact later?

Answer: Yes, users can change or remove their Trusted Contact at any time via the account settings. It’s important to keep this information up to date to ensure effective communication in critical situations.


FAQ 5: What safeguards are in place to protect user privacy with this feature?

Answer: OpenAI prioritizes user privacy and confidentiality. Notifications sent to Trusted Contacts are designed to protect the identity of the user while conveying important information regarding safety. Detailed information about the user’s situation will not be disclosed without consent.


Training AI Agents in Controlled Environments Enhances Performance in Chaotic Situations

The Surprising Revelation in AI Development That Could Shape the Future

Most AI training follows a simple principle: match your training conditions to the real world. But new research from MIT is challenging this fundamental assumption in AI development.

Their finding? AI systems often perform better in unpredictable situations when they are trained in clean, simple environments, not in the complex conditions they will face in deployment. The discovery is not just surprising; it could reshape how we think about building more capable AI systems.

The research team found this pattern while working with classic games like Pac-Man and Pong. When they trained an AI in a predictable version of the game and then tested it in an unpredictable version, it consistently outperformed AIs trained directly in unpredictable conditions.

Outside of these gaming scenarios, the discovery has implications for the future of AI development for real-world applications, from robotics to complex decision-making systems.

The Breakthrough in AI Training Paradigms

Until now, the standard approach to AI training followed clear logic: if you want an AI to work in complex conditions, train it in those same conditions.

This led to:

  • Training environments designed to match real-world complexity
  • Testing across multiple challenging scenarios
  • Heavy investment in creating realistic training conditions

But there is a fundamental problem with this approach: when you train AI systems in noisy, unpredictable conditions from the start, they struggle to learn core patterns. The complexity of the environment interferes with their ability to grasp fundamental principles.

This creates several key challenges:

  • Training becomes significantly less efficient
  • Systems have trouble identifying essential patterns
  • Performance often falls short of expectations
  • Resource requirements increase dramatically

The research team's discovery suggests a better approach: start with simplified environments that let AI systems master core concepts before introducing complexity. This mirrors effective teaching, where foundational skills form the basis for handling more complex situations.

The Groundbreaking Indoor-Training Effect

Let us break down what MIT researchers actually found.

The team designed two types of AI agents for their experiments:

  1. Learnability Agents: These were trained and tested in the same noisy environment
  2. Generalization Agents: These were trained in clean environments, then tested in noisy ones

To understand how these agents learned, the team used a framework called Markov Decision Processes (MDPs).
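In that framework, an MDP is just a set of states, actions, transition probabilities, and rewards. As a refresher, here is a minimal value-iteration sketch over a small hand-made MDP; the three-state transition table is invented purely for illustration and is not taken from the paper:

```python
# Hypothetical 3-state MDP: P[s][a] is a list of (probability, next_state, reward).
# Action 1 tries to advance toward the goal (state 2) but "slips" 20% of the time.
P = {
    0: {0: [(1.0, 0, 0.0)], 1: [(0.8, 1, 0.0), (0.2, 0, 0.0)]},
    1: {0: [(1.0, 0, 0.0)], 1: [(0.8, 2, 1.0), (0.2, 1, 0.0)]},
    2: {0: [(1.0, 2, 0.0)], 1: [(1.0, 2, 0.0)]},  # absorbing goal state
}

def value_iteration(P, gamma=0.9, tol=1e-8):
    """Compute optimal state values V*(s) by repeated Bellman backups."""
    V = {s: 0.0 for s in P}
    while True:
        delta = 0.0
        for s in P:
            best = max(
                sum(p * (r + gamma * V[s2]) for p, s2, r in P[s][a])
                for a in P[s]
            )
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < tol:
            return V

V = value_iteration(P)
```

Dialing the slip probabilities up or down in `P` is the knob that distinguishes the two agent types: a Learnability Agent learns and is tested with the same table, while a Generalization Agent learns with the clean table and is tested with a noisier one.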

Frequently Asked Questions

  1. How does training AI agents in clean environments help them excel in chaos?
    Training AI agents in clean environments allows them to learn and build a solid foundation, making them better equipped to handle chaotic and unpredictable situations. By starting with a stable and controlled environment, AI agents can develop robust decision-making skills that can be applied in more complex scenarios.

  2. Can AI agents trained in clean environments effectively adapt to chaotic situations?
    Yes, AI agents that have been trained in clean environments have a strong foundation of knowledge and skills that can help them quickly adapt to chaotic situations. Their training helps them recognize patterns, make quick decisions, and maintain stability in turbulent environments.

  3. How does training in clean environments impact an AI agent’s performance in high-pressure situations?
    Training in clean environments helps AI agents learn stable, reliable policies. By first mastering simple, controlled environments, agents are better able to handle noisy conditions and make effective decisions when faced with chaos.

  4. Does training in clean environments limit an AI agent’s ability to handle real-world chaos?
    No, training in clean environments actually enhances an AI agent’s ability to thrive in real-world chaos. By providing a solid foundation and experience with controlled environments, AI agents are better prepared to tackle unpredictable situations and make informed decisions in complex and rapidly changing scenarios.

  5. How can businesses benefit from using AI agents trained in clean environments?
    Businesses can benefit from using AI agents trained in clean environments by improving their overall performance and efficiency. These agents are better equipped to handle high-pressure situations, make quick decisions, and adapt to changing circumstances, ultimately leading to more successful outcomes and higher productivity for the organization.
