ChatGPT Adds Trusted Contact for Mental Health Safety
OpenAI has introduced a Trusted Contact feature in ChatGPT to alert a chosen friend or family member if the AI detects serious self-harm concerns. This optional safety tool aims to provide an extra layer of support for users.

OpenAI has launched a new safety feature in ChatGPT called Trusted Contact. The opt-in tool lets users designate a trusted friend or family member who will be notified if ChatGPT detects serious self-harm concerns in a conversation, adding a layer of real-world support for users who may be struggling with their mental health.
The update matters because it bridges the gap between AI assistance and real-world support. It functions as a safety net: if a user appears to be in crisis, someone they trust can be alerted automatically. It's a proactive step toward making digital interactions safer, especially for people without immediate access to help.
If you're concerned about mental health safety, you can enable this feature today: go to your ChatGPT settings and add a trusted contact. That person is notified only if the AI identifies a serious self-harm risk, preserving the privacy of your ordinary conversations.