OpenAI adds 'Trusted Contact' safety feature to ChatGPT
OpenAI is introducing a new safety feature in ChatGPT that lets users designate a trusted person to be alerted if the AI detects potential self-harm. This aims to provide an extra layer of support for users in distress.

OpenAI has rolled out a new 'Trusted Contact' feature in ChatGPT designed to help users who may be experiencing thoughts of self-harm. The feature lets a user add a trusted person, such as a friend or family member, who is notified if the AI detects concerning language in a conversation. The notification is intended to prompt real-world support when it is needed most.
The update reflects OpenAI's ongoing efforts to make AI interactions safer, particularly for vulnerable users. AI cannot replace professional help, but alerting a trusted person could make a critical difference. Think of it as a digital safety net: if you are struggling, someone you care about can step in.
If you or someone you know might benefit from the feature, it can be enabled in ChatGPT's settings: navigate to the safety options and add a trusted contact. It is a small step that could provide peace of mind for you and your loved ones alike.