OpenAI has been under intense legal and public pressure to improve how its flagship AI product, ChatGPT, responds when a user expresses suicidal feelings.
On Thursday, the company launched a feature called Trusted Contact, which lets users designate an adult to be notified if they discuss self-harm or suicide in a serious or concerning way.
The optional feature only encourages the trusted contact to reach out to the user. It does not share chat transcripts or conversation details.
“Our goal is to ensure that AI systems do not exist in isolation,” the company said in a blog post announcing the feature. “Instead they should help connect people to the real-world care, relationships, and resources that matter most.”
OpenAI has been sued multiple times for wrongful death by family members of ChatGPT users who died by suicide after ChatGPT allegedly coached them to end their lives or didn’t respond appropriately to their discussions of psychological distress. OpenAI has denied the allegations in the first of those lawsuits.
A designated trusted contact receives an invitation like this from ChatGPT.
Credit: Courtesy OpenAI
The state of Florida is also investigating ChatGPT’s links to “criminal behavior,” including the “encouragement of suicide and self-harm.”
Trusted Contact was developed with feedback from experts, including OpenAI’s Expert Council on Well-Being and AI and the American Psychological Association.
“Helping people identify a trusted person in advance, while preserving their choice and autonomy, can make it easier to reach out to real-world support when it matters most,” Dr. Arthur Evans, chief executive officer of the American Psychological Association, said in a statement.
How ChatGPT’s Trusted Contact works
- Users can start the Trusted Contact process from their ChatGPT settings.
- One adult, age 18 or older, can be added via the Trusted Contact form.
- The contact doesn't need a ChatGPT account.
- The designated contact receives an invitation from OpenAI explaining their role as a trusted contact. They must accept the invite within one week to activate the feature, and they can share a phone number or email address as their preferred contact method. If the person declines, the user can add a different adult.
- When OpenAI's automated monitoring systems detect discussion of self-harm or signs of a serious safety issue, ChatGPT alerts the user that the company may notify their trusted contact. The prompt encourages the user to reach out to the contact themselves and provides conversation starters.
- The safety issue is then reviewed by what OpenAI describes as a "small team of specially trained people." If the human reviewers confirm a possible serious safety concern, OpenAI sends the trusted contact a brief email or text message. If the contact has a ChatGPT account, they also receive an in-app notification.
- The notification doesn't include details of the user's conversation. Instead, it says the user mentioned self-harm and encourages the contact to reach out. The message includes a link to guidance for having sensitive conversations.
- Users can remove or edit their Trusted Contact at any time, and the contact can remove themselves via ChatGPT's help center.
Disclosure: Ziff Davis, Mashable’s parent company, in April 2025 filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.
If you’re feeling suicidal or experiencing a mental health crisis, please talk to somebody. You can call or text the 988 Suicide & Crisis Lifeline at 988, or chat at 988lifeline.org. You can reach the Trans Lifeline by calling 877-565-8860 or the Trevor Project at 866-488-7386. Text “START” to Crisis Text Line at 741-741. Contact the NAMI HelpLine at 1-800-950-NAMI, Monday through Friday from 10:00 a.m. – 10:00 p.m. ET, or email [email protected]. If you’d rather not use the phone, consider using the 988 Suicide and Crisis Lifeline Chat. Here is a list of international resources.
