OpenAI Develops ChatGPT Feature to Alert Trusted Contacts During Mental Health Crises

OpenAI is developing a ChatGPT feature allowing adult users to nominate trusted contacts who may be alerted if the AI detects signs of emotional distress or a mental health crisis. The system, still in development, raises privacy and safety concerns but aims to provide support in critical situations.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves the use of an AI system (ChatGPT) in a new safety-related application that could plausibly lead to harm, such as privacy violations or safety concerns, if the system misidentifies distress or improperly shares sensitive information. Since the feature is not yet deployed and no actual harm has been reported, this constitutes a plausible future risk rather than a realized incident. Therefore, it fits the definition of an AI Hazard rather than an AI Incident or Complementary Information.[AI generated]
AI principles
Privacy & data governance
Safety

Industries
Consumer services

Affected stakeholders
Consumers

Harm types
Human or fundamental rights
Psychological

Severity
AI hazard

AI system task
Interaction support/chatbots
Event/anomaly detection


Articles about this incident or hazard

ChatGPT Could Soon Notify Friends Or Family During Mental Health Crises: All You Need To Know

2026-04-14
TimesNow

'Distressed?': ChatGPT may soon alert your trusted contacts if you show signs of mental distress

2026-04-15
Indian Startup News
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) analyzing user input to detect mental distress and potentially alert trusted contacts, which is a use of AI with safety implications. However, since the feature is still in development and no actual harm or incident has been reported, it does not meet the criteria for an AI Incident. The concerns about privacy and consent represent plausible future harms that could arise from the deployment of this feature. Thus, it fits the definition of an AI Hazard, as the AI system's use could plausibly lead to harm in the future, but no harm has yet occurred.

ChatGPT May Soon Alert Your Trusted Contacts During Mental Health Crisis - The News Chronicle

2026-04-14
The News Chronicle
Why's our monitor labelling this an incident or hazard?
The event involves the development and potential future use of an AI system (ChatGPT with crisis detection) that could plausibly prevent harm or, conversely, produce false alerts and privacy concerns. However, since the feature is still in development and not deployed, no direct or indirect harm has occurred. The article discusses potential risks and benefits, making this a plausible future risk scenario rather than a realized incident. Therefore, it fits the definition of an AI Hazard rather than an AI Incident or Complementary Information.

OpenAI trusted contact feature: Will ChatGPT alert your family if it detects mental distress?

2026-04-14
Techlusive
Why's our monitor labelling this an incident or hazard?
The event involves the development and intended use of an AI system (ChatGPT) whose effects depend on its implementation and user adoption. Since the feature is not yet active and no harm or incident has occurred, it does not qualify as an AI Incident. Instead, it is an AI Hazard: the system's use could plausibly lead to harm (e.g., privacy breaches, misuse of sensitive data, or failure to alert in critical situations), even though it could also yield benefits (improved mental health support). The article primarily discusses potential future implications and concerns rather than reporting an actual incident or harm.

OpenAI tests feature to alert trusted contacts during user mental health crises

2026-04-14
storyboard18.com
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (ChatGPT) in a new safety feature designed to detect mental health crises and notify trusted contacts. This could plausibly lead to harm if the feature is misused or produces false alerts (e.g., privacy violations, distress from false notifications), but the article does not report any actual harm or incidents resulting from it, so it does not meet the criteria for an AI Incident. Because the feature is still under development and the article mainly discusses its development and the considerations around it, it is best classified as Complementary Information: it provides context and updates on AI safety efforts without reporting a new incident or hazard.