OpenAI Faces Backlash Over ChatGPT 'Adult Mode' Amid Mental Health and Safety Concerns


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

OpenAI's planned 'Adult Mode' for ChatGPT sparked internal debate due to risks of unhealthy emotional attachment and exposure to minors. Advisors warned of potential harm, including suicide linked to AI chatbots. Age-verification failures could let millions of minors access explicit content, prompting OpenAI to delay the feature.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves an AI system (ChatGPT) and its planned feature (adult mode) that could plausibly lead to significant psychological harm, including emotional dependency and suicidal risk, especially among minors who might access the content due to imperfect age verification. While a past AI-related suicide is mentioned, it involved a different AI chatbot (Character.AI), and the adult mode itself has not yet been released or caused harm. Thus, the article describes a credible potential risk (hazard) rather than a realized incident. The concerns about misuse, user harm, and insufficient safeguards align with the definition of an AI Hazard.[AI generated]
AI principles
Safety, Human wellbeing

Industries
Media, social platforms, and marketing

Affected stakeholders
Children, Consumers

Harm types
Physical (death), Psychological

Severity
AI hazard

AI system task
Interaction support/chatbots, Content generation


Articles about this incident or hazard


As OpenAI keeps pushing 'adult mode' launch, warnings of 'risk of becoming a suicide coach' | 연합뉴스

2026-03-16
연합뉴스

OpenAI sticks to 'adult mode' launch plans... ignoring warnings of 'risk of becoming a suicide coach'

2026-03-16
연합뉴스TV
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (OpenAI's chatbot) whose development and intended use (adult mode allowing sexual conversations) is linked to psychological harms, including emotional dependence and risk of suicide, as evidenced by a prior fatality connected to a similar AI chatbot. The article reports direct concerns and warnings from experts and internal advisors about these harms, indicating realized or plausible harm. The involvement of the AI system in these harms is direct or indirect through its outputs and user interactions. Hence, this qualifies as an AI Incident due to harm to persons and potential violation of rights (protection of minors).

"Allowing sexual conversations?"... Controversy over ChatGPT's 'adult mode' push

2026-03-16
아시아경제
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT and similar chatbots) used for generating conversational content, including sexual and emotionally manipulative dialogue. The AI's use has directly led to harm: a minor's suicide linked to interactions with the AI, which constitutes injury to health and harm to a person. Additionally, the failure of the age verification system (12% error rate) plausibly exposes more minors to harmful content, further supporting the classification as an AI Incident. The article reports actual harm, not just potential risk, and the AI's role is pivotal in the harm caused. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information.

OpenAI's adult mode limited to 'racy conversations,' not pornography

2026-03-16
디지털투데이 (DigitalToday)
Why's our monitor labelling this an incident or hazard?
The event involves the development and planned use of an AI system (ChatGPT with adult mode) that could plausibly lead to harm, such as exposure of minors to inappropriate content and emotional dependency risks. However, no actual harm has been reported yet, and the adult mode is still in planning/delayed release stages. Therefore, this constitutes an AI Hazard, as the AI system's use could plausibly lead to incidents involving harm to minors or other harms if not properly managed.

OpenAI weighs introducing adult mode... the challenge of exposure to minors remains - 동행미디어 시대

2026-03-17
동행미디어 시대
Why's our monitor labelling this an incident or hazard?
An AI system (ChatGPT) is explicitly involved, with a planned new feature (adult mode) that would allow sexual conversations. The article describes concerns about plausible future harms, including minors being exposed to sexual content due to imperfect age verification and emotional harm to users. Since no actual harm has been reported yet and the feature's release has been delayed to address these issues, this qualifies as an AI Hazard rather than an AI Incident. The article focuses on the potential risks and challenges rather than realized harm or incidents.

ChatGPT weighs 'adult mode'... safety concerns for 100 million underage users

2026-03-17
와이드경제
Why's our monitor labelling this an incident or hazard?
The event involves the use and planned development of an AI system (ChatGPT) with a new feature ('adult mode') that could allow sexual content. The article highlights realized harms (a minor's suicide linked to AI chatbot interaction) and plausible future harms (minors accessing sexual content, emotional dependence, social harm). These harms fall under injury to health and harm to communities. Therefore, this qualifies as an AI Incident due to direct and indirect harm caused and the ongoing risk associated with the AI system's use and development.

OpenAI's Bid to Allow X-rated Talk Is Freaking Out Its Own Advisers

2026-03-16
Hindustan Times
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (ChatGPT) and its planned new feature (adult mode) that would allow erotic conversations. The concerns raised by advisers and staff about emotional dependence, minors accessing adult content, and potential exposure to harmful content indicate plausible future harms. No direct or indirect harm has yet occurred according to the article, but the risks are credible and significant. The company's efforts to develop age-prediction and content moderation systems are ongoing, and the launch has been delayed due to these concerns. Thus, this situation fits the definition of an AI Hazard, as it plausibly could lead to an AI Incident if the risks materialize, but no incident has yet occurred.

'Adult Mode' On ChatGPT Delayed Amid Concerns Over Suicide Cases Linked To AI Chats: Report

2026-03-17
NDTV
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) whose use has been linked to significant harms, including mental health risks and suicide cases, as evidenced by multiple lawsuits and internal concerns. The AI system's development and use have directly or indirectly led to harm to persons' health (mental health and suicide), fulfilling the criteria for an AI Incident. The article details realized harms rather than just potential risks, and the legal cases underscore the seriousness of the impact. Hence, the classification as AI Incident is appropriate.

ChatGPT 'Adult Mode' Delayed: OpenAI faces rising safety concerns over AI misuse and mental health risks

2026-03-17
India.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) and its use in adult conversations, which is delayed due to safety concerns about mental health harms and misuse. The article references existing legal complaints alleging harm from AI chatbots, indicating realized harm linked to AI use. The delay and improved moderation efforts are responses to these harms. Since harm has already occurred from AI chatbots in related contexts and the article discusses ongoing risks and mitigation, this qualifies as an AI Incident due to the direct or indirect harm caused by AI conversational systems. The main focus is on harm and safety concerns, not just potential future harm or general AI news, so it is not a hazard or complementary information.

'Sexy suicide coach': Inside the blow-up over OpenAI's plan for erotic ChatGPT chats

2026-03-16
MoneyControl
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (ChatGPT) and its planned use for erotic conversations. The age-prediction system's misclassification of minors as adults (12% error rate) directly raises the risk of minors accessing harmful content, which is a plausible pathway to harm. Additionally, internal expert warnings about emotional dependence and mental health risks further support the potential for harm. Since the feature has not yet been launched and no actual harm is reported, but credible risks are identified, the event fits the definition of an AI Hazard rather than an AI Incident. The article also discusses internal debates and delays, but these are part of the hazard context rather than complementary information or unrelated news.

OpenAI's proposed 'adult mode' sparks internal safety concerns over explicit AI chats

2026-03-16
MoneyControl
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) and its proposed new feature that would allow explicit conversations. The concerns raised by advisers and safety experts highlight potential risks to mental health and access by minors, which could plausibly lead to harm if the feature is deployed without effective safeguards. However, since the feature has not yet been launched and no actual harm has been reported, this situation constitutes an AI Hazard rather than an AI Incident. The article focuses on the potential for harm and the ongoing internal debate and technical challenges, fitting the definition of an AI Hazard.

Disturbing warning ChatGPT could turn into 'sexy suicide coach' as...

2026-03-16
New York Post
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (ChatGPT) and its use, specifically the planned introduction of an erotic chatbot feature. The harms described include mental health injury and suicide, which fall under harm to the health of persons. The involvement of the AI system is direct, as the chatbot's interactions are alleged to have contributed to these harms. The presence of lawsuits and internal warnings further confirms the realized harm. Therefore, this event qualifies as an AI Incident due to the direct or indirect harm caused by the AI system's use and outputs.

OpenAI delays Adult mode release after suicide cases rise due to ChatGPT

2026-03-16
India Today
Why's our monitor labelling this an incident or hazard?
The article explicitly links ChatGPT, an AI system, to several suicide cases and lawsuits, indicating direct or indirect harm to users' health. The AI system's use in sensitive conversations has contributed to these harms. The delay of the adult mode feature is a response to these harms and concerns, but the primary focus is on the realized harms and legal consequences stemming from the AI's use. Therefore, this event meets the criteria for an AI Incident rather than a hazard or complementary information.

OpenAI's Bid to Allow X-rated Talk Is Freaking Out Its Own Advisers

2026-03-16
The Wall Street Journal
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (ChatGPT and its age-prediction system) and discusses the planned use of AI for adult-themed conversations. While it references past harms from AI chatbots, the new adult mode has not yet been launched, so no direct harm has occurred from it. The article details credible risks such as minors accessing adult content due to age-prediction errors, emotional dependence, and exposure to harmful content, which could plausibly lead to AI Incidents if the adult mode is released without adequate safeguards. Therefore, the event represents a credible potential for harm stemming from AI use, fitting the definition of an AI Hazard rather than an AI Incident or Complementary Information. It is not unrelated because it clearly concerns AI systems and their societal impact.

OpenAI being warned allowing X-rated chat as it may create a 'sexy suicide coach'

2026-03-16
The Independent
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (ChatGPT) and discusses the planned use of an adult mode that could generate erotica and engage users in adult-themed conversations. The concerns raised by advisors about emotional dependence, risk to minors, and the possibility of the AI acting as a "sexy suicide coach" indicate plausible future harms to health and rights. Since the adult mode has not yet been launched and no direct harm from it has been reported, this is a credible potential risk rather than an incident. The article does not primarily focus on a response to a past incident or broader governance but on the risk of harm from the AI system's planned use. Hence, the classification as an AI Hazard is appropriate.

OpenAI to Launch ChatGPT 'Adult Mode' Despite Warnings From Its Own Advisers

2026-03-16
CNET
Why's our monitor labelling this an incident or hazard?
The event involves the use and deployment of an AI system (ChatGPT) with a new feature (adult mode) that could plausibly lead to harm, such as minors accessing adult content and emotional harm to users. The article focuses on warnings and concerns from advisers and internal debates, with no evidence of realized harm so far. Therefore, this qualifies as an AI Hazard because the AI system's development and use could plausibly lead to an AI Incident, but no incident has yet occurred.

OpenAI's ChatGPT adult mode sparks internal debate over safety, ethics and AI relationships

2026-03-16
Firstpost
Why's our monitor labelling this an incident or hazard?
The article involves AI systems (ChatGPT and Grok) and discusses their development and use, focusing on potential harms such as emotional dependence, exposure of minors to explicit content, and misuse of AI-generated images. However, it does not report a specific AI Incident directly caused by OpenAI's adult mode, as it is still under development and the harms are potential rather than realized. The Grok incident is mentioned as background context and regulatory response, not as the primary subject. The main narrative centers on internal debates, safety concerns, and governance challenges, which fits the definition of Complementary Information. Thus, the event is best classified as Complementary Information.

ChatGPT may soon become "sexy suicide coach," OpenAI advisor reportedly warned

2026-03-16
Ars Technica
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (ChatGPT) and its development and use of an "adult mode" feature. It documents realized harms, including suicides linked to AI chatbot interactions, which constitute injury or harm to health (a). It also details failures and risks in age verification and content filtering that have already allowed minors to access inappropriate content, further supporting the presence of harm. The AI system's outputs have directly or indirectly led to these harms, fulfilling the criteria for an AI Incident. Although some concerns are about potential future harms, the presence of actual suicides and harmful emotional dependence confirms realized harm. Therefore, this event is best classified as an AI Incident.

ChatGPT Adult Mode Postponed After Safety Experts Raise Teen Access Concerns - Blockonomi

2026-03-16
Blockonomi
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (ChatGPT) and discusses the development and intended use of a new feature that could lead to psychological harm and unsafe access by minors. Although it references past incidents of harm related to AI chatbots, the specific "adult mode" feature has not been deployed, and no new harm from it has occurred yet. The postponement is a response to credible risks identified by safety experts, including the failure of age-verification technology and content filtering. Thus, the event describes a credible potential for harm (psychological harm, underage exposure to adult content) that could plausibly lead to an AI Incident if the feature were launched without adequate safeguards. This fits the definition of an AI Hazard rather than an AI Incident or Complementary Information.

2021 to 2026: How OpenAI went from banning AI erotica to building it

2026-03-17
Digit
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses an AI system (OpenAI's language models) generating harmful sexual content autonomously and unprompted, which caused real harm by exposing minors to inappropriate content and psychological risks. The misclassification of minors as adults by the AI's age prediction system led to millions of children being exposed to adult content, a direct harm to health and wellbeing. The company's decision to override safety measures and proceed with adult content despite known risks further confirms the direct involvement of AI use leading to harm. This fits the definition of an AI Incident as the AI system's use directly led to harm to persons and violation of safety obligations.

'Sexy suicide coach': ChatGPT adult mode triggers alarm inside OpenAI

2026-03-16
NewsBytes
Why's our monitor labelling this an incident or hazard?
The article indicates that the AI system (ChatGPT) was to be used in a way that could plausibly lead to harm (mental health and child safety risks) if the 'adult mode' were launched. Although no harm has yet occurred, the credible warnings and internal backlash highlight a plausible risk of AI-related harm. Therefore, this event qualifies as an AI Hazard rather than an Incident, as the harm is potential and not realized.

OpenAI Adult Mode Postponed for ChatGPT As Internal Safety Advisers Raise Alarms Over Erotica Feature | 📲 LatestLY

2026-03-16
LatestLY
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (ChatGPT's generative AI for erotica and AI-based age prediction) whose deployment is postponed due to safety concerns. The concerns relate to potential psychological harm to users and the risk of minors accessing adult content, which could constitute harm to health and rights. Since the harm is not realized but plausibly could occur if the feature were launched without adequate safeguards, this fits the definition of an AI Hazard. The article does not report any actual harm or incident but focuses on the potential risks and the company's response to them.

OpenAI's X-Rated Chatbot Plan Triggers Internal Revolt

2026-03-16
Gadget Review
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (ChatGPT) and its planned feature to generate adult content, which is an AI system use case. The harms described include psychological harm, suicide risk, and exposure of minors to explicit content, which constitute injury or harm to persons (harm category a). The internal advisory council's warnings and the firing of a safety executive opposing the feature indicate the AI system's development and use are central to the harms. The misclassification of minors by the AI's age verification algorithm directly leads to potential harm. The article also references real cases of suicide linked to AI chatbots, confirming that harm has occurred. These factors meet the criteria for an AI Incident, as the AI system's development and use have directly and indirectly led to significant harm.

OpenAI Pushes Ahead With ChatGPT Erotica Mode Despite 'Sexy Suicide Coach' Warning: WSJ - Decrypt

2026-03-16
Decrypt
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (ChatGPT) planned to be used for erotic conversations, which raises serious mental health concerns and risks of harm, including suicide, as noted by the Expert Council. The AI's age verification system is imperfect, increasing the risk of minors accessing adult content. While no new direct harm from this specific launch is reported yet, the potential for harm is credible and significant. The internal tensions and delays reflect recognition of these risks. Since the harm is plausible but not yet realized in this event, it fits the definition of an AI Hazard rather than an AI Incident. The article is not merely complementary information because it focuses on the risk and decision to proceed despite warnings, nor is it unrelated.

OpenAI Will Launch a 'Naughty' Version of ChatGPT for Adults Despite Opposition, Says Report

2026-03-17
Tech Times
Why's our monitor labelling this an incident or hazard?
The event involves the development and intended use of an AI system (ChatGPT) with a new adult mode that could plausibly lead to harms such as emotional dependence, exposure of minors to inappropriate content, and psychological harm. Since no actual harm has been reported and the concerns are about potential future risks, this qualifies as an AI Hazard. The article primarily discusses the potential risks and expert warnings rather than a realized incident or harm, so it is not an AI Incident. It is also not merely complementary information or unrelated, as the AI system and plausible harms are central to the report.

'Sexy Suicide Coach': OpenAI 'Adult Mode' Plans Spur Internal Debate; Concerns Over Mental Health, Access To Minors

2026-03-16
NDTV Profit
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) with a new 'Adult Mode' feature that enables sexually explicit interactions. The concerns raised by experts and the cited case of a user harmed due to unhealthy attachment to the AI demonstrate direct or indirect harm to mental health, fulfilling the criteria for an AI Incident. Additionally, the safety mechanism's 12% error rate in misclassifying minors as adults indicates a malfunction or failure in the AI system's protective measures, further supporting the classification as an AI Incident. The presence of realized harm and ongoing risk to vulnerable populations (minors and emotionally unstable users) confirms this classification over AI Hazard or Complementary Information.

OpenAI Restricts ChatGPT Adult Mode to Text-Only Erotica After Age-Check Failures

2026-03-16
Techloy
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the AI system (ChatGPT's adult mode) and its age-verification system failing to identify minors 12% of the time, which could allow millions of underage users to access sexual content. This is a clear AI system malfunction with a plausible risk of harm to minors, a protected group, through exposure to inappropriate content. Although the harm is not confirmed as realized, the risk is credible and significant. OpenAI's response to restrict the feature to text-only erotica and delay the launch further supports the recognition of this hazard. Since no actual harm is reported yet, and the focus is on potential risk and mitigation, the classification as an AI Hazard is appropriate.

OpenAI's own wellbeing advisors warned against erotic mode, called it a "sexy suicide coach"

2026-03-16
The Decoder
Why's our monitor labelling this an incident or hazard?
The article details credible risks associated with the planned AI feature, including harm to minors and emotional health risks, but these harms have not materialized since the feature launch was postponed. The AI system's development and intended use could plausibly lead to an AI Incident if launched prematurely. Therefore, this event qualifies as an AI Hazard because it concerns plausible future harm from the AI system's use, not an actual incident or complementary information about responses to a past incident.

ChatGPT 'Adult mode' delayed as OpenAI faces safety and mental health concerns

2026-03-17
The Financial Express
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions ChatGPT, an AI system, and discusses harms linked to its use, including mental health risks and a suicide case. These constitute AI Incidents in the background. However, the main focus is on OpenAI's decision to delay the Adult mode feature rollout due to these concerns and lawsuits, which is a governance and safety response. No new harm or plausible future harm from the delayed feature itself is described. Thus, the event is Complementary Information, updating on responses to prior AI Incidents rather than reporting a new Incident or Hazard.

OpenAI reveals how ChatGPT's controversial adult mode will work

2026-03-17
La Razón
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) and its development and intended use (adult mode). However, the article does not describe any realized harm or incidents caused by the AI system. Instead, it focuses on potential risks, safety concerns, and delays to prevent harm. Therefore, this qualifies as an AI Hazard because the AI system's use could plausibly lead to harm (e.g., minors accessing adult content, user dependency), but no harm has yet occurred. It is not Complementary Information because the article is not about responses to a past incident but about the planned feature and its risks. It is not Unrelated because it clearly involves an AI system and its potential risks.

ChatGPT: when will ads come to the chatbot, and how will they appear?

2026-03-18
Todo Noticias
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) and its planned use of advertisements integrated alongside chatbot responses. However, the article does not describe any realized harm or incident resulting from this change. Instead, it reports on the policy update and the anticipated introduction of ads, including privacy assurances and user segmentation. This constitutes a development in the AI ecosystem and governance responses rather than an incident or hazard. Therefore, it fits the definition of Complementary Information, as it provides context and updates about AI system use and governance without describing a specific harm or plausible harm event.

January meeting that made Sam Altman 'pause' OpenAI's 'adult mode' plan; as employees warned ... - The Times of India

2026-03-17
The Times of India
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (ChatGPT) and discusses the planned use of an AI-generated adult content feature. The concerns raised by internal advisers about emotional dependence, compulsive use, and minors accessing adult content represent plausible risks of harm. Since the feature rollout has been delayed and no actual harm has been reported, the event does not meet the criteria for an AI Incident. Instead, it fits the definition of an AI Hazard because the AI system's intended use could plausibly lead to harm in the future. The article focuses on internal deliberations and safety concerns rather than reporting realized harm or incidents.

OpenAI's X-rated adult mode delayed over safety concerns, report claims

2026-03-17
Mashable
Why's our monitor labelling this an incident or hazard?
The event involves the development and intended use of an AI system with explicit content capabilities and age verification technology. The high error rate in age classification and concerns about emotional dependence indicate credible risks of harm to minors and users, including exposure to developmentally inappropriate content and psychological harm. Since the adult mode has not yet been launched, no actual harm has occurred, but the plausible future harm is significant and directly linked to the AI system's design and deployment plans. Therefore, this qualifies as an AI Hazard rather than an AI Incident or Complementary Information.

ChatGPT's upcoming erotic chat mode risks exposing millions of kids to adult content

2026-03-17
Digital Trends
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) whose use (introduction of adult mode) could directly lead to harm, specifically exposure of minors to adult content and emotional harm from over-reliance on the AI. The article reports that the age verification system misclassifies minors as adults about 12% of the time, implying realized or imminent harm to children. This fits the definition of an AI Incident because the AI system's use has directly or indirectly led to harm or risk of harm to a vulnerable group (minors), including potential violations of protections for children and harm to their health and well-being. Although the launch is delayed, the existing misclassification and potential exposure already constitute realized harm or at least a direct risk that is materializing. Therefore, this is best classified as an AI Incident rather than a hazard or complementary information.

OpenAI's own advisers call ChatGPT erotica a "sexy suicide coach"

2026-03-17
Boing Boing
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (ChatGPT) and its planned sexually explicit chat feature, which is intended to be deployed despite expert warnings. The warnings highlight plausible future harms, including mental health risks and inappropriate access by minors due to imperfect age verification. No actual harm has been reported yet, but the credible risk of harm to vulnerable users, especially minors, is significant. The firing of a safety executive opposing the release underscores internal conflict about safety concerns. Since harm is not yet realized but plausibly could occur, this fits the definition of an AI Hazard rather than an AI Incident or Complementary Information.

OpenAI wants a sexy ChatGPT

2026-03-18
Morning Brew
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) explicitly used in a new "adult mode" feature enabling sexualized conversations. The article references documented harm (a child's suicide linked to sexualized AI chats) and credible concerns about emotional harm and access by minors, indicating direct or indirect harm to health and well-being. The AI system's use is central to these harms, fulfilling the criteria for an AI Incident. The planned launch and existing issues with age verification further support the classification as an incident rather than a mere hazard or complementary information.

OpenAI Advisers Alarmed as ChatGPT May Soon Let Users Have Erotic Conversations With 'Adult Mode'

2026-03-17
International Business Times UK
Why's our monitor labelling this an incident or hazard?
The event involves the use and potential development of an AI system (ChatGPT) with new capabilities that could plausibly lead to harm, particularly emotional harm to vulnerable users and exposure of minors to inappropriate content. However, the article does not report any realized harm or incidents but rather internal warnings and ethical concerns about possible future harms. Therefore, this qualifies as an AI Hazard, as the AI system's development and use could plausibly lead to an AI Incident involving harm to persons or communities.

OpenAI on the ropes: the ChatGPT "Adult Mode" that is scandalizing its own advisers

2026-03-17
FayerWayer
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) and concerns the potential use (or misuse) of a new mode that could lead to significant harms such as digital harassment, hate speech, and security breaches. Since these harms have not yet materialized but are plausibly foreseeable if the "Adult Mode" is implemented without safeguards, this constitutes an AI Hazard. The article does not report any realized harm or incident but rather a credible risk and internal conflict about future harm potential, fitting the definition of an AI Hazard rather than an Incident or Complementary Information.

OpenAI's Own Advisers Tried to Kill ChatGPT 'Adult Mode' -- the Company Ignored Them

2026-03-17
Technology Org
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (ChatGPT) and its new "adult mode" feature, an AI-powered text-generation capability. The harms include emotional dependence, exposure of minors to sexual content, and potential encouragement of self-harm or violence, all of which are direct or indirect harms to persons and communities. The failure of age-verification and content-filtering systems constitutes both malfunction and misuse of the AI system. The harms are realized, not merely potential, as evidenced by reported cases and internal concerns. Hence, this meets the criteria for an AI Incident rather than a hazard or complementary information.

OpenAI's Wellbeing Advisory Board Unanimously Opposes Adult ChatGPT Mode

2026-03-17
WinBuzzer
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (ChatGPT) and its proposed adult mode, which would generate erotic AI chat content. The advisory board's unanimous opposition is based on credible risks of harm, including emotional harm, suicide, and exposure of minors due to imperfect age detection. Although no actual harm from the adult mode has occurred yet (since it has not been launched), the potential for significant harm is clearly articulated and plausible. The event centers on the development and intended use of an AI system that could lead to serious harms, fitting the definition of an AI Hazard. It is not an AI Incident because the harms are not yet realized, nor is it Complementary Information or Unrelated, as the focus is on the potential risks of a specific AI system's deployment.

OpenAI's Wellbeing Advisory Board Unanimously Opposes Adult ChatGPT Mode

2026-03-17
WinBuzzer
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (ChatGPT) and its proposed adult mode, which would generate erotic AI chat content. The harms described include mental health risks, emotional dependency, and potential exposure of minors to inappropriate content, all of which are direct or indirect harms to persons (harm category a). The presence of lawsuits alleging wrongful deaths linked to ChatGPT's outputs further confirms realized harm. The advisory board's unanimous opposition and the company's decision to proceed despite these warnings highlight a failure in safety governance, increasing the risk of harm. Therefore, this event meets the criteria for an AI Incident due to the AI system's use leading to significant harm to users' wellbeing and safety.

OpenAI warned against creating X-rated 'adult mode' as it could create a 'sexy suicide coach'

2026-03-17
UNILAD
Why's our monitor labelling this an incident or hazard?
The AI system (ChatGPT and other AI chatbots) is explicitly involved, with its use and potential misuse directly linked to harm to individuals, including emotional dependence and suicide. The article reports realized harm (a minor's suicide linked to AI chatbot interaction) and concerns about future harm from enabling adult content in AI chatbots. The involvement of AI in causing or contributing to these harms meets the criteria for an AI Incident, as the harm to health and well-being of persons has occurred and is directly or indirectly linked to the AI system's use and malfunction (e.g., misclassification of age, inability to fully block harmful content).

OpenAI between a rock and a hard place: ChatGPT's "adult mode" scandalizes even its own advisers

2026-03-17
Computer Hoy
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) whose use is planned to be expanded to include adult content. The article explicitly mentions a significant failure rate in age verification, leading to minors potentially accessing harmful content. This constitutes a plausible risk of harm and violation of legal protections (rights of minors). Since the harm is not confirmed as having occurred but the risk is credible and significant, the event fits the definition of an AI Hazard rather than an AI Incident. The concerns from internal advisors and the potential for misuse reinforce this classification.

'Sexy Suicide Coach:' OpenAI Delays AI Porn Feature over Safety Uproar

2026-03-19
Breitbart
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses an AI system (ChatGPT) and a planned feature involving erotic text conversations, which is an AI system use case. The AI's age-prediction system misclassifies users at a 12% error rate, potentially exposing minors to adult content, which could plausibly lead to psychological harm and violations of child-protection laws. The feature has been delayed due to these safety concerns, indicating that harm has not yet occurred but is plausible. The article does not report any realized harm or incidents but focuses on the potential risks and the company's response. Thus, the event fits the definition of an AI Hazard rather than an AI Incident or Complementary Information.

ChatGPT's 'Adult Mode' Could Spark a New Era of Intimate Surveillance

2026-03-19
Wired
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) with enhanced memory and personalization features that will be used for generating erotic content. The article raises concerns about the surveillance aspect of data collection and retention, which could plausibly lead to violations of privacy and related harms. However, there is no indication that harm has already occurred, only that it could plausibly occur once the feature is released and widely used. Therefore, this qualifies as an AI Hazard rather than an AI Incident. The article does not focus on responses or updates to past incidents, so it is not Complementary Information, nor is it unrelated to AI.

ChatGPT's 'Adult Mode' Could Spark a New Era of Intimate Surveillance

2026-03-19
DNYUZ
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (ChatGPT) in a new adult mode that collects and retains intimate user data, which has already resulted in privacy breaches and data exposure incidents. These constitute violations of user privacy and potentially human rights related to data protection. The article describes realized harms (past data leaks) and ongoing risks of harm from the AI system's data retention and surveillance capabilities. Hence, it meets the criteria for an AI Incident, as the AI system's use has directly and indirectly led to harm to individuals' privacy and rights.

ChatGPT Reportedly Set to Introduce an Adult Mode Feature, but Millions of Children Could Slip Past Age Verification

2026-03-18
Pikiran-Rakyat.com
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (ChatGPT) and its planned feature that would allow explicit content. The age verification system's failure rate implies a credible risk that minors will access harmful content, leading to psychological harm and emotional dependency, which are recognized harms under the framework. Since the feature has not been launched and harm is not yet realized but is plausible and significant, this fits the definition of an AI Hazard rather than an AI Incident. The internal company conflict and expert warnings reinforce the plausibility of future harm. Thus, the event is best classified as an AI Hazard.