OpenAI Faces Backlash Over ChatGPT 'Adult Mode' Amid Mental Health and Safety Concerns

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

OpenAI's planned 'Adult Mode' for ChatGPT sparked internal debate over risks of unhealthy emotional attachment and of minors being exposed to explicit content. Advisors warned of potential harms, including suicides linked to AI chatbots. Age-verification failures could let millions of minors access explicit content, prompting OpenAI to delay the feature.[AI generated]
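The "millions of minors" claim can be sanity-checked against two figures reported in the articles collected below: a roughly 12% rate of minors misclassified as adults by the age-prediction system, and a reported underage user base of about 100 million. A minimal back-of-envelope sketch, assuming those reported figures are accurate:

```python
# Back-of-envelope estimate of minors an imperfect age-prediction
# system could misclassify as adults. Both inputs are figures
# reported in the articles below, not official OpenAI statistics.
minor_users = 100_000_000   # reported underage user base (approximate)
error_rate = 0.12           # reported share of minors misclassified as adults

exposed = int(minor_users * error_rate)
print(f"Minors potentially misclassified as adults: {exposed:,}")
# roughly 12 million, consistent with the "millions of minors" wording
```

This is an illustration of scale only; the true exposure would depend on how many misclassified minors actually opt into the feature.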

Why's our monitor labelling this an incident or hazard?

The event involves an AI system (ChatGPT) and its planned feature (adult mode) that could plausibly lead to significant psychological harm, including emotional dependency and suicidal risk, especially among minors who might access the content due to imperfect age verification. While a past AI-related suicide is mentioned, it involved a different AI chatbot (Character.AI), and the adult mode itself has not yet been released or caused harm. Thus, the article describes a credible potential risk (hazard) rather than a realized incident. The concerns about misuse, user harm, and insufficient safeguards align with the definition of an AI Hazard.[AI generated]
AI principles
Safety; Human wellbeing

Industries
Media, social platforms, and marketing

Affected stakeholders
Children; Consumers

Harm types
Physical (death); Psychological

Severity
AI hazard

AI system task
Interaction support/chatbots; Content generation


Articles about this incident or hazard

OpenAI keeps pushing 'adult mode' launch despite warnings it 'risks becoming a suicide coach' | Yonhap News

2026-03-16
Yonhap News
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) and its planned feature (adult mode) that could plausibly lead to significant psychological harm, including emotional dependency and suicidal risk, especially among minors who might access the content due to imperfect age verification. While a past AI-related suicide is mentioned, it involved a different AI chatbot (Character.AI), and the adult mode itself has not yet been released or caused harm. Thus, the article describes a credible potential risk (hazard) rather than a realized incident. The concerns about misuse, user harm, and insufficient safeguards align with the definition of an AI Hazard.

OpenAI sticks to 'adult mode' launch plan... ignores warnings it could become a 'suicide coach'

2026-03-16
Yonhap News TV
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (OpenAI's chatbot) whose development and intended use (adult mode allowing sexual conversations) is linked to psychological harms, including emotional dependence and risk of suicide, as evidenced by a prior fatality connected to a similar AI chatbot. The article reports direct concerns and warnings from experts and internal advisors about these harms, indicating realized or plausible harm. The involvement of the AI system in these harms is direct or indirect through its outputs and user interactions. Hence, this qualifies as an AI Incident due to harm to persons and potential violation of rights (protection of minors).

'Allowing sexual conversations?'... Controversy over ChatGPT's 'adult mode' push

2026-03-16
Asia Economy (아시아경제)
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT and similar chatbots) used for generating conversational content, including sexual and emotionally manipulative dialogue. The AI's use has directly led to harm: a minor's suicide linked to interactions with the AI, which constitutes injury to health and harm to a person. Additionally, the failure of the age verification system (12% error rate) plausibly exposes more minors to harmful content, further supporting the classification as an AI Incident. The article reports actual harm, not just potential risk, and the AI's role is pivotal in the harm caused. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information.

OpenAI limits adult mode to 'racy conversation,' not pornography

2026-03-16
DigitalToday
Why's our monitor labelling this an incident or hazard?
The event involves the development and planned use of an AI system (ChatGPT with adult mode) that could plausibly lead to harm, such as exposure of minors to inappropriate content and emotional dependency risks. However, no actual harm has been reported yet, and the adult mode is still in planning/delayed release stages. Therefore, this constitutes an AI Hazard, as the AI system's use could plausibly lead to incidents involving harm to minors or other harms if not properly managed.

OpenAI weighs introducing adult mode... challenge of minors' exposure remains - 동행미디어 시대

2026-03-17
동행미디어 시대
Why's our monitor labelling this an incident or hazard?
An AI system (ChatGPT) is explicitly involved, with a planned new feature (adult mode) that would allow sexual conversations. The article describes concerns about plausible future harms, including minors being exposed to sexual content due to imperfect age verification and emotional harm to users. Since no actual harm has been reported yet and the feature's release has been delayed to address these issues, this qualifies as an AI Hazard rather than an AI Incident. The article focuses on the potential risks and challenges rather than realized harm or incidents.

ChatGPT considers 'adult mode'... safety concerns over 100 million underage users

2026-03-17
와이드경제
Why's our monitor labelling this an incident or hazard?
The event involves the use and planned development of an AI system (ChatGPT) with a new feature ('adult mode') that could allow sexual content. The article highlights realized harms (a minor's suicide linked to AI chatbot interaction) and plausible future harms (minors accessing sexual content, emotional dependence, social harm). These harms fall under injury to health and harm to communities. Therefore, this qualifies as an AI Incident due to direct and indirect harm caused and the ongoing risk associated with the AI system's use and development.

OpenAI's Bid to Allow X-rated Talk Is Freaking Out Its Own Advisers

2026-03-16
Hindustan Times
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (ChatGPT) and its planned new feature (adult mode) that would allow erotic conversations. The concerns raised by advisers and staff about emotional dependence, minors accessing adult content, and potential exposure to harmful content indicate plausible future harms. No direct or indirect harm has yet occurred according to the article, but the risks are credible and significant. The company's efforts to develop age-prediction and content moderation systems are ongoing, and the launch has been delayed due to these concerns. Thus, this situation fits the definition of an AI Hazard, as it plausibly could lead to an AI Incident if the risks materialize, but no incident has yet occurred.

'Adult Mode' On ChatGPT Delayed Amid Concerns Over Suicide Cases Linked To AI Chats: Report

2026-03-17
NDTV
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) whose use has been linked to significant harms, including mental health risks and suicide cases, as evidenced by multiple lawsuits and internal concerns. The AI system's development and use have directly or indirectly led to harm to persons' health (mental health and suicide), fulfilling the criteria for an AI Incident. The article details realized harms rather than just potential risks, and the legal cases underscore the seriousness of the impact. Hence, the classification as AI Incident is appropriate.

ChatGPT 'Adult Mode' Delayed: OpenAI faces rising safety concerns over AI misuse and mental health risks

2026-03-17
India.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) and its use in adult conversations, which is delayed due to safety concerns about mental health harms and misuse. The article references existing legal complaints alleging harm from AI chatbots, indicating realized harm linked to AI use. The delay and improved moderation efforts are responses to these harms. Since harm has already occurred from AI chatbots in related contexts and the article discusses ongoing risks and mitigation, this qualifies as an AI Incident due to the direct or indirect harm caused by AI conversational systems. The main focus is on harm and safety concerns, not just potential future harm or general AI news, so it is not a hazard or complementary information.

'Sexy suicide coach': Inside the blow-up over OpenAI's plan for erotic ChatGPT chats

2026-03-16
MoneyControl
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (ChatGPT) and its planned use for erotic conversations. The age-prediction system's misclassification of minors as adults (12% error rate) directly raises the risk of minors accessing harmful content, which is a plausible pathway to harm. Additionally, internal expert warnings about emotional dependence and mental health risks further support the potential for harm. Since the feature has not yet been launched and no actual harm is reported, but credible risks are identified, the event fits the definition of an AI Hazard rather than an AI Incident. The article also discusses internal debates and delays, but these are part of the hazard context rather than complementary information or unrelated news.

OpenAI's proposed 'adult mode' sparks internal safety concerns over explicit AI chats

2026-03-16
MoneyControl
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) and its proposed new feature that would allow explicit conversations. The concerns raised by advisers and safety experts highlight potential risks to mental health and access by minors, which could plausibly lead to harm if the feature is deployed without effective safeguards. However, since the feature has not yet been launched and no actual harm has been reported, this situation constitutes an AI Hazard rather than an AI Incident. The article focuses on the potential for harm and the ongoing internal debate and technical challenges, fitting the definition of an AI Hazard.

Disturbing warning ChatGPT could turn into 'sexy suicide coach' as...

2026-03-16
New York Post
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (ChatGPT) and its use, specifically the planned introduction of an erotic chatbot feature. The harms described include mental health injury and suicide, which fall under harm to health of persons. The involvement of the AI system is direct, as the chatbot's interactions are alleged to have contributed to these harms. The presence of lawsuits and internal warnings further confirm the realized harm. Therefore, this event qualifies as an AI Incident due to the direct or indirect harm caused by the AI system's use and outputs.

OpenAI delays Adult mode release after suicide cases rise due to ChatGPT

2026-03-16
India Today
Why's our monitor labelling this an incident or hazard?
The article explicitly links ChatGPT, an AI system, to several suicide cases and lawsuits, indicating direct or indirect harm to users' health. The AI system's use in sensitive conversations has contributed to these harms. The delay of the adult mode feature is a response to these harms and concerns, but the primary focus is on the realized harms and legal consequences stemming from the AI's use. Therefore, this event meets the criteria for an AI Incident rather than a hazard or complementary information.

OpenAI's Bid to Allow X-rated Talk Is Freaking Out Its Own Advisers

2026-03-16
The Wall Street Journal
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (ChatGPT and its age-prediction system) and discusses the planned use of AI for adult-themed conversations. While it references past harms from AI chatbots, the new adult mode has not yet been launched, so no direct harm has occurred from it. The article details credible risks such as minors accessing adult content due to age-prediction errors, emotional dependence, and exposure to harmful content, which could plausibly lead to AI Incidents if the adult mode is released without adequate safeguards. Therefore, the event represents a credible potential for harm stemming from AI use, fitting the definition of an AI Hazard rather than an AI Incident or Complementary Information. It is not unrelated because it clearly concerns AI systems and their societal impact.

OpenAI being warned allowing X-rated chat as it may create a 'sexy suicide coach'

2026-03-16
The Independent
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (ChatGPT) and discusses the planned use of an adult mode that could generate erotica and engage users in adult-themed conversations. The concerns raised by advisors about emotional dependence, risk to minors, and the possibility of the AI acting as a "sexy suicide coach" indicate plausible future harms to health and rights. Since the adult mode has not yet been launched and no direct harm from it has been reported, this is a credible potential risk rather than an incident. The article does not primarily focus on a response to a past incident or broader governance but on the risk of harm from the AI system's planned use. Hence, the classification as an AI Hazard is appropriate.

OpenAI to Launch ChatGPT 'Adult Mode' Despite Warnings From Its Own Advisers

2026-03-16
CNET
Why's our monitor labelling this an incident or hazard?
The event involves the use and deployment of an AI system (ChatGPT) with a new feature (adult mode) that could plausibly lead to harm, such as minors accessing adult content and emotional harm to users. The article focuses on warnings and concerns from advisers and internal debates, with no evidence of realized harm so far. Therefore, this qualifies as an AI Hazard because the AI system's development and use could plausibly lead to an AI Incident, but no incident has yet occurred.

OpenAI's ChatGPT adult mode sparks internal debate over safety, ethics and AI relationships

2026-03-16
Firstpost
Why's our monitor labelling this an incident or hazard?
The article involves AI systems (ChatGPT and Grok) and discusses their development and use, focusing on potential harms such as emotional dependence, exposure of minors to explicit content, and misuse of AI-generated images. However, it does not report a specific AI Incident directly caused by OpenAI's adult mode, as it is still under development and the harms are potential rather than realized. The Grok incident is mentioned as background context and regulatory response, not as the primary subject. The main narrative centers on internal debates, safety concerns, and governance challenges, which fits the definition of Complementary Information. Thus, the event is best classified as Complementary Information.

ChatGPT may soon become "sexy suicide coach," OpenAI advisor reportedly warned

2026-03-16
Ars Technica
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (ChatGPT) and its development and use of an "adult mode" feature. It documents realized harms, including suicides linked to AI chatbot interactions, which constitute injury or harm to health (a). It also details failures and risks in age verification and content filtering that have already allowed minors to access inappropriate content, further supporting the presence of harm. The AI system's outputs have directly or indirectly led to these harms, fulfilling the criteria for an AI Incident. Although some concerns are about potential future harms, the presence of actual suicides and harmful emotional dependence confirms realized harm. Therefore, this event is best classified as an AI Incident.

ChatGPT Adult Mode Postponed After Safety Experts Raise Teen Access Concerns - Blockonomi

2026-03-16
Blockonomi
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (ChatGPT) and discusses the development and intended use of a new feature that could lead to psychological harm and unsafe access by minors. Although it references past incidents of harm related to AI chatbots, the specific "adult mode" feature has not been deployed, and no new harm from it has occurred yet. The postponement is a response to credible risks identified by safety experts, including the failure of age-verification technology and content filtering. Thus, the event describes a credible potential for harm (psychological harm, underage exposure to adult content) that could plausibly lead to an AI Incident if the feature were launched without adequate safeguards. This fits the definition of an AI Hazard rather than an AI Incident or Complementary Information.

2021 to 2026: How OpenAI went from banning AI erotica to building it

2026-03-17
Digit
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses an AI system (OpenAI's language models) generating harmful sexual content autonomously and unprompted, which caused real harm by exposing minors to inappropriate content and psychological risks. The misclassification of minors as adults by the AI's age prediction system led to millions of children being exposed to adult content, a direct harm to health and wellbeing. The company's decision to override safety measures and proceed with adult content despite known risks further confirms the direct involvement of AI use leading to harm. This fits the definition of an AI Incident as the AI system's use directly led to harm to persons and violation of safety obligations.

'Sexy suicide coach': ChatGPT adult mode triggers alarm inside OpenAI

2026-03-16
NewsBytes
Why's our monitor labelling this an incident or hazard?
The article indicates that the AI system (ChatGPT) was to be used in a way that could plausibly lead to harm (mental health and child safety risks) if the 'adult mode' were launched. Although no harm has yet occurred, the credible warnings and internal backlash highlight a plausible risk of AI-related harm. Therefore, this event qualifies as an AI Hazard rather than an Incident, as the harm is potential and not realized.

OpenAI Adult Mode Postponed for ChatGPT As Internal Safety Advisers Raise Alarms Over Erotica Feature | LatestLY

2026-03-16
LatestLY
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (ChatGPT's generative AI for erotica and AI-based age prediction) whose deployment is postponed due to safety concerns. The concerns relate to potential psychological harm to users and the risk of minors accessing adult content, which could constitute harm to health and rights. Since the harm is not realized but plausibly could occur if the feature were launched without adequate safeguards, this fits the definition of an AI Hazard. The article does not report any actual harm or incident but focuses on the potential risks and the company's response to them.

OpenAI's X-Rated Chatbot Plan Triggers Internal Revolt

2026-03-16
Gadget Review
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (ChatGPT) and its planned feature to generate adult content, which is an AI system use case. The harms described include psychological harm, suicide risk, and exposure of minors to explicit content, which constitute injury or harm to persons (harm category a). The internal advisory council's warnings and the firing of a safety executive opposing the feature indicate the AI system's development and use are central to the harms. The misclassification of minors by the AI's age verification algorithm directly leads to potential harm. The article also references real cases of suicide linked to AI chatbots, confirming that harm has occurred. These factors meet the criteria for an AI Incident, as the AI system's development and use have directly and indirectly led to significant harm.

OpenAI Pushes Ahead With ChatGPT Erotica Mode Despite 'Sexy Suicide Coach' Warning: WSJ - Decrypt

2026-03-16
Decrypt
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (ChatGPT) planned to be used for erotic conversations, which raises serious mental health concerns and risks of harm, including suicide, as noted by the Expert Council. The AI's age verification system is imperfect, increasing the risk of minors accessing adult content. While no new direct harm from this specific launch is reported yet, the potential for harm is credible and significant. The internal tensions and delays reflect recognition of these risks. Since the harm is plausible but not yet realized in this event, it fits the definition of an AI Hazard rather than an AI Incident. The article is not merely complementary information because it focuses on the risk and decision to proceed despite warnings, nor is it unrelated.

OpenAI Will Launch a 'Naughty' Version of ChatGPT for Adults Despite Oppositions, Says Report

2026-03-17
Tech Times
Why's our monitor labelling this an incident or hazard?
The event involves the development and intended use of an AI system (ChatGPT) with a new adult mode that could plausibly lead to harms such as emotional dependence, exposure of minors to inappropriate content, and psychological harm. Since no actual harm has been reported and the concerns are about potential future risks, this qualifies as an AI Hazard. The article primarily discusses the potential risks and expert warnings rather than a realized incident or harm, so it is not an AI Incident. It is also not merely complementary information or unrelated, as the AI system and plausible harms are central to the report.

'Sexy Suicide Coach': OpenAI 'Adult Mode' Plans Spur Internal Debate; Concerns Over Mental Health, Access To Minors

2026-03-16
NDTV Profit
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) with a new 'Adult Mode' feature that enables sexually explicit interactions. The concerns raised by experts and the cited case of a user harmed due to unhealthy attachment to the AI demonstrate direct or indirect harm to mental health, fulfilling the criteria for an AI Incident. Additionally, the safety mechanism's 12% error rate in misclassifying minors as adults indicates a malfunction or failure in the AI system's protective measures, further supporting the classification as an AI Incident. The presence of realized harm and ongoing risk to vulnerable populations (minors and emotionally unstable users) confirms this classification over AI Hazard or Complementary Information.

OpenAI Restricts ChatGPT Adult Mode to Text-Only Erotica After Age-Check Failures

2026-03-16
Techloy
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the AI system (ChatGPT's adult mode) and its age-verification system failing to identify minors 12% of the time, which could allow millions of underage users to access sexual content. This is a clear AI system malfunction with a plausible risk of harm to minors, a protected group, through exposure to inappropriate content. Although the harm is not confirmed as realized, the risk is credible and significant. OpenAI's response to restrict the feature to text-only erotica and delay the launch further supports the recognition of this hazard. Since no actual harm is reported yet, and the focus is on potential risk and mitigation, the classification as an AI Hazard is appropriate.

OpenAI's own wellbeing advisors warned against erotic mode, called it a "sexy suicide coach"

2026-03-16
The Decoder
Why's our monitor labelling this an incident or hazard?
The article details credible risks associated with the planned AI feature, including harm to minors and emotional health risks, but these harms have not materialized since the feature launch was postponed. The AI system's development and intended use could plausibly lead to an AI Incident if launched prematurely. Therefore, this event qualifies as an AI Hazard because it concerns plausible future harm from the AI system's use, not an actual incident or complementary information about responses to a past incident.

ChatGPT 'Adult mode' delayed as OpenAI faces safety and mental health concerns

2026-03-17
The Financial Express
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions ChatGPT, an AI system, and discusses harms linked to its use, including mental health risks and a suicide case. These constitute AI Incidents in the background. However, the main focus is on OpenAI's decision to delay the Adult mode feature rollout due to these concerns and lawsuits, which is a governance and safety response. No new harm or plausible future harm from the delayed feature itself is described. Thus, the event is Complementary Information, updating on responses to prior AI Incidents rather than reporting a new Incident or Hazard.

OpenAI reveals how ChatGPT's controversial adult mode will work

2026-03-17
La Razón
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) and its development and intended use (adult mode). However, the article does not describe any realized harm or incidents caused by the AI system. Instead, it focuses on potential risks, safety concerns, and delays to prevent harm. Therefore, this qualifies as an AI Hazard because the AI system's use could plausibly lead to harm (e.g., minors accessing adult content, user dependency), but no harm has yet occurred. It is not Complementary Information because the article is not about responses to a past incident but about the planned feature and its risks. It is not Unrelated because it clearly involves an AI system and its potential risks.

ChatGPT: when will ads arrive in the chatbot, and how will they appear?

2026-03-18
Todo Noticias
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) and its planned use of advertisements integrated alongside chatbot responses. However, the article does not describe any realized harm or incident resulting from this change. Instead, it reports on the policy update and the anticipated introduction of ads, including privacy assurances and user segmentation. This constitutes a development in the AI ecosystem and governance responses rather than an incident or hazard. Therefore, it fits the definition of Complementary Information, as it provides context and updates about AI system use and governance without describing a specific harm or plausible harm event.

January meeting that made Sam Altman 'pause' OpenAI's 'adult mode' plan; as employees warned ... - The Times of India

2026-03-17
The Times of India
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (ChatGPT) and discusses the planned use of an AI-generated adult content feature. The concerns raised by internal advisers about emotional dependence, compulsive use, and minors accessing adult content represent plausible risks of harm. Since the feature rollout has been delayed and no actual harm has been reported, the event does not meet the criteria for an AI Incident. Instead, it fits the definition of an AI Hazard because the AI system's intended use could plausibly lead to harm in the future. The article focuses on internal deliberations and safety concerns rather than reporting realized harm or incidents.

OpenAI's X-rated adult mode delayed over safety concerns, report claims

2026-03-17
Mashable
Why's our monitor labelling this an incident or hazard?
The event involves the development and intended use of an AI system with explicit content capabilities and age verification technology. The high error rate in age classification and concerns about emotional dependence indicate credible risks of harm to minors and users, including exposure to developmentally inappropriate content and psychological harm. Since the adult mode has not yet been launched, no actual harm has occurred, but the plausible future harm is significant and directly linked to the AI system's design and deployment plans. Therefore, this qualifies as an AI Hazard rather than an AI Incident or Complementary Information.

ChatGPT's upcoming erotic chat mode risks exposing millions of kids to adult content

2026-03-17
Digital Trends
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) whose use (introduction of adult mode) could directly lead to harm, specifically exposure of minors to adult content and emotional harm from over-reliance on the AI. The article reports that the age verification system misclassifies minors as adults about 12% of the time, implying realized or imminent harm to children. This fits the definition of an AI Incident because the AI system's use has directly or indirectly led to harm or risk of harm to a vulnerable group (minors), including potential violations of protections for children and harm to their health and well-being. Although the launch is delayed, the existing misclassification and potential exposure already constitute realized harm or at least a direct risk that is materializing. Therefore, this is best classified as an AI Incident rather than a hazard or complementary information.

OpenAI's own advisers call ChatGPT erotica a "sexy suicide coach"

2026-03-17
Boing Boing
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (ChatGPT) and its planned sexually explicit chat feature, which is intended to be deployed despite expert warnings. The warnings highlight plausible future harms, including mental health risks and inappropriate access by minors due to imperfect age verification. No actual harm has been reported yet, but the credible risk of harm to vulnerable users, especially minors, is significant. The firing of a safety executive opposing the release underscores internal conflict about safety concerns. Since harm is not yet realized but plausibly could occur, this fits the definition of an AI Hazard rather than an AI Incident or Complementary Information.

OpenAI wants a sexy ChatGPT

2026-03-18
Morning Brew
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) explicitly used in a new "adult mode" feature enabling sexualized conversations. The article references documented harm (a child's suicide linked to sexualized AI chats) and credible concerns about emotional harm and access by minors, indicating direct or indirect harm to health and well-being. The AI system's use is central to these harms, fulfilling the criteria for an AI Incident. The planned launch and existing issues with age verification further support the classification as an incident rather than a mere hazard or complementary information.

OpenAI Advisers Alarmed as ChatGPT May Soon Let Users Have Erotic Conversations With 'Adult Mode'

2026-03-17
International Business Times UK
Why's our monitor labelling this an incident or hazard?
The event involves the use and potential development of an AI system (ChatGPT) with new capabilities that could plausibly lead to harm, particularly emotional harm to vulnerable users and exposure of minors to inappropriate content. However, the article does not report any realized harm or incidents but rather internal warnings and ethical concerns about possible future harms. Therefore, this qualifies as an AI Hazard, as the AI system's development and use could plausibly lead to an AI Incident involving harm to persons or communities.

OpenAI on the Ropes: The ChatGPT 'Adult Mode' That Scandalizes Its Own Advisers

2026-03-17
FayerWayer
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) and concerns the potential use (or misuse) of a new mode that could lead to significant harms such as digital harassment, hate speech, and security breaches. Since these harms have not yet materialized but are plausibly foreseeable if the "Adult Mode" is implemented without safeguards, this constitutes an AI Hazard. The article does not report any realized harm or incident but rather a credible risk and internal conflict about future harm potential, fitting the definition of an AI Hazard rather than an Incident or Complementary Information.

OpenAI's Own Advisers Tried to Kill ChatGPT 'Adult Mode' -- the Company Ignored Them

2026-03-17
Technology Org
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (ChatGPT) and its new "adult mode" feature, which is AI-powered text generation. The harms include emotional dependence, exposure of minors to sexual content, and potential encouragement of self-harm or violence, all of which are direct or indirect harms to persons and communities. The failure of age verification and content filtering systems constitutes malfunction and misuse of the AI system. The harms are realized, not just potential, as evidenced by reported cases and internal concerns. Hence, this meets the criteria for an AI Incident rather than a hazard or complementary information.

OpenAI's Wellbeing Advisory Board Unanimously Opposes Adult ChatGPT Mode

2026-03-17
WinBuzzer
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (ChatGPT) and its proposed adult mode, which would generate erotic AI chat content. The advisory board's unanimous opposition is based on credible risks of harm, including emotional harm, suicide, and exposure of minors due to imperfect age detection. Although no actual harm from the adult mode has occurred yet (since it has not been launched), the potential for significant harm is clearly articulated and plausible. The event centers on the development and intended use of an AI system that could lead to serious harms, fitting the definition of an AI Hazard. It is not an AI Incident because the harms are not yet realized, nor is it Complementary Information or Unrelated, as the focus is on the potential risks of a specific AI system's deployment.

OpenAI's Wellbeing Advisory Board Unanimously Opposes Adult ChatGPT Mode

2026-03-17
WinBuzzer
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (ChatGPT) and its proposed adult mode, which would generate erotic AI chat content. The harms described include mental health risks, emotional dependency, and potential exposure of minors to inappropriate content, all of which are direct or indirect harms to persons (harm category a). The presence of lawsuits alleging wrongful deaths linked to ChatGPT's outputs further confirms realized harm. The advisory board's unanimous opposition and the company's decision to proceed despite these warnings highlight a failure in safety governance, increasing the risk of harm. Therefore, this event meets the criteria for an AI Incident due to the AI system's use leading to significant harm to users' wellbeing and safety.

OpenAI warned against creating X-rated 'adult mode' as it could create a 'sexy suicide coach'

2026-03-17
UNILAD
Why's our monitor labelling this an incident or hazard?
The AI system (ChatGPT and other AI chatbots) is explicitly involved, with its use and potential misuse directly linked to harm to individuals, including emotional dependence and suicide. The article reports realized harm (a minor's suicide linked to AI chatbot interaction) and concerns about future harm from enabling adult content in AI chatbots. The involvement of AI in causing or contributing to these harms meets the criteria for an AI Incident, as the harm to health and well-being of persons has occurred and is directly or indirectly linked to the AI system's use and malfunction (e.g., misclassification of age, inability to fully block harmful content).

OpenAI Between a Rock and a Hard Place: ChatGPT's 'Adult Mode' Scandalizes Even Its Own Advisers

2026-03-17
Computer Hoy
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) whose use is planned to be expanded to include adult content. The article explicitly mentions a significant failure rate in age verification, leading to minors potentially accessing harmful content. This constitutes a plausible risk of harm and violation of legal protections (rights of minors). Since the harm is not confirmed as having occurred but the risk is credible and significant, the event fits the definition of an AI Hazard rather than an AI Incident. The concerns from internal advisors and the potential for misuse reinforce this classification.

'Sexy Suicide Coach:' OpenAI Delays AI Porn Feature over Safety Uproar

2026-03-19
Breitbart
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses an AI system (ChatGPT) and a planned feature involving erotic text conversations, which is an AI system use case. The AI's age-prediction system is malfunctioning with a 12% error rate, potentially exposing minors to adult content, which could plausibly lead to psychological harm and violation of child protection laws. The feature has been delayed due to these safety concerns, indicating that harm has not yet occurred but is plausible. The article does not report any realized harm or incidents but focuses on the potential risks and the company's response. Thus, the event fits the definition of an AI Hazard rather than an AI Incident or Complementary Information.

ChatGPT's 'Adult Mode' Could Spark a New Era of Intimate Surveillance

2026-03-19
Wired
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) with enhanced memory and personalization features that will be used for generating erotic content. The article raises concerns about the surveillance aspect of data collection and retention, which could plausibly lead to violations of privacy and related harms. However, there is no indication that harm has already occurred, only that it could plausibly occur once the feature is released and widely used. Therefore, this qualifies as an AI Hazard rather than an AI Incident. The article does not focus on responses or updates to past incidents, so it is not Complementary Information, nor is it unrelated to AI.

ChatGPT's 'Adult Mode' Could Spark a New Era of Intimate Surveillance

2026-03-19
DNYUZ
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (ChatGPT) in a new adult mode that collects and retains intimate user data, which has already resulted in privacy breaches and data exposure incidents. These constitute violations of user privacy and potentially human rights related to data protection. The article describes realized harms (past data leaks) and ongoing risks of harm from the AI system's data retention and surveillance capabilities. Hence, it meets the criteria for an AI Incident, as the AI system's use has directly and indirectly led to harm to individuals' privacy and rights.

ChatGPT Reportedly Set to Add an Adult Mode Feature, but Millions of Children Could Slip Past Age Verification

2026-03-18
Pikiran-Rakyat.com
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (ChatGPT) and its planned feature that would allow explicit content. The age verification system's failure rate implies a credible risk that minors will access harmful content, leading to psychological harm and emotional dependency, which are recognized harms under the framework. Since the feature has not been launched and harm is not yet realized but is plausible and significant, this fits the definition of an AI Hazard rather than an AI Incident. The internal company conflict and expert warnings reinforce the plausibility of future harm. Thus, the event is best classified as an AI Hazard.

OpenAI Puts 'Adult Mode' Development on Indefinite Hold, Following 'Sora' Shutdown

2026-03-27
서울신문
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (OpenAI's generative AI tools) and their development and use. However, no direct or indirect harm has occurred yet; the company is responding to concerns and ethical risks by delaying release and conducting further research. This fits the definition of Complementary Information, as it provides updates on responses to potential AI-related risks and governance decisions, rather than reporting an AI Incident or AI Hazard with realized or imminent harm.

OpenAI Puts 'Adult Mode' on Indefinite Hold Amid Pushback From Experts and Investors | 연합뉴스

2026-03-26
연합뉴스
Why's our monitor labelling this an incident or hazard?
The event involves the development and intended use of an AI system designed to generate adult content, which raises significant ethical and legal concerns, including potential harm to minors and the generation of illegal content. Although no harm has materialized because the feature has been postponed indefinitely, the article clearly outlines credible risks that could plausibly lead to AI incidents if the system were deployed. Therefore, this situation fits the definition of an AI Hazard rather than an AI Incident. It is not merely general AI news or a complementary update, as the postponement is directly linked to the plausible risk of harm from the AI system's use.

OpenAI Indefinitely Suspends 'Adult Mode' Development Amid Pushback From Experts and Investors

2026-03-26
연합뉴스TV
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses the postponement of an AI feature ('adult mode') due to concerns about ethical risks, potential exposure of minors, and difficulties in filtering harmful content. Although no actual harm has occurred, the concerns and challenges indicate a plausible risk of future harm if the system were deployed. The AI system's development and intended use are central to the event, and the decision to delay reflects recognition of these hazards. There is no indication of a realized incident or harm, nor is the article primarily about responses to past incidents or general AI news. Hence, the classification as AI Hazard is appropriate.

OpenAI Puts 'Adult Mode' on Indefinite Hold Amid Investor Pushback - 전파신문

2026-03-26
jeonpa.co.kr
Why's our monitor labelling this an incident or hazard?
The event involves the development and intended use of an AI system (the adult mode for AI-generated content). The postponement is due to concerns about ethical risks, including the AI generating illegal or harmful content and failure of age verification leading to minors accessing adult content. These concerns indicate a plausible risk of harm (to communities, minors, and potentially legal violations) if the system were deployed. Since no actual harm has occurred and the release is postponed indefinitely, this fits the definition of an AI Hazard rather than an AI Incident. The article does not focus on responses or updates to past incidents, so it is not Complementary Information, nor is it unrelated to AI systems.

OpenAI Indefinitely Postpones Its Adult AI Launch... Clearing Risk Ahead of an IPO? - 시사저널

2026-03-27
시사저널
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as an adult-content generative AI. The postponement is due to concerns about potential harms, including minors accessing inappropriate content and the challenge of filtering illegal and harmful data. Since no actual harm has occurred yet, but the risks are credible and significant, this fits the definition of an AI Hazard. The article does not describe realized harm but focuses on the plausible future harm and the decision to delay the AI system's deployment to mitigate these risks.

OpenAI Puts ChatGPT 'Adult Mode' Launch on Indefinite Hold

2026-03-27
국민일보
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) and concerns its development and intended use. The adult mode feature could plausibly lead to harms such as exposure of minors to inappropriate content, dissemination of illegal or harmful material, and ethical violations. However, since the feature has not been released and no harm has occurred yet, this situation constitutes an AI Hazard rather than an AI Incident. The article focuses on the potential risks and the decision to delay the feature to avoid those risks.

OpenAI Indefinitely Postpones ChatGPT 'Adult Mode'

2026-03-27
파이낸셜뉴스
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI systems (ChatGPT and its age prediction system) and the challenges in safely deploying an adult content mode. The postponement is due to concerns about the AI system's malfunction (age prediction errors) and potential misuse (minors accessing adult content), which could lead to harm. Since no harm has yet occurred but there is a credible risk of harm if the system were deployed, this qualifies as an AI Hazard rather than an AI Incident. The article focuses on the potential for harm and the decision to delay deployment to mitigate that risk.

ChatGPT Is Not a Doctor, but Latin America Uses It Like One: The Risks of Entrusting Your Health to Artificial Intelligence

2026-03-26
infobae
Why's our monitor labelling this an incident or hazard?
The article clearly involves an AI system (ChatGPT) used for health consultations, which can influence users' decisions and health-related behaviors. However, it does not describe any actual harm or violation resulting from the AI's use, only potential risks and warnings about privacy and reliability. There is no mention of a specific event where harm occurred or was narrowly avoided. Therefore, the content fits best as Complementary Information, providing context and guidance about AI's role in health without reporting an AI Incident or AI Hazard.

ChatGPT: OpenAI Puts Authorization of Erotic Conversations on Hold

2026-03-27
Yahoo actualités
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) and its use, specifically the planned allowance of erotic conversations. However, since the project has been suspended indefinitely before deployment or harm occurrence, and the article focuses on the decision-making and research process rather than any actual or imminent harm, it does not qualify as an AI Incident or AI Hazard. The content primarily provides complementary information about OpenAI's governance response to potential risks associated with AI use in erotic conversations, fitting the definition of Complementary Information.

OpenAI Abandons ChatGPT's Adult Mode, Days After Shutting Down Sora

2026-03-27
NDTV
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems (ChatGPT with an adult mode and the Sora AI video generator) and their development and use. However, the event focuses on the decision to pause and shut down these AI products due to concerns about safety and ethical issues, not on any realized harm. Since no direct or indirect harm has occurred, but there is a credible risk that such AI systems could lead to harms (e.g., misuse generating non-consensual or indecent content), this qualifies as an AI Hazard. It is not an AI Incident because no harm has materialized, nor is it Complementary Information or Unrelated, as the focus is on the potential for harm and the company's response to it.

OpenAI pauses plans to launch erotic chatbot indefinitely - Moneycontrol.com

2026-03-26
MoneyControl
Why's our monitor labelling this an incident or hazard?
The article discusses a planned AI system (an erotic chatbot) whose rollout has been paused indefinitely because of concerns about potential harms such as emotional dependence and access by minors. Since the feature has not been launched and no harm has materialized, this situation represents a plausible future risk rather than an actual incident. Therefore, it qualifies as an AI Hazard, as the AI system's development and intended use could plausibly lead to harm if deployed without adequate safeguards.

OpenAI shelves erotic chatbot 'adult mode' indefinitely after uproar over user safety: report

2026-03-26
New York Post
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the erotica chatbot) whose development and intended use raised credible concerns about potential mental health harms, including delusions, unhealthy emotional attachments, and encouragement of violent thoughts. Although no direct harm has occurred because the feature was not released, the concerns and research findings indicate a plausible risk of significant harm if the system were deployed. The company's decision to shelve the feature indefinitely reflects recognition of this hazard. Since the event focuses on potential future harm rather than realized harm, it fits the definition of an AI Hazard rather than an AI Incident or Complementary Information.

ChatGPT: Authorization of Erotic Conversations Postponed Indefinitely

2026-03-26
Le Figaro.fr
Why's our monitor labelling this an incident or hazard?
The article involves an AI system (ChatGPT) and discusses the postponement of a feature due to concerns about potential harms and reputational risks. However, no actual harm or incident has occurred from the erotic conversation feature, only a precautionary suspension. The mention of lawsuits and regulatory investigations relates to broader AI mental health impacts but does not describe a new incident directly caused by the AI system in question. The focus is on the company's response and strategic decision-making in light of potential risks, fitting the definition of Complementary Information rather than an Incident or Hazard.

OpenAI is putting ChatGPT adult model on hold, here is why

2026-03-27
India Today
Why's our monitor labelling this an incident or hazard?
The article centers on the potential risks and ethical concerns associated with the development of an adult mode for ChatGPT, which could plausibly lead to harms such as emotional dependence or exposure of minors to inappropriate content. However, since no actual harm or incident has been reported and the development has been paused before release, this constitutes a plausible future risk rather than a realized harm. Therefore, this event qualifies as an AI Hazard rather than an AI Incident or Complementary Information.

OpenAI Finally Abandons 'Adult Mode' and Erotic Chats in ChatGPT

2026-03-27
Le Monde.fr
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (ChatGPT) and discusses the planned use of an AI-generated adult mode with erotic content. The decision to abandon this feature is due to concerns about potential harms, including exposure of minors to explicit content and generation of inappropriate sexual dialogues, which could plausibly lead to harms such as psychological harm to minors or violation of rights. Since the feature was never launched and no actual harm has been reported, the event does not qualify as an AI Incident. It is not Complementary Information because the article is not about a response to a past incident but about halting a planned feature due to potential risks. Therefore, the event is best classified as an AI Hazard, reflecting the plausible future harm that the adult mode could have caused if implemented.

OpenAI shelves plans for sexually-explicit chatbot 'indefinitely' amid mounting concerns

2026-03-27
News.com.au
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (OpenAI's chatbot) and its planned sexually-explicit mode, which was intended to generate erotic conversations. Although the feature was never released, the concerns raised relate to plausible future harms such as emotional harm, reputational damage, and risks to minors. Since the product was shelved before deployment, no direct or indirect harm has occurred. The article focuses on the company's decision to pause development and conduct further research, as well as regulatory and legal responses to AI chatbots' potential harms. Therefore, this is best classified as Complementary Information, as it provides context on societal and governance responses to AI risks rather than reporting a new AI Incident or AI Hazard.

After Two Delays, OpenAI Indefinitely Suspends Its Plan to Allow Erotic Conversations in ChatGPT, Deemed Too Costly and Too Risky

2026-03-27
BFMTV
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) and concerns the use of AI-generated erotic content, which could lead to harm such as exposure of minors to inappropriate content and associated psychological harm. The article states that the project has been suspended indefinitely before launch due to these risks and technical limitations, indicating that no harm has yet occurred. Therefore, this is an AI Hazard because the AI system's use could plausibly lead to harm, but no incident has materialized. The article also discusses internal and external concerns and the company's strategic refocus, but these are contextual and do not change the classification.

Goodbye to ChatGPT's 'Adult Mode': OpenAI 'Indefinitely' Suspends the Feature's Arrival, According to a Report

2026-03-26
El Español
Why's our monitor labelling this an incident or hazard?
The event involves the development and intended use of an AI system (ChatGPT with an 'Adult Mode') that could generate explicit content. However, since the feature has been suspended indefinitely and has not been deployed or used, no actual harm has occurred. The article discusses potential risks and concerns but does not describe any realized harm or incidents. Therefore, this situation represents a plausible future risk related to AI development but no incident or harm has materialized yet. It is best classified as Complementary Information because it provides context on OpenAI's governance and ethical considerations regarding AI content generation, rather than reporting an AI Incident or Hazard.

ChatGPT axes plans for X-rated version after staff left 'uncomfortable'

2026-03-27
The Sun
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (ChatGPT) and its planned adult mode, which was cancelled due to concerns about potential harms including exposure of underage users to adult content and unhealthy emotional attachments to AI. Although no harm has yet occurred, these concerns represent credible risks that could plausibly lead to AI Incidents if the mode had been launched. The event is about the potential for harm and the decision to avoid it, fitting the definition of an AI Hazard rather than an Incident or Complementary Information. It is not unrelated because it directly involves an AI system and its development/use.

Authorization of Erotic Conversations on ChatGPT Put on Hold

2026-03-27
20minutes
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (ChatGPT) and discusses the suspension of a feature due to concerns about potential harms, including exposure of minors to explicit content and mental health risks. No realized harm from this specific feature is reported, only potential harm and reputational/legal risks. The decision to suspend the feature is a precautionary measure in response to these plausible risks. Thus, this event fits the definition of an AI Hazard, as it involves circumstances where the use of an AI system could plausibly lead to harm, but no direct or indirect harm has yet occurred from this feature. The references to lawsuits and investigations provide context but do not indicate that this specific feature caused harm, so the classification is not AI Incident or Complementary Information.

OpenAI Postpones 'Adult Mode' Launch Indefinitely

2026-03-27
Chosun.com
Why's our monitor labelling this an incident or hazard?
An AI system is involved as the feature 'Adult Mode' would use AI to generate adult content. The postponement is due to concerns about potential harms, including ethical risks and technical limitations in age verification, which could plausibly lead to harm if deployed. However, since the feature has not been launched and no harm has occurred, this event represents a plausible future risk rather than an actual incident. Therefore, it qualifies as an AI Hazard rather than an AI Incident or Complementary Information.

ChatGPT Will Not Get an Adult Version: OpenAI Abandons Its Raciest Plan

2026-03-26
Todo Noticias
Why's our monitor labelling this an incident or hazard?
The event involves the use and development of an AI system (ChatGPT) and concerns about its potential to generate harmful adult content and deepfakes. The cancellation is a response to these plausible risks, including illegal content and ethical issues, but no direct or indirect harm has been reported as having occurred. Thus, it fits the definition of an AI Hazard, as the AI system's development could plausibly lead to harm, but no incident has materialized.

Erotic Conversations in ChatGPT Postponed Until Further Notice

2026-03-26
Le Parisien
Why's our monitor labelling this an incident or hazard?
The article involves an AI system (ChatGPT) and discusses a planned change in its use that was halted due to concerns about potential harms. Since no harm has yet occurred but there is a credible risk that allowing erotic conversations could lead to harms (e.g., inappropriate content, misuse, reputational damage), this qualifies as an AI Hazard. The event is about the plausible future harm from the AI system's use, not an incident of realized harm, nor is it merely complementary information or unrelated news.

ChatGPT: Erotic Conversations Are Over Until Further Notice, OpenAI Announces

2026-03-27
SudOuest.fr
Why's our monitor labelling this an incident or hazard?
The event involves the use and development of an AI system (ChatGPT) and concerns about its potential to cause harm, particularly to minors exposed to erotic content. Although no realized harm is described, the suspension reflects recognition of plausible future harm (e.g., exposure of minors to explicit content, mental health risks). Therefore, this is best classified as an AI Hazard, since the AI system's use could plausibly lead to harm, but no incident has yet occurred or been reported in this context.

OpenAI Is Shelving Its Planned ChatGPT 'Adult Mode' Days After Dropping Sora

2026-03-26
CNET
Why's our monitor labelling this an incident or hazard?
The event involves the development and use of AI systems (ChatGPT and other chatbots) and addresses concerns about potential harms related to sexualized content accessible to minors and exploitative material. However, no actual harm has been reported as having occurred from the adult mode, which was never launched. The shelving of the project is a preventive measure to avoid plausible future harm. Therefore, this situation constitutes an AI Hazard, as the AI system's development and potential use could plausibly lead to harm, but no incident has yet materialized.

'This was never just about sex' -- ChatGPT's 'adult mode' being shelved reveals a much bigger AI problem

2026-03-27
TechRadar
Why's our monitor labelling this an incident or hazard?
The article centers on the potential risks and controversies of deploying AI systems with adult or emotionally suggestive capabilities, which could plausibly lead to harm or controversy in the future. However, no actual harm or incident is described. The AI systems mentioned (ChatGPT's adult mode and Sora AI video maker) are discussed in terms of their potential for causing social or reputational issues, but no realized harm or incident is reported. Therefore, this event fits the definition of an AI Hazard, as it plausibly could lead to harm if deployed, but no harm has yet occurred.

After reported protests from employees and investors, OpenAI puts 'Adult Mode' plans on hold 'indefinitely'

2026-03-27
The Times of India
Why's our monitor labelling this an incident or hazard?
The article involves an AI system (ChatGPT) and its proposed feature (adult mode) that could plausibly lead to harm, such as exposure of minors to inappropriate content and unhealthy emotional attachments. However, no actual harm or incident has been reported; the project is suspended before release. Therefore, this is a case of a plausible future risk related to AI system use, fitting the definition of an AI Hazard rather than an Incident. The article focuses on the potential for harm and the decision to halt development, not on realized harm or a response to an incident, so it is not Complementary Information.

OpenAI shelves erotic chatbot 'indefinitely'

2026-03-26
The Verge
Why's our monitor labelling this an incident or hazard?
Although the AI system (ChatGPT) is involved and the content relates to sexualized AI outputs, the article does not report any realized harm or incidents caused by the AI system. Instead, it highlights a decision to halt development and deployment due to concerns about possible future harms and safeguarding issues, a plausible risk scenario rather than materialized harm. Because the main focus is the company's strategic decision and internal discussions rather than a specific AI hazard event, it is best classified as Complementary Information providing context on governance and risk-management responses.

OpenAI's Adult mode for ChatGPT may have been shelved indefinitely: Report

2026-03-27
Business Standard
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses an AI system (ChatGPT) and a proposed feature (Adult Mode) that involves AI-generated explicit content and intimate interactions. The feature's development and potential use raise significant ethical and safety concerns, including emotional dependency and access by minors, which are plausible harms. Since the feature has not been deployed and no actual harm has occurred, but the risks are credible and significant, this qualifies as an AI Hazard. The event is not an AI Incident because no realized harm has occurred, nor is it Complementary Information or Unrelated, as the focus is on the potential risks of a specific AI system feature.

OpenAI Doesn't Want You Talking Dirty to ChatGPT. 'Adult Mode' Paused Indefinitely

2026-03-26
PC Magazine
Why's our monitor labelling this an incident or hazard?
The article centers on the potential harms and concerns related to the use of AI systems generating sexually explicit content and the emotional impact on users, which could plausibly lead to harm. However, it does not report any realized harm or specific event where the AI system directly or indirectly caused injury, rights violations, or other harms. The mention of lawsuits and regulatory concerns indicates ongoing societal and governance responses. Therefore, this qualifies as Complementary Information, providing context and updates on AI-related risks and company strategies, rather than reporting a new AI Incident or AI Hazard.

OpenAI shelves erotic Chatbot indefinitely, reassesses long-term implications

2026-03-26
Firstpost
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (erotic chatbot, text-to-video model) and their development and intended use. However, no actual harm or incident has occurred; the decision to shelve the product is a precautionary or strategic response to concerns about potential societal and privacy risks. There is no indication that the AI system caused or contributed to any injury, rights violation, or other harm. Therefore, this is not an AI Incident or AI Hazard. Instead, it is a governance and strategic decision reflecting societal and ethical considerations, which fits best as Complementary Information, providing context on AI development and responses to concerns.

'Old drivers' can stop waiting! ChatGPT adult mode shelved indefinitely, OpenAI confirms - Liberty Times Net 3C Tech

2026-03-27
自由時報
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) and concerns about its potential use involving sexual content, which could plausibly lead to harms such as emotional dependency or exposure of minors to inappropriate content. However, since the feature is postponed and no harm has occurred, this situation represents a plausible future risk rather than an actual incident. Therefore, it qualifies as an AI Hazard, as the development and potential use of this AI feature could plausibly lead to harm, but no harm has yet materialized.

Amid strong backlash, OpenAI indefinitely shelves adult chatbot plan | International | Central News Agency CNA

2026-03-27
Central News Agency
Why's our monitor labelling this an incident or hazard?
The AI system (ChatGPT) is explicitly mentioned, and the plan to enable adult content could have led to social or reputational harm, but since the plan was postponed indefinitely before deployment, no harm occurred or is imminent. The article focuses on the decision and the reasoning behind it, which is a governance response to potential AI risks. The mention of shutting down Sora is also an update on AI ecosystem management. Hence, this is best classified as Complementary Information rather than an Incident or Hazard.

OpenAI Is Pulling Back on Its Erotic Chatbot. Here's Why.

2026-03-26
VICE
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (an erotic chatbot) whose development and intended use raise credible risks of harm, including psychological harm from emotional attachments and exposure of minors to adult content. However, since the chatbot has not been released and no harm has materialized, the situation is best classified as an AI Hazard. The article focuses on the plausible future harms that could arise if the system were deployed, justifying the classification as a hazard rather than an incident or complementary information.

OpenAI "indefinitely" shelves plans for erotic ChatGPT

2026-03-26
Ars Technica
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) and its planned use to generate erotic content, which was halted due to concerns about potential harms, including mental health risks and unsafe outputs. Since no actual harm from the 'adult mode' has occurred and the plan is shelved indefinitely, this constitutes a plausible future risk rather than a realized incident. The article also discusses existing lawsuits related to ChatGPT's mental health impacts, but the main focus here is on the shelving of the adult mode due to potential harms. Therefore, this event is best classified as an AI Hazard, reflecting the plausible future harm that the adult mode could have caused if deployed.

Aborted project: Erotic conversations will not be allowed in ChatGPT

2026-03-26
Le Matin
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) and its development and use, specifically regarding enabling erotic conversations. However, the project was suspended before deployment, and no actual harm or incident has occurred. The concerns raised are about potential risks, such as exposure of minors to explicit content and reputational damage, which are plausible future harms but not realized incidents. Therefore, this qualifies as an AI Hazard because the AI system's development and intended use could plausibly lead to harm, but no harm has yet materialized. It is not Complementary Information because the main focus is on the suspension due to potential risks, not on responses to past incidents or ecosystem updates.

OpenAI shelves plans for erotic chatbot

2026-03-26
Punch Newspapers
Why's our monitor labelling this an incident or hazard?
The article does not report any new AI Incident where harm has directly or indirectly occurred due to the AI system's development, use, or malfunction. Instead, it details OpenAI's decision to halt a potentially risky AI feature and discusses regulatory and legal contexts addressing AI harms, especially to minors. The presence of age-verification technology and the shelving of the explicit chatbot feature are responses to previously identified or potential harms, making this a governance and mitigation update. Hence, it fits the definition of Complementary Information rather than an Incident or Hazard.

OpenAI abandons yet another side quest: ChatGPT's erotic mode | TechCrunch

2026-03-26
TechCrunch
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) and the development of a new feature (erotic mode) that was planned but now indefinitely paused. The feature's development and potential use could plausibly lead to harms such as inappropriate content generation, ethical concerns, or misuse, which aligns with the definition of an AI Hazard. However, since the feature was never released and no harm has materialized, it does not meet the criteria for an AI Incident. The article does not primarily focus on responses or updates to a past incident, so it is not Complementary Information. It is not unrelated because it concerns AI system development and potential harm. Thus, AI Hazard is the appropriate classification.

OpenAI suspends its plans to launch an erotic chatbot inside ChatGPT

2026-03-26
La Capital
Why's our monitor labelling this an incident or hazard?
The event involves the development and intended use of an AI system (a chatbot with adult content capabilities). Although the chatbot was never launched and no harm has yet occurred, the article clearly outlines plausible future harms that could arise from its deployment, including exposure of minors to inappropriate content and the risk of unhealthy user relationships. These potential harms are credible and significant, making this a case of an AI Hazard rather than an Incident, since no actual harm has materialized. The article focuses on the cancellation decision due to these risks, not on a realized harm or incident.

OpenAI is retreating from its NSFW chatbot plans

2026-03-26
Android Authority
Why's our monitor labelling this an incident or hazard?
The article discusses a planned AI feature (an NSFW chatbot) that has been postponed before release due to concerns about possible harms such as emotional attachment or other effects. Since no harm has occurred and the AI system is not currently in use or causing incidents, this situation represents a plausible future risk rather than an actual incident. The main content is about the company's response and research efforts, which aligns with complementary information about AI development and governance rather than a direct incident or hazard.

OpenAI suspends its plans to launch its sexually explicit content chatbot

2026-03-26
El Economista
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (the chatbot with explicit content generation capability) whose deployment was planned but then suspended due to concerns about social and reputational risks. No direct or indirect harm has occurred yet, but the potential for harm (e.g., social harm, reputational damage, misuse) is credible and plausible. The event is about the potential risk of harm from the AI system's use, not about an incident where harm has already happened. It is not merely complementary information because the main focus is on the suspension decision due to risk concerns, not on responses to past incidents or broader ecosystem updates. Therefore, the classification is AI Hazard.

OpenAI Cancels Spicy "Adult Mode" Chatbot as Crisis Deepens

2026-03-26
Futurism
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the adult mode chatbot) whose use has directly led to mental health crises and risks of underage exposure to explicit content, which are harms to persons and communities. The company's acknowledgment of a 10% error rate in age restriction and concerns about AI-induced psychosis indicate actual harms or significant risks that have materialized. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information, as harm has occurred and the AI system's role is pivotal.

First Sora, now OpenAI cancels its adult ChatGPT feature

2026-03-26
Merca2.0 Magazine
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the development and suspension of an AI system (ChatGPT) intended for erotic content generation, which is an AI system use case. No direct or indirect harm has materialized; instead, the suspension is motivated by concerns about potential social and regulatory harms. This fits the definition of an AI Hazard, as the AI system's use could plausibly lead to harms such as psychological risks and regulatory violations if deployed. The event is not an AI Incident because no harm has occurred, nor is it Complementary Information or Unrelated, as it focuses on the AI system's development and its potential risks.

OpenAI drops plans to release an adult chatbot

2026-03-26
engadget
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (an adult-oriented chatbot) whose development and intended use raised significant concerns about potential harms, including psychological effects and exposure of minors to inappropriate or illegal content. The feature was never released, so no direct harm occurred, but the plausible risks and the company's decision to halt the release indicate a credible potential for harm. This fits the definition of an AI Hazard, as the event concerns a circumstance where the AI system's use could plausibly lead to an AI Incident, but no incident has yet occurred.

OpenAI halts development of an erotic mode for ChatGPT amid criticism

2026-03-26
Cadena 3 Argentina
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the suspension of a planned AI feature before its release, motivated by ethical and reputational concerns. There is no indication that the erotic mode caused any direct or indirect harm, nor that its development posed a credible risk of harm at this stage. The focus is on the company's decision-making and strategic priorities in response to criticism, which aligns with governance and societal response information. Hence, the event is best classified as Complementary Information rather than an Incident or Hazard.

ChatGPT: authorization of erotic conversations postponed indefinitely

2026-03-26
La Croix
Why's our monitor labelling this an incident or hazard?
The event involves the use and development of an AI system (ChatGPT) and concerns about its potential to cause harm through erotic conversations, especially regarding exposure of minors and mental health impacts. Although no actual harm has been reported from this feature since it has not been enabled, the article highlights credible risks and legal scrutiny that justify classifying this as an AI Hazard. The indefinite postponement is a response to these plausible risks, not a report of realized harm, so it does not qualify as an AI Incident. The article is not merely general AI news or a product update, but focuses on the potential harms and the company's response, so it is not Complementary Information either.

OpenAI Shelves Plans for Erotic Chatbot

2026-03-27
Newser
Why's our monitor labelling this an incident or hazard?
An AI system (the erotic chatbot) was under development and its use raised concerns about plausible future harms including psychological harm and risks to vulnerable populations. Since no harm has yet occurred and the project is paused to study potential impacts, this constitutes an AI Hazard rather than an Incident. The event focuses on potential risks and precautionary measures rather than realized harm or incident.

Why OpenAI Shelved ChatGPT's Erotic 'Adult Mode' Indefinitely? What Led To The Move; Debate Over AI Boundaries Explained

2026-03-27
NewsX
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) and its development and use. However, no actual harm has occurred as the feature was never launched; the decision to pause is a precautionary measure based on plausible risks such as user safety and child protection. Therefore, this situation represents a potential risk of harm that could plausibly lead to an AI Incident if the feature were deployed without adequate safeguards. The article primarily reports on the decision to halt development and the reasoning behind it, which is a governance and strategic response to potential AI risks. Hence, it fits the definition of Complementary Information rather than an AI Incident or AI Hazard.

ChatGPT is not getting an erotic mode, after all

2026-03-26
Digital Trends
Why's our monitor labelling this an incident or hazard?
The article does not report a new AI Incident or AI Hazard but rather provides complementary information about OpenAI's strategic decision to pause a potentially risky AI feature. It references past harms linked to ChatGPT but does not describe a new event where the AI system directly or indirectly caused harm or where a new plausible harm is emerging from the erotic mode feature itself. The focus is on the company's response to known issues and societal concerns, fitting the definition of Complementary Information.

High retraining costs and technical difficulty: OpenAI indefinitely shelves ChatGPT adult mode

2026-03-27
TechNews 科技新報
Why's our monitor labelling this an incident or hazard?
The event involves the development and use considerations of an AI system (ChatGPT) and discusses potential psychological harm and legal risks associated with an adult content mode. However, since the adult mode was never launched and no harm has occurred, this situation represents a plausible risk rather than an actual incident. Therefore, it qualifies as an AI Hazard because the development and potential deployment of such a mode could plausibly lead to harm, but no harm has materialized yet. The article primarily focuses on the decision to halt development to avoid these risks.

OpenAI drops plans to release an adult chatbot

2026-03-26
Democratic Underground
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (the adult chatbot) that was in development but never released due to concerns about generating harmful or illegal content. Since the chatbot was not deployed, no direct or indirect harm has occurred yet. However, the potential for harm from such a system is credible, given the difficulties in controlling inappropriate outputs and the ethical concerns raised by employees and investors. This fits the definition of an AI Hazard, as the AI system's use could plausibly lead to harm if released.

ChatGPT adds ads: The user as a bargaining chip for advertisers

2026-03-26
Urgente 24
Why's our monitor labelling this an incident or hazard?
The article focuses on the introduction of AI-driven advertising within ChatGPT, which is an AI system, and the use of AI for precise ad targeting. However, it does not describe any actual harm occurring to users or other parties, nor does it report any incident where the AI system caused injury, rights violations, or other harms. The concerns raised are about potential impacts on user trust and experience, which are plausible future issues but not confirmed incidents or hazards at this stage. Therefore, this event is best classified as Complementary Information, as it provides context and updates about AI system deployment and its societal implications without reporting a specific AI Incident or AI Hazard.

OpenAI adult chatbot put on hold 'indefinitely,' according to report

2026-03-26
KRON4
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the AI system (the adult mode chatbot) and concerns about its potential misuse to generate sexualized content, including illegal child pornography. However, it does not describe any incident where the AI system directly or indirectly caused harm. The decision to pause the project is a preventive measure in response to these concerns. This fits the definition of an AI Hazard, as the development and potential use of the AI system could plausibly lead to harm, but no harm has yet occurred or been reported in this context.

OpenAI puts erotic chatbot plans on hold 'indefinitely'

2026-03-26
Financial Times News
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (an erotic chatbot) whose development and intended use raise significant ethical and social concerns. Although the product release is on hold and no direct harm has been reported, the potential for harm to minors and societal impacts is credible and recognized by OpenAI and stakeholders. The challenges in training the AI to safely handle explicit content and the imperfect age verification system further support the plausibility of future harm. Since no actual harm has yet occurred, but plausible harm is evident, the event is best classified as an AI Hazard.

OpenAI shelves plans for erotic chatbot

2026-03-26
The Manila Times
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (a sexually explicit chatbot) whose development and potential use could plausibly lead to harms such as emotional harm, exploitation, or negative effects on minors. Since the product was shelved before release, no direct or indirect harm has occurred yet. The article focuses on the potential risks and regulatory/legal concerns, making this an AI Hazard rather than an AI Incident. It is not merely complementary information because the main focus is on the potential harm and the decision to halt development due to these risks.

Erotic conversations will not be allowed in ChatGPT

2026-03-26
L'essentiel
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) and its development and use, specifically regarding a feature allowing erotic conversations. However, the project has been suspended before deployment, and no harm has occurred or is reported to have plausibly occurred. The article mainly discusses the decision to halt the feature due to potential risks and reputational concerns, as well as technical challenges. Therefore, this is a case of a potential risk that has not materialized, and the main focus is on the company's response and strategic decision. This fits the definition of Complementary Information, as it provides context and updates on AI system development and governance responses without describing an AI Incident or AI Hazard.

The plan for a PornHub-style ChatGPT has a problem, and OpenAI doesn't want to solve it

2026-03-26
Hipertextual
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT with adult content capabilities) whose development and intended use could plausibly lead to harms such as illegal content generation and exposure to harmful sexual material. Although the system was not released and no harm has occurred, the article highlights credible concerns about potential future harms, including violations of laws and societal norms. This fits the definition of an AI Hazard, as the AI system's development and potential use could plausibly lead to an AI Incident if deployed without proper controls. There is no indication of realized harm or incident, so it is not an AI Incident. The article is not merely complementary information or unrelated news, as it focuses on the risk and cancellation of a specific AI system with potential for harm.

OpenAI shelves 'erotic' ChatGPT plans

2026-03-27
NewsBytes
Why's our monitor labelling this an incident or hazard?
The article discusses a planned AI feature that could plausibly lead to harm if deployed, such as emotional dependence or exposure of minors to explicit content. However, since the feature has been shelved and no harm has materialized, this situation represents a potential risk rather than an actual incident. Therefore, it qualifies as an AI Hazard due to the plausible future harm that the adult mode could have caused if implemented.

OpenAI suspends its plans to launch an erotic content chatbot

2026-03-26
TVN
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the development and planned deployment of an AI chatbot with explicit content capabilities, which involves AI system use. However, the plans were suspended before launch, and no actual harm or incident has occurred. The concerns raised relate to potential social and reputational risks, which align with plausible future harms. There is no indication of realized harm or ongoing incident, nor is the article primarily about responses to past incidents or ecosystem updates. Therefore, the event fits the definition of an AI Hazard, as the AI system's development and intended use could plausibly lead to harm if deployed.

First Sora, Now Sexy Chat? OpenAI Cancels Erotic ChatGPT Mode - Decrypt

2026-03-26
Decrypt
Why's our monitor labelling this an incident or hazard?
The event involves the development and planned use of an AI system (erotic chatbot mode) that was canceled due to concerns about potential harms, specifically unhealthy emotional dependency and harmful behavior. Since the AI system was not deployed and no direct harm has occurred, the event does not meet the criteria for an AI Incident. However, the concerns and internal warnings indicate a credible risk that such a system could plausibly lead to harm if launched, fitting the definition of an AI Hazard. The article also discusses broader societal and research context but does not report new realized harms or legal actions, so it is not Complementary Information. It is not unrelated because it directly concerns AI system development and potential harm.

ChatGPT erotic chatbot? OpenAI says no

2026-03-27
Stuff
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) and concerns about its potential use in generating sexualised content, which could plausibly lead to harms such as negative effects on user wellbeing or societal impact. However, since the feature has been paused before release and no harm has materialized, this situation represents a plausible future risk rather than an actual incident. Therefore, it fits the definition of an AI Hazard rather than an AI Incident or Complementary Information.

OpenAI's erotic content chatbot: Is the project still going ahead?

2026-03-26
www.expreso.ec
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (a chatbot with explicit content capabilities) whose development and intended use raised concerns, leading to its suspension before launch. Since no harm has occurred and the suspension is a precautionary or strategic measure, this constitutes a plausible risk scenario rather than an actual incident. The article also includes broader context about legal and social challenges faced by tech companies but does not report new harm caused by the AI system itself. Therefore, the event is best classified as Complementary Information, as it provides context and updates on AI governance and industry responses without describing a realized AI Incident or a direct AI Hazard.

OpenAI Shelves ChatGPT Adult Mode Indefinitely After Safety Concerns

2026-03-26
Windows Report
Why's our monitor labelling this an incident or hazard?
The article involves an AI system (ChatGPT) and discusses the development and intended use of a new feature (adult mode) that could allow explicit conversations. However, the feature has been paused before release due to plausible risks of harm, such as exposure of minors to inappropriate content and potential emotional harm. Since no harm has materialized and the feature is not active, this situation represents a plausible risk of harm that could arise if the feature were launched without adequate safeguards. Therefore, it fits the definition of an AI Hazard rather than an AI Incident or Complementary Information.

OpenAI shelves erotic ChatGPT after staff, investors, & advisors revolt

2026-03-26
The Next Web
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (ChatGPT) and its planned feature (adult mode) that was under development. The feature's development revealed technical and ethical challenges that could lead to harms such as mental health consequences and exposure of minors to explicit content. The presence of lawsuits alleging ChatGPT's involvement in user deaths underscores the serious risks. However, since the feature was never released and no direct harm from it occurred, the event does not meet the criteria for an AI Incident. Instead, it fits the definition of an AI Hazard because the shelved feature could plausibly have led to significant harms if deployed. The article also discusses broader governance and commercial responses, but the primary focus is on the plausible harm from the AI system's development and intended use, justifying classification as an AI Hazard.

OpenAI 'indefinitely' shelves its erotic chatbot plan - FTChinese

2026-03-26
英国金融时报中文版
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (an adult chatbot) whose development and potential use could plausibly lead to harms such as exposure of minors to inappropriate content and unhealthy user dependency. Since the harms are potential and the release has been postponed to avoid these risks, this qualifies as an AI Hazard rather than an AI Incident. There is no indication that harm has already occurred, and the main focus is on the plausible future risk and the company's response to it.

OpenAI Reportedly Delays ChatGPT 'Adult Mode' Indefinitely

2026-03-27
Tech Times
Why's our monitor labelling this an incident or hazard?
The article centers on a product development decision by OpenAI to delay a specific AI feature, without any mention of harm caused or plausible harm that could arise from the AI system's use or malfunction. The AI system (ChatGPT) is involved, but no incident or hazard is described. The content is about a strategic shift and internal prioritization, which fits the definition of Complementary Information as it provides supporting context about AI development and governance without reporting new harm or risk.

Splashing out $500 billion: Does Masayoshi Son want to become the 'king of AI power'? - TMTPost official website

2026-03-25
tmtpost.com
Why's our monitor labelling this an incident or hazard?
The article involves AI systems explicitly (OpenAI, AI data centers, AI models) and discusses their development and use. However, it does not describe any direct or indirect harm caused by these AI systems, nor does it describe a plausible future harm event. The Rakuten AI model controversy relates to intellectual property and transparency but does not document a breach of rights or harm caused by AI deployment. The large-scale AI infrastructure investment and Japan's AI policy shifts are strategic and ecosystem-level developments, not incidents or hazards. Hence, the article fits the definition of Complementary Information, as it provides supporting data and context about AI developments and governance without reporting a new AI Incident or AI Hazard.

OpenAI is No Longer Working on 'Adult Mode' for ChatGPT

2026-03-26
Thurrott.com
Why's our monitor labelling this an incident or hazard?
The article describes a development decision related to an AI system's feature that could plausibly lead to harm (exposure of minors to sexual content, unhealthy emotional attachments). However, no realized harm or incident is reported. The focus is on potential risks and the company's response to them, which fits the definition of an AI Hazard rather than an Incident or Complementary Information. It is not unrelated because it concerns AI system development and potential harm.

OpenAI shelves plans for erotic chatbot

2026-03-26
Inquirer
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (OpenAI's chatbot with a planned sexually explicit mode) and discusses the company's decision to halt its deployment due to concerns about potential harms, including emotional attachments and impacts on minors. Although no incident of harm has occurred from this feature, the concerns and regulatory scrutiny indicate a credible risk that such a system could lead to AI incidents if released. The event is about the plausible future harm from the AI system's use, fitting the definition of an AI Hazard rather than an AI Incident or Complementary Information. It is not unrelated because it directly concerns AI system development and its potential impacts.

OpenAI shelves plans for erotic chatbot

2026-03-26
Owensboro Messenger-Inquirer
Why's our monitor labelling this an incident or hazard?
The article mentions an AI system (a sexually explicit chatbot) whose development has been halted due to concerns about potential risks. However, no actual harm or incident has occurred, nor is there a specific credible imminent hazard described. The focus is on the company's response to perceived risks, which fits the category of Complementary Information rather than an Incident or Hazard.

Amid strong backlash, OpenAI indefinitely shelves adult chatbot plan | International Focus | International | Economic Daily News

2026-03-27
Udnemoney聯合理財網
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) and its planned feature involving adult content, which was postponed due to concerns about potential social and reputational harm. However, since the feature was never launched and no harm has materialized, this does not qualify as an AI Incident. Nor is it an AI Hazard because the article does not describe a credible imminent risk of harm from the AI system's use, only concerns and internal debate. The main focus is on OpenAI's response and governance decision, making this Complementary Information about AI ecosystem developments and risk management.

OpenAI indefinitely shelves 'adult mode' development plan, returns to core product strategy

2026-03-26
东方财富网
Why's our monitor labelling this an incident or hazard?
The event involves the development and potential use of an AI system (ChatGPT with an "adult mode" feature) that could plausibly lead to harms such as exposure of minors to inappropriate content and increased emotional dependency on AI, which are recognized social harms. However, since the feature has not been launched and no harm has materialized, this situation constitutes a plausible risk rather than an actual incident. Therefore, it fits the definition of an AI Hazard, as the development and intended use of the AI system could plausibly lead to an AI Incident if deployed without adequate safeguards.

OpenAI Doesn't Want You Talking Dirty to ChatGPT. 'Adult Mode' Paused Indefinitely

2026-03-26
PCMag UK
Why's our monitor labelling this an incident or hazard?
The article centers on OpenAI's decision to halt a feature due to concerns about possible harms and ongoing legal challenges, but it does not report a concrete AI Incident or a specific AI Hazard event. The harms discussed are potential or ongoing legal and reputational risks rather than a clearly articulated harm caused by the AI system's development, use, or malfunction. Therefore, this is best classified as Complementary Information, as it provides context and updates on responses to AI-related risks and challenges without describing a new incident or hazard.

No More Dirty Talk: ChatGPT's "Adult Mode" Suspended "Indefinitely" Over OpenAI's Age Prediction Inaccuracy - TechRound

2026-03-27
TechRound
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) and its use in generating adult content, which requires reliable age verification—a complex AI problem. The suspension of the feature is due to the AI's current inability to accurately predict user age, which could plausibly lead to harm by exposing minors to adult content, a violation of legal and ethical standards. Since the feature has not been released and no harm has occurred yet, this situation constitutes an AI Hazard rather than an AI Incident. The article primarily discusses the potential risks and the decision to delay deployment to avoid harm, fitting the definition of an AI Hazard.

OpenAI Halts 'Adult Mode' Chatbot Over Safety Concerns & Business Priorities - The News Chronicle

2026-03-27
The News Chronicle
Why's our monitor labelling this an incident or hazard?
The article involves an AI system (ChatGPT) and discusses the company's decision to halt a feature that could plausibly lead to harms such as emotional harm, exposure of minors to explicit content, and ethical risks. Since no harm has yet occurred and the decision is a precautionary measure to avoid potential harms, this qualifies as an AI Hazard. The focus is on plausible future harm rather than realized harm, and the event does not describe an incident or complementary information about a past incident.

OpenAI Joins Isara's US$94 Million Funding Round to Develop AI Agent Swarms

2026-03-28
凤凰网(凤凰新媒体)
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems, specifically multi-agent AI systems under development by Isara. While no harm has occurred yet, the technology's nature and scale imply plausible future risks, such as coordination failures, cascading errors, or misuse in critical decision-making domains like finance and geopolitics. The investment and research focus indicate ongoing development rather than an incident. The article does not report any actual harm or legal violations, nor does it primarily discuss responses to past incidents. Hence, it does not qualify as an AI Incident or Complementary Information. It is not unrelated because it clearly concerns AI system development with potential implications. Thus, the classification as AI Hazard is appropriate.

Adult Mode Stalls: OpenAI Indefinitely Shelves Erotic Chatbot Plan

2026-03-26
凤凰网(凤凰新媒体)
Why's our monitor labelling this an incident or hazard?
The event involves the development and intended use of an AI system (an adult-themed chatbot) that could plausibly lead to harms such as exposure of minors to harmful content and unhealthy emotional dependence. However, since the product has been postponed indefinitely and no actual harm has been reported or occurred, this situation constitutes a potential risk rather than a realized incident. Therefore, it fits the definition of an AI Hazard rather than an AI Incident. The article focuses on the postponement decision and concerns rather than on a realized harm or incident, so it is not Complementary Information or Unrelated.

OpenAI's adult chatbot probably isn't happening

2026-03-26
Mashable SEA
Why's our monitor labelling this an incident or hazard?
The article involves AI systems (ChatGPT and a video-generation app) and references harms linked to AI use (suicides after ChatGPT interactions), but it does not describe a new incident or hazard currently unfolding. Instead, it reports OpenAI's strategic decision to halt certain AI features to mitigate risks and lawsuits. This fits the definition of Complementary Information, as it updates on societal, legal, and governance responses to AI-related harms rather than describing a new harm or plausible future harm event.

OpenAI pauses erotic ChatGPT plans after internal pushback

2026-03-27
Silicon Republic
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses the development and planned use of an AI system (erotic ChatGPT) that was paused due to safety concerns about sexualized AI content, including illegal sexual behavior in datasets. This indicates AI system involvement in development and use stages. Although no direct harm has occurred because the product was not launched, the concerns and internal pushback reflect a credible risk of harm, such as violations of rights or exposure to harmful content. The shutdown of the AI video generator due to copyright infringement is mentioned but not detailed as a new incident here. The main focus is on the potential for harm and the company's response to mitigate it, which aligns with the definition of an AI Hazard rather than an Incident or Complementary Information.

OpenAI Indefinitely Shelves ChatGPT's Adult Chat Mode

2026-03-27
ai.zhiding.cn
Why's our monitor labelling this an incident or hazard?
The article does not report any actual harm caused by the adult mode AI system, nor does it describe a specific event where the AI system malfunctioned or was misused leading to harm. Instead, it reports a company decision to suspend development due to concerns about potential future harms and internal opposition. This fits the definition of Complementary Information, as it provides context on governance and strategic responses to AI development risks without describing a new AI Incident or AI Hazard.

OpenAI Abandons ChatGPT Adult Mode Development Plan

2026-03-27
ai.zhiding.cn
Why's our monitor labelling this an incident or hazard?
The event involves the development and use of an AI system (ChatGPT) and the decision to suspend a feature due to concerns about possible harmful outcomes. However, since the feature was never released and no harm has materialized, this situation represents a plausible risk rather than an actual incident. The article focuses on the strategic decision and responses to criticism rather than reporting any realized harm or malfunction. Therefore, it fits the definition of an AI Hazard, as the development could plausibly have led to harm if continued, but no harm has yet occurred.

OpenAI Indefinitely Shelves Adult ChatGPT Plan

2026-03-27
ai.zhiding.cn
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (ChatGPT) and discusses the development and planned use of an adult version that would generate sexually explicit content. The decision to postpone the project is due to credible warnings about potential harms, including psychological harm and risks of unhealthy emotional attachment, which are forms of injury to health and possible violation of rights. Since the adult version has not been released and no direct harm from it has occurred yet, but the risks are credible and significant, this fits the definition of an AI Hazard rather than an AI Incident. The article also references existing lawsuits related to ChatGPT's mental health impacts, but these are not directly caused by the adult version under development. The main focus is on the plausible future harm from the adult version, justifying classification as an AI Hazard.

OpenAI Suspends Plans For 'Adult' Mode | Silicon UK Tech News

2026-03-27
Silicon UK
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) and concerns about the potential harms from an erotic mode feature that was under development but now suspended. Since no actual harm has been reported, but the concerns about exposure to minors and emotional dependency are credible and plausible risks, this fits the definition of an AI Hazard. The article does not describe an AI Incident because no harm has materialized, nor is it merely complementary information since the main focus is on the suspension due to potential harms. Hence, the classification is AI Hazard.

Did OpenAI abandon erotic ChatGPT?

2026-03-27
AllToc
Why's our monitor labelling this an incident or hazard?
The article focuses on OpenAI's internal decision to halt a feature that would allow more explicit content generation, which is a response to scrutiny and concerns about AI behavior in sensitive content areas. There is no indication of actual harm occurring, nor a plausible immediate risk of harm from the paused feature. This is a governance and strategic update rather than an incident or hazard. Therefore, it qualifies as Complementary Information, providing context on AI development and societal responses rather than describing an AI Incident or AI Hazard.

OpenAI shelves 'adult mode' chatbot plans indefinitely after backlash

2026-03-27
Computing
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) and its proposed feature that could have led to harms related to exposure to explicit content and emotional dependence, which are plausible social harms. However, since the feature was not deployed and no harm has occurred, this qualifies as an AI Hazard rather than an AI Incident. The article primarily discusses the potential risks and the company's decision to halt development, which aligns with the definition of an AI Hazard as a circumstance where AI use could plausibly lead to harm. The broader context and commentary do not change this classification.

OpenAI Killed Its Flirty Chatbot Before It Ever Launched -- And the Reason Says Everything About AI's Identity Crisis

2026-03-26
WebProNews
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the flirtatious chatbot) whose development and intended use raised concerns about emotional harm and manipulation. Since the chatbot was canceled before release, no direct harm occurred, so it is not an AI Incident. However, the article clearly outlines the plausible risks of psychological harm and emotional dependency that such a system could cause if deployed, fitting the definition of an AI Hazard. The article does not primarily focus on responses or updates to past incidents, so it is not Complementary Information. It is not unrelated because it centers on an AI system and its potential harms.

ChatGPT: OpenAI Drops Its Adult/Erotic Mode for Now

2026-03-27
KultureGeek
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (ChatGPT) designed to generate adult/erotic content. The decision to abandon the feature is due to unresolved risks, including a high error rate in age verification that could allow minors access to explicit content, which would be a violation of legal protections for minors and could cause harm. Since the feature was never launched and no harm has occurred, but the risk of harm is credible and plausible, this qualifies as an AI Hazard rather than an AI Incident. The article focuses on the potential for harm and the reasons for halting the project, not on realized harm or incidents.

OpenAI Shelves Erotic Chatbot After Opposing Internal Adviser Vote

2026-03-27
WinBuzzer
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT with an age-prediction and content moderation system) whose malfunction (12% misclassification of minors as adults) and use (erotic chatbot interactions) have directly and indirectly led to harms including exposure of minors to explicit content and psychological harm culminating in suicides. The presence of multiple lawsuits and documented cases of ChatGPT-linked suicides confirms realized harm. The shelving of the feature is a response to these harms and risks, but the harms have already occurred. Hence, this is an AI Incident rather than a hazard or complementary information. The event is not unrelated as it centrally concerns AI system use and its consequences.

OpenAI's Sharp U-Turn: Sora and Adult Chat Plans Shelved as Profit Pressure Mounts | yam News

2026-03-27
蕃新聞
Why's our monitor labelling this an incident or hazard?
The article centers on OpenAI's business and product strategy changes, specifically the cancellation and postponement of AI-powered applications. While these applications involve AI systems, there is no mention or implication of harm caused or plausible harm that could arise from these decisions. The content does not describe any incident or hazard related to AI harms but rather provides context on company decisions and market positioning. Therefore, it is best classified as Complementary Information, as it provides supporting context about AI system development and deployment without reporting an incident or hazard.

OpenAI Invests in San Francisco Startup Isara at a US$650 Million Valuation to Help It Build AI Agent Collaboration Software Targeting Complex Problems in Finance, Biotech, and Other Industries

2026-03-26
证券之星
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the development and funding of an AI system (AI agent collaboration software) but does not describe any harm or risk of harm resulting from its development or use. The focus is on investment and technological progress, with no mention of incidents, malfunctions, or credible risks of harm. Hence, it does not meet the criteria for AI Incident or AI Hazard. It fits the definition of Complementary Information as it provides supporting data and context about AI ecosystem developments.

"QuitGPT" Boycott Could Become an AI Flashpoint as Ethical-Use Controversy Continues to Grow

2026-03-26
澳洲唐人街
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (ChatGPT, Claude) and their development, use, and governance. It discusses protests (QuitGPT) motivated by concerns over AI's potential misuse in autonomous warfare and mass surveillance, and legal disputes reflecting governance challenges. However, no actual harm or incident caused by AI systems is reported. The harms discussed are potential and ethical in nature, with credible risks of future misuse. Hence, the event fits the definition of an AI Hazard rather than an AI Incident or Complementary Information. It is not unrelated because AI systems and their societal impacts are central to the article.

Sam Altman Speaks Out: AI Expansion and Infrastructure Will Usher in a New Era of Global Abundance

2026-03-24
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The article does not report any specific AI Incident or AI Hazard. It does not describe any event where AI systems have caused or could plausibly cause harm. Instead, it is a high-level discussion and update on AI development, infrastructure investment, and economic impact, which fits the definition of Complementary Information. It provides important context and insights into AI's role in society and economy but does not focus on any harm or risk event. Therefore, the classification is Complementary Information.

OpenAI to Spend US$1.4 Trillion on Storage Procurement!

2026-03-26
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The event involves AI systems indirectly as it concerns infrastructure for AI development, but there is no mention or implication of harm, malfunction, or risk of harm caused or potentially caused by AI systems. The focus is on resource investment and market dynamics, which is informative but does not meet criteria for AI Incident or AI Hazard. It is therefore best classified as Complementary Information, providing context on AI ecosystem developments without describing an incident or hazard.

Investor Doubts and Employee Concerns: OpenAI's Adult Chatbot Project Shelved Indefinitely

2026-03-26
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (the adult chatbot) and discusses the potential social harms that could arise from its deployment, such as emotional dependency and minors accessing adult content. These are credible risks that have led to the project's indefinite postponement. Since no realized harm is reported and the project is not active, it does not qualify as an AI Incident. The focus is on potential future harm and risk mitigation, fitting the definition of an AI Hazard rather than Complementary Information or Unrelated news.

OpenAI's io Team Accused of Stealing Trade Secrets

2026-03-27
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The article details a lawsuit alleging trade secret theft involving AI-related technology development, which implicates AI systems indirectly through the products involved. However, no direct or indirect harm from AI system malfunction or misuse is reported as having occurred yet. The event centers on legal claims and potential intellectual property violations, which could lead to harm if proven but currently remain allegations under judicial review. This fits the definition of Complementary Information as it provides context and updates on AI ecosystem legal disputes without describing a concrete AI Incident or imminent hazard.

Judge Admits Relevant Testimony: Microsoft May Have to Pay Musk US$25 Billion in Damages

2026-03-26
新浪财经
Why's our monitor labelling this an incident or hazard?
While the case involves AI entities and discusses AI risks, the article primarily reports on legal and financial disputes and expert testimonies without describing any direct or indirect harm caused by AI system development, use, or malfunction. There is no indication that AI systems have caused injury, rights violations, or other harms at this stage. The mention of AI risk expert testimony is about potential risks but does not describe an incident or hazard event. Therefore, this is best classified as Complementary Information, providing context and updates on legal and governance responses related to AI.

OpenAI shelves plans for erotic chatbot amid rising safety and reputational concerns

2026-03-27
Head Topics
Why's our monitor labelling this an incident or hazard?
The event involves the development and potential use of an AI system (a sexually explicit chatbot) that could plausibly lead to harm, such as emotional harm, reputational damage, or risks to minors. However, since the product has been put on hold and no harm has yet occurred, this situation represents a plausible future risk rather than an actualized harm. Therefore, it qualifies as an AI Hazard. The article does not describe any realized harm or incident caused by the AI system, but rather the precautionary shelving of the project due to potential risks.

Why Is OpenAI Pausing Erotic ChatGPT Plans? All You Need To Know

2026-03-27
NDTV Profit
Why's our monitor labelling this an incident or hazard?
The event involves the use and development of AI systems (ChatGPT and Sora) and addresses potential risks related to sexualized AI content and emotional attachments. However, no actual harm or incident has been reported; the company is pausing the project due to plausible concerns about future harm. This fits the definition of an AI Hazard, as the development and use of these AI systems could plausibly lead to harms such as exposure of minors to inappropriate content or unhealthy emotional dependencies. The shutdown of Sora also reflects a response to criticism but does not describe a realized harm. Therefore, the classification is AI Hazard.

Inside OpenAI's Stargate Construction Site: 400,000 Chips, 1.2 Gigawatts of Power, the World's Largest Compute Cluster; Altman and Masayoshi Son Explain the US$500 Billion Bet on AI

2026-03-24
maker.zhiding.cn
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems, specifically large-scale AI computational infrastructure supporting advanced AI models and the pursuit of AGI. The event concerns the development and use of AI systems and infrastructure but does not describe any direct or indirect harm resulting from these systems. It discusses potential future risks and societal impacts but does not report any actual incidents or plausible immediate hazards. The focus is on providing detailed background, investment scale, technical and societal context, and stakeholder perspectives. This aligns with the definition of Complementary Information, which enhances understanding of AI developments and ecosystem without reporting new incidents or hazards.

OpenAI axes NSFW adult feature ChatGPT users begged for

2026-03-27
UNILAD Tech
Why's our monitor labelling this an incident or hazard?
The AI system (ChatGPT) was directly involved in providing adult content that led to concerns about mental health harms (AI psychosis) and underage access due to verification flaws. These constitute violations of user safety and potential harm to vulnerable groups (minors), fitting the definition of an AI Incident. The discontinuation is a response to these harms and risks, but the harms have already manifested or are ongoing, not merely potential future risks. Therefore, this event qualifies as an AI Incident rather than a hazard or complementary information.

OpenAI Indefinitely Shelves Adult ChatGPT Plan

2026-03-27
m.163.com
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (ChatGPT) and discusses the development and intended use of an adult version that would generate explicit content. The concerns raised include potential psychological harm to users, risks of unhealthy emotional attachment, and exposure of minors to adult content due to imperfect age verification. These are credible and plausible harms that could result from the AI system's use if launched. However, since the adult version has been postponed indefinitely and no new harm from this version has materialized yet, this is not an AI Incident. The article also references past lawsuits related to ChatGPT's mental health impacts, but these are background context rather than new incidents. The main focus is on the potential risks and the decision to delay deployment, fitting the definition of an AI Hazard rather than an Incident or Complementary Information. It is not unrelated because it clearly involves AI and its potential harms.

ChatGPT Won't Be Telling Erotic Stories: OpenAI Drops the Adult Mode Project

2026-03-27
LEBIGDATA.FR
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (ChatGPT) and its intended use to generate erotic content, which was paused due to concerns about potential harms, especially to minors and psychological effects. However, no actual harm has occurred, nor is there a report of malfunction or misuse leading to harm. The focus is on the potential risks and the company's decision to halt development to prevent such risks. This fits the definition of Complementary Information, as it provides context on governance and safety considerations related to AI development without describing a new AI Incident or AI Hazard.

Adult Mode Stalls: OpenAI Indefinitely Shelves Erotic Chatbot Plan - cnBeta.COM

2026-03-26
cnBeta.COM
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (OpenAI's conversational AI/chatbot) whose development and use have led to concerns about harm to minors and unhealthy dependencies, which are forms of harm to health and rights. The mention of lawsuits alleging harm to minors indicates realized harm. The AI system's malfunction (age prediction errors) contributes to these harms. The postponement and internal debate reflect responses to these harms but do not negate their occurrence. Hence, this is an AI Incident rather than a hazard or complementary information.

OpenAI halts "Adult Mode" as advisors, investors, and employees raise red flags

2026-03-26
The Decoder
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (the erotic chatbot) and its development being halted due to concerns about societal impact and technical failures in age verification that could have led to harm (minors accessing adult content). Since no actual harm has been reported and the project is paused before deployment, the event is about plausible future harm rather than realized harm. The involvement of the AI system in the potential harm is clear, and the decision to halt development reflects recognition of this hazard. Thus, the event is best classified as an AI Hazard.

OpenAI Shelves ChatGPT Adult Mode Indefinitely Amid Safety Concerns

2026-03-27
Stack Umbrella
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT with an Adult Mode feature) whose development and intended use raise significant safety and ethical concerns. The article explicitly discusses potential harms including emotional dependency, mental health risks, and exposure of minors to explicit content due to a 10% error rate in age verification. Although the feature has been paused before causing harm, the plausible future harms are credible and directly linked to the AI system's use. Hence, this is an AI Hazard rather than an AI Incident, as no realized harm is reported yet. The article focuses on the decision to pause and reconsider the feature to prevent these harms.

No More Lewd Chat: OpenAI Axes "Adult Mode" Plan and Pivots Fully Back to Business and Defense | ETtoday

2026-03-27
ETtoday AI科技
Why's our monitor labelling this an incident or hazard?
The article discusses OpenAI's strategic pivot away from certain AI features (adult mode, video generation) towards commercial and defense applications. While AI systems are involved, there is no indication of any direct or indirect harm caused by these AI systems, nor any plausible future harm described. The focus is on corporate strategy and product development decisions, which fits the definition of Complementary Information as it provides context and updates on AI ecosystem developments without reporting new incidents or hazards.

OpenAI Might Never Release the Adult Mode in ChatGPT

2026-03-27
Gadgets 360
Why's our monitor labelling this an incident or hazard?
The event involves the development and potential use of an AI system (ChatGPT) with a new feature that could plausibly lead to harm, specifically emotional harm to users and exposure of minors to explicit content. However, since the feature has not been released and no harm has yet occurred, this situation represents a credible risk or potential for harm rather than an actual incident. Therefore, it fits the definition of an AI Hazard rather than an AI Incident. The article does not focus on responses or updates to past incidents, so it is not Complementary Information, nor is it unrelated to AI harms.

OpenAI Shelves 'Adult Mode' Chatbot Amid Ethical Concerns, Strategic Turn - Techstrong.ai

2026-03-26
Techstrong.ai
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (the erotic chatbot) under development and the technical and ethical challenges faced, including a significant error rate in age verification that could allow minors to access explicit content. This represents a credible risk of harm to health and well-being (harm to persons) and potential violations of ethical standards. No actual harm is reported as having occurred yet, but the plausible future harm is significant enough to classify this as an AI Hazard. The company's decision to suspend development and focus on research and safer applications is a governance response but does not change the classification of the event as a hazard rather than an incident or merely complementary information.

After dropping Sora, OpenAI pauses erotic chatbot "adult mode" indefinitely: Report

2026-03-27
storyboard18.com
Why's our monitor labelling this an incident or hazard?
The article involves an AI system (ChatGPT) and a proposed feature ('adult mode') that would generate erotic content. Although the feature has not been deployed and no harm has occurred, the internal debates and concerns about potential emotional dependence, exposure of minors to explicit content, and societal impact indicate plausible risks of harm if the feature were released. Therefore, this event fits the definition of an AI Hazard, as it plausibly could lead to harms such as harm to health, violation of rights, or harm to communities if implemented without adequate safeguards.

Sora isn't the only thing OpenAI shut down this month - 9to5Mac

2026-03-29
9to5Mac
Why's our monitor labelling this an incident or hazard?
The article involves AI systems (Sora app and ChatGPT) and their development and use. However, it does not report any realized harm or direct or indirect incidents caused by these AI systems. The shutdown and pause are precautionary or strategic decisions to avoid potential harms related to content moderation challenges. This fits the definition of Complementary Information, as it provides updates on AI system development and governance responses without describing new incidents or hazards.

Disney ends OpenAI tie-up after Sora shutdown

2026-03-29
Bangkok Post
Why's our monitor labelling this an incident or hazard?
The article centers on the discontinuation of an AI product and the ending of a business partnership, with no mention or implication of harm or risk of harm caused by the AI system. The AI system (Sora) was operational but shut down voluntarily by the developer for strategic reasons. No direct or indirect harm is described, nor is there a credible risk of future harm from this shutdown. The information enhances understanding of AI ecosystem dynamics and corporate strategy, fitting the definition of Complementary Information rather than an Incident or Hazard.

OpenAI Won't Proceed With Launch of Sexy Chatbot

2026-03-28
InsideHook
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (a chatbot designed for adult conversations) whose development and intended use could plausibly lead to harm, given the cited lawsuits related to ChatGPT causing injuries or suicides. Since the adult chatbot was not launched, no actual harm has occurred from it, but the potential for harm is credible. The article mainly reports on the cancellation decision influenced by these concerns, indicating a plausible future risk rather than a realized incident. Hence, this qualifies as an AI Hazard.

Why OpenAI shelved ChatGPT adult mode?

2026-03-28
AllToc
Why's our monitor labelling this an incident or hazard?
The article does not report any realized harm or incident caused by the AI system. Instead, it details a corporate decision to halt a potentially controversial AI feature before release, reflecting risk management and governance considerations. There is no direct or indirect harm reported, nor a plausible imminent harm scenario described. Therefore, this is best classified as Complementary Information, providing insight into AI governance and product strategy rather than an AI Incident or Hazard.

OpenAI suspende de manera indefinida el modo adulto de ChatGPT

2026-03-26
europa press
Why's our monitor labelling this an incident or hazard?
An AI system (ChatGPT) is explicitly involved, and the event concerns its use and development. The suspension is due to the AI system's inability to reliably filter illegal or inappropriate content and to accurately control age verification, which could lead to harm to minors (harm to health or well-being). Although no harm has yet occurred, the described issues present a plausible risk of harm if the feature were launched. Therefore, this event qualifies as an AI Hazard because it highlights a credible potential for harm stemming from the AI system's malfunction or limitations, leading to the suspension of the feature to prevent such harm.

OpenAI indefinitely suspends its erotic version of ChatGPT

2026-03-26
infobae
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) and its development and use concerning erotic content generation. The suspension is motivated by concerns about potential harms such as illegal content generation (e.g., bestiality, sexual terror), underage access due to imperfect age controls, and reputational risks. These concerns indicate plausible future harms that the AI system could cause if deployed as intended. Since no actual harm or incident has been reported, and the focus is on preventing potential risks, the classification as an AI Hazard is appropriate. The article also includes complementary information about broader AI risks and governance proposals, but the main event is the suspension due to plausible harm risks, fitting the AI Hazard definition.

OpenAI suspends ChatGPT's erotic conversations feature: these are the reasons

2026-03-26
20 minutos
Why's our monitor labelling this an incident or hazard?
The event involves the development and intended use of an AI system (ChatGPT with an 'adult mode') and highlights technical and ethical challenges that could plausibly lead to harm, such as generating illegal or harmful content. However, since the feature was never launched and no harm has occurred, this situation represents a potential risk rather than an actual incident. Therefore, it qualifies as an AI Hazard because the AI system's development and intended use could plausibly lead to harm if deployed without adequate safeguards.

OpenAI cancels ChatGPT's 'adult mode': it was too dangerous for several reasons - ElNacional.cat

2026-03-27
ElNacional.cat
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT with age prediction and conversational capabilities) whose development and intended use (adult mode) were halted due to credible concerns about potential harms, including minors accessing inappropriate content and psychological impacts. Since the feature was never launched and no harm has materialized, it does not qualify as an AI Incident. However, the credible risk of harm from the AI system's use justifies classification as an AI Hazard. The article focuses on the potential risks and the decision to cancel the feature to avoid these risks, fitting the definition of an AI Hazard rather than an Incident or Complementary Information.

OpenAI suspends plans for an erotic chatbot

2026-03-27
Milenio.com
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (the erotic chatbot) whose development and intended use raised significant concerns about potential harms, including exposure of minors to sexual content and fostering unhealthy emotional attachments. Since the product was suspended before launch, no direct harm has occurred yet. However, the described risks are credible and plausible, fitting the definition of an AI Hazard. The article does not describe any realized harm or incident, nor does it focus on responses or updates to past incidents, so it is not an AI Incident or Complementary Information. It is not unrelated because it clearly involves an AI system and potential harm.

ChatGPT's owner indefinitely suspends its plan to launch an erotic chatbot

2026-03-26
Público.es
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (ChatGPT with an adult mode) whose development and intended use involve generating erotic content. The difficulties in controlling illegal content and the high error rate in age verification present a credible risk of harm to minors and societal harm if the system were launched. Since the launch has been suspended indefinitely and no actual harm has been reported, this is a plausible future harm scenario rather than a realized incident. Hence, it fits the definition of an AI Hazard rather than an AI Incident or Complementary Information.

OpenAI says goodbye indefinitely to ChatGPT's adult mode

2026-03-26
Business Insider
Why's our monitor labelling this an incident or hazard?
An AI system (ChatGPT) is explicitly involved, and the event concerns its development and use. The age-verification system's high error rate could plausibly expose minors to inappropriate adult content, a form of harm to a group of people (minors). Although no actual harm has been reported, the credible risk that minors could access adult content because of the AI system's shortcomings qualifies this as an AI Hazard rather than an Incident. The article does not report realized harm but highlights plausible future harm due to AI system malfunction and development challenges.

Portaltic.-OpenAI indefinitely suspends the adult mode of...

2026-03-26
Notimérica
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) whose development and use are directly linked to potential harm—specifically, minors accessing inappropriate adult content due to an unreliable age verification system and the AI's inability to reliably filter illegal or harmful content. Although no harm has yet occurred, the described issues present a credible risk of harm to minors and violation of protections intended to safeguard them. Therefore, this situation constitutes an AI Hazard, as the AI system's malfunction and development challenges could plausibly lead to an AI Incident if the adult mode were launched without resolving these problems.

OpenAI removes 'adult mode' after months of internal controversy | Sitios Argentina.

2026-03-27
SITIOS ARGENTINA - Argentine news and media portal.
Why's our monitor labelling this an incident or hazard?
The article does not report any realized harm or incidents caused by the AI system. Instead, it discusses the potential risks and challenges associated with the feature, which led to its suspension before deployment. This resembles an AI Hazard scenario, but since the feature was never fully implemented and caused no harm, and the article mainly covers the decision and reasoning behind halting the project, it is best classified as Complementary Information. It provides context on governance and ethical responses to AI development rather than describing an AI Incident or Hazard.

OpenAI cancels its "adult mode" indefinitely, and many experts are already talking about an internal crisis

2026-03-27
Computer Hoy
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) and its development and use, but the article does not describe any direct or indirect harm resulting from the AI system's deployment or malfunction. The cancellation of the feature is a preventive measure to avoid potential ethical and legal issues, but no incident or hazard has materialized. Therefore, this is not an AI Incident or AI Hazard. The article mainly provides contextual information about OpenAI's internal decision-making and the broader AI ecosystem challenges, which fits the definition of Complementary Information.

ChatGPT will no longer have an 'erotic mode'

2026-03-27
El Comercio
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the involvement of an AI system (ChatGPT) designed to generate erotic content, which was suspended due to safety and verification failures that could have allowed minors access to inappropriate content. This indicates a plausible risk of harm (e.g., exposure of minors to sexual content, violation of legal protections) if the system had been deployed. Since no actual harm is reported and the project was halted before launch, this fits the definition of an AI Hazard rather than an AI Incident. The event is not merely complementary information because it focuses on the suspension decision due to safety concerns, not on responses to a past incident. It is not unrelated because it involves an AI system and potential harm.

OpenAI had to choose between "being the company with an erotic AI" or competing with Anthropic. And it chose the obvious

2026-03-29
Xataka
Why's our monitor labelling this an incident or hazard?
The article does not describe any realized harm or incident caused by AI systems. The cancellation of the erotic chatbot project is a strategic business decision to avoid potential reputational and social risks, not an event where AI caused harm. The mention of new AI developments and product plans is informational and does not indicate any direct or indirect harm or plausible future harm. Therefore, this is best classified as Complementary Information, as it provides context and updates on AI development and corporate strategy without reporting an AI Incident or AI Hazard.

OpenAI's Lightcap sees memory shortages as a bottleneck for AI

2026-03-29
Bloomberg Línea
Why's our monitor labelling this an incident or hazard?
The article does not describe any event where AI system development, use, or malfunction has led or could plausibly lead to harm. It mainly reports on resource constraints and strategic plans for AI infrastructure expansion, which are typical ecosystem developments. No AI Incident or AI Hazard is indicated, nor is the article focused on responses or governance measures addressing specific harms. Hence, it fits the category of Complementary Information as it provides context and updates about the AI ecosystem without reporting harm or risk of harm.