OpenAI Halts Launch of Explicit Content Chatbot Amid Risk Concerns

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

OpenAI has indefinitely suspended plans to launch a chatbot capable of generating explicit sexual content, known internally as "mode Citron," following internal criticism and concerns from employees and investors about potential social and reputational risks. The decision was taken to avert possible harm before any incident occurred. [AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves an AI system (ChatGPT) and a proposed feature (erotic mode) that was under development but has been suspended indefinitely before release due to concerns about potential harms. Since the feature was not deployed and no harm has occurred, this situation represents a plausible risk of harm that has been averted or deferred. Therefore, it fits the definition of an AI Hazard, as the development and intended use of the AI system feature could plausibly lead to harms (e.g., inappropriate content generation, psychological harm) if launched. There is no indication of an AI Incident (actual harm), Complementary Information (response to a past incident), or unrelated news. [AI generated]
AI principles
Accountability; Safety

Industries
Consumer services

Affected stakeholders
Business

Harm types
Reputational

Severity
AI hazard

AI system task
Content generation; Interaction support/chatbots


Articles about this incident or hazard


The "erotic" version of ChatGPT has been suspended indefinitely at OpenAI - HotNews.ro

2026-03-27
HotNews.ro

OpenAI suspends a ChatGPT conversation feature. It is on hold indefinitely

2026-03-27
DCnews
Why's our monitor labelling this an incident or hazard?
The article discusses the development and use of an AI system (ChatGPT) and its planned feature (adult mode) that was suspended due to technical and ethical concerns. However, no actual harm or incident has occurred; rather, the suspension is a precautionary measure in response to plausible risks such as minors accessing inappropriate content and the AI generating illegal or harmful content. Therefore, this event represents a plausible future risk (AI Hazard) rather than a realized harm (AI Incident). It is not merely complementary information because the main focus is on the suspension decision due to potential harms, not on updates or responses to past incidents. It is not unrelated because it involves AI system development and potential harm.

ChatGPT's digital fantasies remain a mystery. The platform's "hot" version has been blocked

2026-03-27
Evenimentul Zilei
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (ChatGPT) and its planned use (an adult mode with erotic content). However, the launch has been postponed indefinitely due to concerns about potential misuse and harm. Since no actual harm has occurred and the article focuses on the potential risks and strategic decisions to avoid them, this qualifies as an AI Hazard. It reflects a credible risk that the AI system's use could plausibly lead to harm, but no direct or indirect harm has materialized yet. Therefore, the classification is AI Hazard.

OpenAI drops ChatGPT's erotic mode and shifts its strategy toward business and military clients

2026-03-26
ziarulnational.md
Why's our monitor labelling this an incident or hazard?
The article focuses on OpenAI's strategic shift and project suspensions, including the halted 'adult mode' for ChatGPT, without reporting any actual harm or incidents caused by AI systems. The mention of military contracts and business focus relates to future directions but does not describe a credible or imminent AI hazard. The content is about company decisions and market competition, which fits the definition of Complementary Information rather than an Incident or Hazard.

OpenAI suspends its plans to launch an erotic-content chatbot

2026-03-27
El Comercio Perú
Why's our monitor labelling this an incident or hazard?
The article involves an AI system (a chatbot with explicit content generation capabilities) and its development and intended use. However, no harm has occurred, nor is there a report of a malfunction or misuse causing harm. The suspension is a precautionary measure in response to concerns about possible social and reputational risks. Therefore, this is not an AI Incident or AI Hazard but rather Complementary Information about governance and risk management in AI deployment.

OpenAI indefinitely suspends its plans to launch ChatGPT's adult mode: this is said to be the reason

2026-03-27
Semana.com Últimas Noticias de Colombia y el Mundo
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (ChatGPT) and its planned feature for generating adult content. The suspension is due to concerns about potential harms, including illegal content generation and failure of age controls, which could plausibly lead to harm if the feature were launched. Since no harm has yet occurred and the product was not released, this qualifies as an AI Hazard rather than an AI Incident. The event is not merely complementary information because it centers on the risk and decision to suspend the feature due to potential harms, not just an update or response to a past incident.

OpenAI decided to suspend its erotic-content chatbot over risks and criticism

2026-03-26
Excélsior
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (a chatbot with explicit content generation capabilities) whose development and intended use raised concerns about potential harms, including reputational and social risks. However, since the product was suspended before launch and no actual harm or incident resulting from its use is reported, this constitutes a plausible risk rather than a realized harm. Therefore, the event qualifies as an AI Hazard because the AI system's use could plausibly lead to harms such as psychological, reputational, or legal damage, but no direct or indirect harm has yet occurred. The article also provides broader context on regulatory and societal responses, but the main focus is on the suspension decision due to potential risks, not on a past incident or complementary information about mitigation of an existing incident.

OpenAI canceled the launch of its explicit-content chatbot

2026-03-26
La Nación, Grupo Nación
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (a chatbot with explicit content capabilities) whose development and intended use raised concerns about social risks and reputational damage. However, since the product launch was canceled before deployment, no direct or indirect harm has occurred. The event thus represents a credible potential risk of harm that was averted by cancellation. This fits the definition of an AI Hazard, as the AI system's use could plausibly have led to harms related to social impact and reputational damage if it had been launched.

ChatGPT's creators reach a decision on their sexually explicit content chatbot

2026-03-27
El Financiero, Grupo Nación
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (ChatGPT with a 'mode Citron' for explicit sexual content) and the decision to suspend its release due to concerns about risks. However, no direct or indirect harm has occurred as a result of this AI system's use or malfunction. The concerns are about potential social and reputational harm, which could plausibly arise if the system were deployed. This fits the definition of an AI Hazard, as it involves a circumstance where the use of an AI system could plausibly lead to harm, but no harm has yet materialized.

OpenAI cancels its plan to launch a sexually explicit chatbot

2026-03-27
Tribuna Noticias
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (a chatbot capable of generating explicit sexual content) whose development and intended use posed plausible risks of harm, including social and reputational harms. Since the product was not launched and no harm has materialized, this constitutes an AI Hazard rather than an AI Incident. The article focuses on the potential risks and the decision to halt the project to avoid those risks, fitting the definition of an AI Hazard. References to other incidents provide context but do not change the classification of this event.