AI Chat App Developers Convicted for Generating Obscene Content in China


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

In Shanghai, developers of the AI chat app 'Alien Chat' were convicted for intentionally modifying AI prompts to generate and monetize obscene content, in violation of Chinese law. The court found that their actions directly enabled the AI to produce illegal material, resulting in prison sentences and fines. The case highlights developer liability for AI-generated harm. [AI generated]

Why's our monitor labelling this an incident or hazard?

The event explicitly involves an AI system (a large language model integrated into the AC software) whose development and use directly caused the production of obscene content, which is illegal and harmful under applicable law. The developers intentionally modified system prompts to optimize the AI for generating such content, establishing a causal link between the AI system's outputs and the harm. The criminal convictions of the developers and a user further confirm that the harm was realized. This therefore qualifies as an AI Incident: the AI system's development and use directly caused violations of law and harm to communities. [AI generated]
AI principles
Accountability, Safety

Industries
Media, social platforms, and marketing

Affected stakeholders
General public

Harm types
Public interest

Severity
AI incident

Business function
Other

AI system task
Content generation, Interaction support/chatbots


Articles about this incident or hazard


One user of an AI companion app held criminally liable for obscene chats

2026-01-14
finance.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (a large language model integrated into the AC software) whose development and use directly caused the production of obscene content, which is illegal and harmful under applicable law. The developers intentionally modified system prompts to optimize the AI for generating such content, establishing a causal link between the AI system's outputs and the harm. The criminal convictions of the developers and a user further confirm the realized harm. Therefore, this qualifies as an AI Incident due to direct harm caused by the AI system's use and development leading to violations of law and harm to communities.

Users chatted with AI about "obscene content" on the app; two developers were sentenced at first instance to four years and eighteen months respectively; the AI service obscenity case goes to second-instance hearing today

2026-01-14
finance.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (a large language model integrated into AC software) that was used to generate obscene content. The developers' deliberate actions in modifying system prompts to produce such content and the resulting large-scale dissemination of pornographic material constitute a direct AI Incident. The harms include violation of laws, harm to community standards, and criminal activity related to obscene content production and distribution. The AI system's role is pivotal as it generated the harmful content, and the developers' involvement in guiding the AI's outputs establishes causation. Therefore, this is classified as an AI Incident.

Second-instance hearing in the AI chat app obscenity case: the technology takes no blame, but lawbreakers will be held accountable

2026-01-14
新浪网
Why's our monitor labelling this an incident or hazard?
The AI system ('Alien Chat') is explicitly described as an AI chat application that generated over 3,600 segments of obscene content, with a large user base and significant monetization. The developers intentionally altered the AI's prompt system to bypass moral constraints, enabling sustained production of illegal content. The harm is realized and legally recognized as criminal production of obscene materials, which is a violation of law and harmful to society. The AI system's role is pivotal as the source of the obscene content generation. Hence, this event meets the criteria for an AI Incident due to direct harm caused by the AI system's use and the developers' deliberate actions.

Second-instance hearing in China's first criminal obscenity case against AI developers; defense: likely to apply for expert witnesses to testify

2026-01-14
新浪网
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (a large language model) used in an app that generated obscene content due to prompt modifications by developers. This led to criminal convictions for producing obscene materials, which constitutes a legal violation and harm to societal norms. The AI system's use directly caused the harm, fulfilling the criteria for an AI Incident. The article focuses on the legal case and harm caused, not just potential or future risks, nor is it merely complementary information or unrelated news.

What are the legal details of China's first AI obscene-chat case?

2026-01-14
新浪网
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (an AI chat companion app) whose development (modification of AI model prompts to bypass ethical constraints) and use (generating obscene content) directly led to harm recognized under criminal law (production and distribution of obscene materials). The developers' actions in altering the AI system to produce illegal content constitute a direct cause of harm, fulfilling the criteria for an AI Incident. The article details realized harm, legal consequences, and societal impact, not just potential risks or complementary information.

Users chatted with AI about "obscene content"; two developers were sentenced at first instance; the AI service obscenity case goes to second-instance hearing today

2026-01-14
app.myzaker.com
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (a large language model integrated into AC software) whose use and modification by developers directly led to the production of illegal pornographic content. The developers' modification of system prompts to facilitate obscene chat demonstrates direct involvement in causing harm. The harms include violation of laws against producing and distributing obscene materials, a clear legal and societal harm. The criminal convictions confirm that harm has materialized. Hence, this qualifies as an AI Incident rather than a hazard or complementary information.

The key point of dispute at this hearing was the extent to which the defendants' writing and modification of system prompts influenced the model's generation of obscene content. [Full text]

2026-01-14
bjnews.com.cn
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (a large language model used in the Alien Chat app) whose outputs were manipulated through prompt engineering to generate obscene content. This content was recognized by the court as obscene material, leading to criminal charges and sentencing of the defendants. The AI system's use and the defendants' modification of prompts directly caused the harm (production and distribution of illegal obscene content). Therefore, this is an AI Incident as the AI system's use and manipulation have directly led to harm under the legal framework.

Ziniu Headlines | Second-instance hearing opens in China's first case of an AI service provider sentenced for obscenity; a reporter's on-the-scene account

2026-01-14
yangtse.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (Alienchat) that was designed and operated to produce obscene content, which led to criminal convictions of the developers. The AI system's outputs caused harm by producing illegal obscene materials, fulfilling the criteria of harm to communities and violation of applicable laws. The AI system's development and use directly caused this harm. Hence, this is an AI Incident rather than a hazard or complementary information.

A first in China: users chatted with AI about "obscene content" on an app; no verdict announced at the second-instance hearing

2026-01-14
app.myzaker.com
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (the AI companion chat app) that generated obscene content in conversations with users. The developers were convicted for producing obscene materials, indicating that the AI system's use directly led to legal harm and societal harm. The case involves the AI system's development and use, with the AI's outputs being central to the harm. This meets the criteria for an AI Incident because the AI system's malfunction or misuse caused violations of law and harm to communities. The detailed legal proceedings and convictions confirm that harm has materialized, not just a potential risk.

When AI companionship turns into obscene chat: don't let warmth cross the line

2026-01-15
static.nfnews.com
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (a large language model used in an AI companion chat app) whose use led directly to harm recognized by the legal system: production and distribution of obscene content, which is a violation of law and harms societal norms. The developers' intentional removal of content filters and facilitation of such content further implicates the AI system's role in causing harm. The harm is realized and legally recognized, meeting the criteria for an AI Incident rather than a hazard or complementary information. The article's focus is on the incident and its legal and societal implications, not merely on potential future harm or responses.

Second-instance hearing opens in the AI-era "Qvod case"; defense lawyers hope to request a "prompt obscenity test" experiment

2026-01-15
新浪网
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (a large language model integrated into the AC chat software) whose use directly led to the generation of illegal and harmful content (obscene/sexual content). The defendants were criminally prosecuted and convicted for producing obscene materials through the AI system, which constitutes harm under the category of violations of applicable law protecting fundamental rights (specifically laws against obscene materials). The AI system's development, use, and modification (including prompt engineering) are central to the incident. Therefore, this is an AI Incident because the AI system's use directly caused harm and legal violations.

"AI服务涉黄第一案"二审:谁该为技术越界负责?

2026-01-15
finance.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of a large language model AI system that was configured with prompts encouraging the generation of sexually explicit content, which was then disseminated to users. The developers were criminally convicted for producing obscene materials, indicating direct harm and legal violations caused by the AI system's outputs. The AI system's role is pivotal as it generated the harmful content, and the developers' deliberate prompt engineering was a key factor. This meets the criteria for an AI Incident because the AI system's use directly led to violations of law and harm to communities/public interest. The article does not merely discuss potential or future harm, but actual legal consequences and harm realized through the AI system's outputs.

Developers sentenced after users engaged in obscene chats with the AI

2026-01-15
t.cj.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (Alien Chat) that generates chat content through AI-user interaction. The AI system's outputs included large amounts of sexually explicit content, which the court classified as obscene material, leading to criminal prosecution of the developer. This constitutes a direct harm related to violations of legal and societal norms (harm to communities and breach of applicable law). The AI system's use directly led to this harm, fulfilling the criteria for an AI Incident.

The full story of the first "AI companion obscenity case": the AI chats obscenely with users, and the platform gets sentenced?

2026-01-15
21jingji.com
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (AlienChat) that uses a large language model to generate content in one-on-one chats. The AI system was modified with 'jailbreak' prompts to bypass content restrictions, leading to the generation of obscene content. The platform was criminally prosecuted and found guilty of producing obscene materials, which constitutes a violation of law and harm to societal order. The AI system's use and the developers' modifications directly caused this harm. Therefore, this qualifies as an AI Incident under the framework, as it involves direct harm (legal violations and societal harm) caused by the misuse of the AI system.

AI lover companion chats lead to obscenity convictions

2026-01-16
finance.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (AlienChat) that uses large language models to generate chat content. The operators' use and configuration of the AI system directly led to the production and dissemination of obscene content, which is illegal and harmful. The court's criminal judgment confirms that harm has materialized. The event clearly meets the definition of an AI Incident because the AI system's use has directly led to violations of law and harm to communities. The detailed description of the AI system's role, the operators' actions, and the legal consequences support this classification. Although there is debate about responsibility, the court ruling and the described harms confirm the incident status.