LiblibAI Generates Inappropriate Content Due to Moderation Failure

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

LiblibAI, an AI content generation platform operated by Beijing Singularity Xingyu Technology, produced sexually explicit videos after users bypassed moderation with complex prompts. The incident, exposed by CCTV, highlighted flaws in content safety mechanisms. The company apologized, initiated technical fixes, and upgraded moderation to prevent future harm. [AI generated]

Why's our monitor labelling this an incident or hazard?

The AI system generated inappropriate content that bypassed safety controls, directly leading to harm in the form of unsafe and non-compliant content dissemination. The company's response and remediation efforts are complementary information but do not negate the fact that the AI system's malfunction caused harm. Therefore, this qualifies as an AI Incident due to realized harm from the AI system's outputs. [AI generated]
AI principles
Safety, Accountability

Industries
Media, social platforms, and marketing

Affected stakeholders
Business, General public

Harm types
Reputational, Psychological

Severity
AI incident

Business function
Other

AI system task
Content generation


Articles about this incident or hazard

LiblibAI: In edge cases involving certain complex prompt combinations and evasive phrasing, the platform could generate non-compliant content

2026-04-14
东方财富网
Why's our monitor labelling this an incident or hazard?
The article focuses on the platform's response to potential content safety issues related to AI-generated outputs, emphasizing remediation and prevention. There is no indication that harm has materialized or that an AI system malfunction directly caused harm. The event is primarily about governance and mitigation measures following a risk detection, fitting the definition of Complementary Information rather than an AI Incident or AI Hazard.

2026-04-14
guancha.cn
Why's our monitor labelling this an incident or hazard?
The AI system generated inappropriate content that bypassed safety controls, directly leading to harm in the form of unsafe and non-compliant content dissemination. The company's response and remediation efforts are complementary information but do not negate the fact that the AI system's malfunction caused harm. Therefore, this qualifies as an AI Incident due to realized harm from the AI system's outputs.

Well-known AI platform LiblibAI implicated in pornography? Reporter's test: borderline content can no longer be generated, but the black-market industry chain persists

2026-04-14
bjnews.com.cn
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems used to generate content, including misuse via obfuscated prompts to bypass content filters. The platform's initial failure to prevent the generation of borderline illegal content, together with the subsequent exposure and remediation, demonstrates the AI system's direct role in the dissemination of illegal and harmful content. The harms include breaches of legal regulations and risks to community safety and rights. The article also discusses the platform's responsibility and legal implications, confirming that the harm is realized and linked to the AI system's use and malfunction. Hence, this is an AI Incident rather than a hazard or complementary information.

CCTV exposes AI "porn-making" software; a company responds

2026-04-14
21jingji.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system that generated harmful sexually explicit content, which constitutes a violation of legal and ethical norms (harm under category (c): violations of human rights or breach of applicable law). The AI system's failure to restrict such content directly caused harm by producing and enabling access to inappropriate material. The company's response and regulatory measures are complementary information but do not negate the fact that harm occurred. Therefore, this qualifies as an AI Incident due to realized harm caused by the AI system's use and malfunction in content moderation.

CCTV exposes pornographic generated content; LiblibAI responds

2026-04-14
21jingji.com
Why's our monitor labelling this an incident or hazard?
The AI system's use directly led to the generation of inappropriate and potentially harmful content, which constitutes a violation of content safety and can be considered harm to communities or a breach of obligations under applicable law protecting users from harmful content. The incident is materialized harm caused by the AI system's failure to restrict such content, thus qualifying as an AI Incident. The company's response and remediation efforts are complementary information but do not negate the incident classification.

Generated content involves pornography; LiblibAI responds

2026-04-14
每日经济新闻
Why's our monitor labelling this an incident or hazard?
An AI system is explicitly involved (the 哩布哩布AI content generation platform). The incident involves the AI system generating harmful content that violates content safety norms, which can be considered harm to communities and a violation of content standards. The harm has already occurred as the inappropriate content was generated and accessible. The platform's response and remediation efforts are complementary information but do not negate the fact that the incident occurred. Therefore, this qualifies as an AI Incident due to the realized harm from the AI system's outputs.

CCTV exposes LiblibAI for pornographic generated content; official apology: all risk pathways fully blocked

2026-04-14
驱动之家
Why's our monitor labelling this an incident or hazard?
An AI system (哩布哩布AI) was used to generate content that violated content safety standards by producing sexually explicit material. This constitutes a harm to communities and a failure in the AI system's content moderation mechanisms, thus meeting the criteria for an AI Incident. The platform's response and remediation efforts are complementary information but do not negate the fact that harm occurred. Therefore, this event is classified as an AI Incident.

LiblibAI apologizes over content safety issues, says it has launched a special review and technical fixes

2026-04-14
凤凰网(凤凰新媒体)
Why's our monitor labelling this an incident or hazard?
The AI system's malfunction in content generation directly led to the creation and dissemination of inappropriate and harmful content, which constitutes harm to communities and violation of content safety standards. The harm has already occurred, and the AI system's role is pivotal. Therefore, this qualifies as an AI Incident. The company's response and remediation efforts are complementary information but do not negate the incident classification.

CCTV exposes an AI porn-production industry chain; the named company issues an apology: it will further improve management and review processes

2026-04-14
扬子网(扬子晚报)
Why's our monitor labelling this an incident or hazard?
The AI system is explicitly involved as it generates content based on user prompts. The harm is realized as the AI system has been used to produce and spread illegal pornographic content, which is a violation of applicable laws and harms community standards. The platform's failure to effectively filter and block such content constitutes a malfunction or inadequate use of the AI system. Therefore, this qualifies as an AI Incident due to direct harm caused by the AI system's outputs and the breach of legal and regulatory requirements.

AI software exposed by CCTV over pornographic content apologizes; platform vulnerabilities draw attention

2026-04-14
中华网科技公司
Why's our monitor labelling this an incident or hazard?
The AI systems in question are generative AI applications that produced inappropriate adult content due to insufficient or bypassed content moderation. This directly caused harm by disseminating harmful content, which affects community standards and potentially violates laws. The presence of AI is explicit, and the harm is realized, not just potential. Hence, this is an AI Incident rather than a hazard or complementary information. The regulatory context underscores the seriousness of the event but does not overshadow the primary incident: harm caused by the AI systems' outputs.

After being named by CCTV, LiblibAI responds: rectification complete, content safety mechanisms will be continuously improved

2026-04-14
天极网
Why's our monitor labelling this an incident or hazard?
The article does not report any realized harm caused by the AI system, nor does it describe a plausible future harm scenario. Instead, it details the company's corrective actions following a prior issue, which aligns with providing updates on mitigation and governance responses. Therefore, this is Complementary Information as it enhances understanding of the AI ecosystem and responses to AI-related content safety concerns without describing a new AI Incident or AI Hazard.

Well-known AI platform LiblibAI implicated in pornography? Reporter's test: borderline content can no longer be generated, but the black-market industry chain persists

2026-04-15
千龙网
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (哩布哩布AI) that was used to generate borderline pornographic content, in violation of legal and regulatory frameworks concerning obscene content. The AI system's misuse via evasive prompt phrasing directly led to the generation and dissemination of harmful content, constituting harm to communities and a violation of law. The platform's acknowledgment and remediation efforts do not negate the fact that harm occurred. Additionally, the existence of a black market for illicit AI-generated content further supports the presence of ongoing harm. Hence, this is an AI Incident due to realized harm caused by the AI system's use and misuse.

After being named, LiblibAI responds: it will continue to improve its content safety mechanisms

2026-04-15
wap.stockstar.com
Why's our monitor labelling this an incident or hazard?
The article focuses on the AI system's use and the company's response to previously identified content safety issues. There is no indication that the AI system caused direct or indirect harm at this time; rather, the company is taking corrective and preventive measures. This fits the definition of Complementary Information, as it provides an update on responses and improvements following concerns about AI content safety, without reporting a new incident or hazard.

Generated content involves pornography; LiblibAI apologizes

2026-04-14
新浪财经
Why's our monitor labelling this an incident or hazard?
The AI system's use led to the generation of harmful content violating content safety norms, which constitutes harm to communities and a breach of content standards. The apology and remediation indicate that the harm has occurred and is being addressed. Therefore, this qualifies as an AI Incident because the AI system's outputs directly caused harm through inappropriate content generation.

Just in | Pornographic content? LiblibAI apologizes!

2026-04-14
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The AI system generated content that violates content safety standards, which constitutes harm to communities and platform users. The incident is directly linked to the AI system's malfunction in content moderation and safety mechanisms. The platform's apology and remediation efforts confirm the realization of harm. Therefore, this event meets the criteria for an AI Incident rather than a hazard or complementary information.

Named by CCTV over pornographic generated content, LiblibAI apologizes!

2026-04-14
新浪财经
Why's our monitor labelling this an incident or hazard?
An AI system is explicitly involved, generating content based on user prompts. The generated content is sexually explicit, violating norms and legal/ethical standards, which harms social communities and public morality. This harm has already occurred, making it an AI Incident. The platform's response and apology are complementary but do not negate the incident classification.

CCTV exposes LiblibAI for pornographic generated content; official apology: all risk pathways fully blocked

2026-04-14
新浪财经
Why's our monitor labelling this an incident or hazard?
An AI system (哩布哩布AI) was used to generate content that violated content safety norms, producing sexually explicit material. This constitutes a harm related to content safety and community harm, as inappropriate adult content can negatively impact communities and platform users. The incident has already occurred, and the platform's response is a follow-up to this realized harm. Therefore, this qualifies as an AI Incident due to the AI system's use leading to harmful content generation and the failure of safety mechanisms.

Well-known AI platform LiblibAI implicated in pornography? Reporter's test: borderline content can no longer be generated, but the black-market industry chain persists

2026-04-14
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (哩布哩布AI) whose use and malfunction (failure to fully block illicit content generation) directly led to the generation of borderline pornographic content, which is illegal and harmful. The platform's initial failure to prevent such content and the ongoing black market for AI-generated adult content represent realized harms related to violations of laws and community standards. The article also details the platform's remediation efforts, but the harm has already occurred. Hence, this is an AI Incident rather than a hazard or complementary information.

Outrageous! CCTV exposes porn-related loopholes in multiple AI apps; can a single sentence bypass moderation?

2026-04-14
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system generating harmful content due to security loopholes, leading to real harms such as exposure of minors to inappropriate content and violations of personal rights. The AI system's failure to adequately filter or prevent such content is a direct cause of these harms. The platform's acknowledgment and remediation efforts confirm the incident's materialization. The presence of a gray industry exploiting these vulnerabilities further supports the classification as an AI Incident rather than a mere hazard or complementary information.

LiblibAI apologizes for non-compliant videos generated via a prompt vulnerability, comprehensively upgrades its content safety system

2026-04-14
ai.zol.com.cn
Why's our monitor labelling this an incident or hazard?
The AI system generated harmful content due to a prompt vulnerability that allowed bypassing safety filters, resulting in the creation of inappropriate videos. This is a direct harm caused by the AI system's malfunction in content moderation and safety, impacting community standards and potentially violating platform policies and societal norms. The company's response and remediation efforts confirm the recognition of harm and the AI system's role in causing it. Hence, the event meets the criteria for an AI Incident.

CCTV exposes AI apps generating pornographic content; LiblibAI responds

2026-04-14
t.cj.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The AI system was used to generate explicit content that violates content safety norms, causing harm to community standards and potentially violating legal or ethical obligations. The incident involves the AI system's use leading directly to harmful outputs. The platform's response and remediation efforts are complementary information but do not negate the fact that harm occurred. Hence, the event is classified as an AI Incident.

After being named by CCTV, LiblibAI responds: rectification complete

2026-04-14
life.3news.cn
Why's our monitor labelling this an incident or hazard?
The article does not describe any realized harm caused by the AI system but rather reports on the company's response to previously identified content safety issues. The focus is on remediation, compliance, and governance improvements, which are responses to earlier concerns. Therefore, this is Complementary Information as it provides an update on mitigation and governance following a prior issue, rather than reporting a new AI Incident or AI Hazard.

CCTV names AI over pornographic content; explicitness shocks the entire internet!

2026-04-15
m.163.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system generating inappropriate adult content without triggering any restriction mechanisms, indicating a failure in the AI system's content moderation safeguards. The harm is realized as explicit content is being produced and distributed, which is harmful to communities and violates applicable laws and regulations. The involvement of the AI system in generating and enabling this content is direct and central to the incident. Hence, this qualifies as an AI Incident rather than a hazard or complementary information.