Chinese Platforms Crack Down on Harmful AI-Generated Content


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Major Chinese platforms like WeChat, Douyin, and Toutiao have intensified enforcement against AI-generated content that replaces human creators, citing issues such as misinformation, copyright infringement, and low-quality material. Actions include mass content deletions, account bans, and updated policies to mitigate ongoing harms from automated AI content production.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves AI systems generating content that violates platform policies, including potential intellectual property infringements and dissemination of low-quality or misleading information. The platforms' enforcement actions indicate that harm has occurred or is ongoing, such as violations of rights and harm to communities through misinformation or unauthorized content. Therefore, this qualifies as an AI Incident because the AI system's use has directly or indirectly led to violations and harms that the platforms are actively addressing.[AI generated]
AI principles
Accountability; Safety

Industries
Media, social platforms, and marketing

Affected stakeholders
Workers; General public

Harm types
Economic/Property; Public interest; Reputational

Severity
AI incident

AI system task
Content generation


Articles about this incident or hazard


[AI] WeChat Cracks Down on Automated AI Content Creation on Official Accounts as Multiple Platforms Move to Regulate AI-Generated Content

2026-04-10
ET Net
Why's our monitor labelling this an incident or hazard?
The article focuses on platforms updating and enforcing rules against AI-generated automated content creation and taking measures like content deletion and account restrictions. This is a societal and governance response to AI misuse, aiming to mitigate risks and harms associated with AI-generated content. No specific AI Incident (harm realized) or AI Hazard (plausible future harm) is described as the main event. Instead, the article reports on ongoing regulatory actions and platform policies, which fits the definition of Complementary Information.

Chinese Online Platforms Regulate Generative AI Content; WeChat Deletes Posts in Bulk | Cross-Strait | Central News Agency (CNA)

2026-04-10
Central News Agency
Why's our monitor labelling this an incident or hazard?
The event involves AI systems generating content that violates platform policies, including potential intellectual property infringements and dissemination of low-quality or misleading information. The platforms' enforcement actions indicate that harm has occurred or is ongoing, such as violations of rights and harm to communities through misinformation or unauthorized content. Therefore, this qualifies as an AI Incident because the AI system's use has directly or indirectly led to violations and harms that the platforms are actively addressing.

Chinese Online Platforms Regulate Generative AI Content; WeChat Deletes Posts in Bulk | Mainland Stocks Insight | Cross-Strait | Economic Daily (經濟日報)

2026-04-10
Udnemoney聯合理財網
Why's our monitor labelling this an incident or hazard?
The event involves AI systems in the form of generative AI content creation tools being used on major Chinese platforms. However, the article focuses on the platforms' enforcement actions and policy updates to address misuse or low-quality AI-generated content rather than describing a specific AI Incident or AI Hazard causing or plausibly leading to harm. The described actions are responses to previously identified issues and aim to mitigate harms such as misinformation, copyright infringement, and low-quality content dissemination. Therefore, this is best classified as Complementary Information, as it provides updates on societal and governance responses to AI-related challenges rather than reporting a new incident or hazard.

AI-Generated Content-Farm Articles Posted in Bulk! Couple Easily Earns Nearly 10 Million in a Year, Then Pays the Price | Cross-Strait (Mainland) | SETN.COM

2026-04-11
三立新聞
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI for fully automated content creation and the platform's response to this misuse, which involves AI system use and its consequences. However, the article does not report any realized harm such as injury, rights violations, or community harm caused by the AI-generated content. The platform's enforcement actions are a governance response to policy violations, not an incident of harm. Therefore, this event is best classified as Complementary Information, as it provides context on societal and governance responses to AI misuse rather than describing an AI Incident or AI Hazard.

AI-Generated Content-Farm Articles Flood WeChat; Couple Banned After Easily Earning Nearly 10 Million in a Year | ETtoday AI Tech | ETtoday新聞雲

2026-04-11
ETtoday AI科技
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems for automated content generation, which is explicitly mentioned and confirmed by the platform's detection and ban. However, the article does not report any direct or indirect harm such as injury, human rights violations, or significant harm to communities or property. The platform's response is a governance action to enforce content creation policies. Therefore, this event does not meet the criteria for an AI Incident or AI Hazard. It is best classified as Complementary Information because it provides context on AI content governance, platform policy enforcement, and the evolving landscape of AI-generated content management.

AI Reportedly Writes Official-Account Posts Earning RMB 2 Million a Year; WeChat Moves to Regulate Non-Human Writing

2026-04-12
on.cc東網
Why's our monitor labelling this an incident or hazard?
The article involves AI systems used for automated content generation, which is explicitly mentioned. However, there is no indication that the AI-generated content has directly or indirectly caused harm such as misinformation, rights violations, or other significant harms. Instead, the focus is on the platform's regulatory response to prevent fully automated AI content creation, which could plausibly lead to harms if unchecked. Since no actual harm has been reported and the main focus is on the platform's governance measures and policy updates, this event is best classified as Complementary Information.

WeChat Bans Content Created Entirely with AI

2026-04-11
ANTARA News - The Indonesian News Agency
Why's our monitor labelling this an incident or hazard?
The article focuses on WeChat's updated operational rules banning fully AI-generated content and automated mass publication, which is a governance measure. It does not report any realized harm or incident caused by AI systems, nor does it describe a plausible future harm event occurring now. Instead, it is a policy response to potential risks associated with AI-generated content. Therefore, this qualifies as Complementary Information, providing context on societal and governance responses to AI-related challenges.

WeChat Bans Fully AI-Generated Content; Human Creativity Becomes the Priority

2026-04-11
investor.id
Why's our monitor labelling this an incident or hazard?
The article centers on a platform's policy change and enforcement measures to address the risks posed by AI-generated content. It highlights concerns about potential harms (e.g., misinformation, copyright infringement) but does not describe a concrete AI Incident where harm has already occurred. Instead, it is a governance response aimed at mitigating AI-related risks. Therefore, this qualifies as Complementary Information, as it provides important context and updates on societal and governance responses to AI-related challenges without reporting a new AI Incident or AI Hazard.

WeChat Bans Content Created Entirely with AI - ANTARA News Jawa Timur

2026-04-11
Antara News
Why's our monitor labelling this an incident or hazard?
The article focuses on WeChat's updated operational rules to restrict AI-generated content and automated publishing, aiming to promote genuine human creativity and prevent misuse of AI for mass content production. There is no indication that an AI system caused harm or that harm has occurred or is imminent. Instead, this is a societal and governance response to potential risks associated with AI content generation. Therefore, it qualifies as Complementary Information rather than an AI Incident or AI Hazard.

WeChat App Bans Use of AI-Generated Content

2026-04-11
Antara News Kepri
Why's our monitor labelling this an incident or hazard?
The article focuses on WeChat's enforcement of rules against AI-generated content without human expression, aiming to promote genuine human creativity. There is no mention of any harm caused or plausible harm that could arise from AI systems in this context. The event is about a platform's policy change and enforcement, which fits the definition of Complementary Information as it relates to societal and governance responses to AI.

WeChat Bans Content Created Entirely with AI and Other Tools - ANTARA News Megapolitan

2026-04-11
Antara News
Why's our monitor labelling this an incident or hazard?
The article does not report any harm caused by AI systems, nor does it describe any incident or hazard involving AI malfunction or misuse leading to harm. Instead, it details a platform's updated operational rules restricting AI-generated content, which is a governance response to potential issues with AI content. Therefore, this is Complementary Information as it provides context on societal and governance responses to AI use, without describing a specific AI Incident or AI Hazard.

China's WeChat Bans AI-Generated Content - Report

2026-04-11
BERNAMA
Why's our monitor labelling this an incident or hazard?
The article reports on WeChat's decision to prohibit AI-generated content without human intervention, which is a regulatory or platform policy measure. There is no indication of harm caused or plausible harm from AI systems in this event. The focus is on the platform's stance and rules, which is complementary information about societal and governance responses to AI.

Tencent moves to rein in AI content flood on WeChat with stricter rules

2026-04-10
South China Morning Post
Why's our monitor labelling this an incident or hazard?
The event involves AI systems in content generation and distribution, but the article centers on Tencent's policy update and enforcement to control AI-generated content rather than an incident where AI caused harm. There is no mention of realized harm such as misinformation causing community harm or rights violations. The article is primarily about a governance and platform response to AI-related challenges, making it Complementary Information rather than an AI Incident or Hazard.

WeChat restricts AI-generated content, stresses role of human creators

2026-04-10
China News 中国新闻网
Why's our monitor labelling this an incident or hazard?
The article centers on WeChat's policy update to restrict AI-generated content and promote human creators, which is a governance response to concerns about AI's impact on content sustainability and credibility. No specific AI Incident or AI Hazard is described; rather, the event is about societal and platform-level responses to AI developments. Therefore, it fits the definition of Complementary Information, as it provides context and updates on governance measures related to AI without reporting a new harm or credible risk event.

WeChat Bans Automated Content Publishing Due to Rise in Replacement of Human Creators

2026-04-10
yicaiglobal.com
Why's our monitor labelling this an incident or hazard?
The article primarily reports on new rules and enforcement actions by WeChat and other platforms against AI-generated content that lacks human creativity or is produced in bulk. These actions are responses to concerns about AI's impact on content quality and creator replacement but do not describe any realized harm or incident caused by AI systems. The focus is on policy updates and content moderation measures, which constitute complementary information about societal and governance responses to AI-related challenges, not an AI Incident or Hazard.

WeChat Tightens Curbs on AI-Generated Content After Viral Income Claim

2026-04-10
City News Service
Why's our monitor labelling this an incident or hazard?
The article centers on platform policy changes and enforcement actions aimed at managing AI-generated content to maintain content quality and prevent misuse. There is no indication that AI systems have directly or indirectly caused harm such as misinformation, rights violations, or other damages. The event is primarily about governance responses and updates to platform rules addressing AI use, which fits the definition of Complementary Information rather than an AI Incident or AI Hazard.

WeChat: Continuing to Strengthen Source Labeling for 'Self-Media' Creators

2026-03-27
每日经济新闻
Why's our monitor labelling this an incident or hazard?
The announcement involves AI-generated content but focuses on platform policy and content labeling to mitigate misinformation risks. There is no direct or indirect harm reported or occurring, nor is there a plausible immediate risk of harm from the AI system itself. The event is about a societal and governance response to AI content, enhancing transparency and user awareness, which fits the definition of Complementary Information.

Using Critical Thinking to Resist the Fog of Falsehood in AI Science Communication

2026-03-30
光明网
Why's our monitor labelling this an incident or hazard?
The article explicitly addresses the risks and limitations of AI-generated science communication, including misinformation and deepfakes, which are recognized AI-related harms. However, it does not report a particular incident or event where such harms have materialized or where an AI system malfunctioned or was misused to cause harm. Instead, it discusses the broader ecosystem, societal implications, and the importance of critical thinking to mitigate these risks. This aligns with the definition of Complementary Information, which includes updates or analyses that enhance understanding of AI impacts without describing a new AI Incident or Hazard.

2026-03-28
人民网
Why's our monitor labelling this an incident or hazard?
The article does not describe a specific AI Incident or AI Hazard event but rather details a governance and regulatory response to ongoing problems related to AI-generated and staged content in short videos. It emphasizes the importance of labeling and transparency to prevent harm but does not report a new incident or a direct or plausible future harm event. Therefore, it fits the definition of Complementary Information as it provides context and updates on societal and governance responses to AI-related harms in the digital content ecosystem.

Jingcai Observation | Digital Intelligence Empowers Positive Energy: China Internet Media Forum Clarifies Three Directions

2026-03-31
China Daily
Why's our monitor labelling this an incident or hazard?
The article centers on the discussion and promotion of AI and digital intelligence technologies as enablers of positive content production and dissemination. It does not report any realized harm, violation, or malfunction related to AI systems, nor does it warn of potential harm. The content is primarily about strategic directions, technological empowerment, and governance goals to ensure AI benefits society and media ecosystems. Therefore, it fits the definition of Complementary Information, providing context and updates on AI's role in media without describing an AI Incident or AI Hazard.

WeChat: 'Self-Media' Creators Should Properly Label Their Sources, with Recent Governance Focused on Three Content Categories

2026-03-27
南方网
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI-generated content (deep synthesis technology) by 'self-media' creators and the platform's enforcement against accounts spreading AI-generated false information and causing public misunderstanding and social harm. The harms include misinformation, social discord, and violation of content norms, which fall under harm to communities and possibly violations of rights. Since the AI system's misuse has directly led to these harms, this qualifies as an AI Incident rather than a hazard or complementary information. The platform's governance and enforcement are responses to an ongoing AI Incident involving misinformation and social harm caused by AI-generated content.

Huawei AppGallery Named 'Platform with Outstanding Results in the 2025 Qinglang Campaign' | 中华网

2026-03-30
m.tech.china.com
Why's our monitor labelling this an incident or hazard?
The article does not report any harm or incident caused by AI systems, nor does it describe any plausible future harm. Instead, it details positive governance measures and industry commitments to regulate AI-generated content and ensure compliance with laws and standards. This fits the definition of Complementary Information, as it provides context and updates on societal and governance responses to AI content challenges without describing a new AI Incident or AI Hazard.

Case Study | AI Generation Is Not a Defense! Publishers Who Fail to Verify Content That Infringes Others' Reputation Rights Must Bear Liability

2026-03-31
财经网
Why's our monitor labelling this an incident or hazard?
The event involves the use of a generative AI system to produce content that was published and caused reputational harm, which is a violation of rights under applicable law. The AI system's outputs were used without proper verification, leading to harm. This fits the definition of an AI Incident because the AI system's use directly led to harm (violation of rights, specifically name and reputation rights). The court's decision confirms the harm and responsibility linked to the AI-generated content.

Huawei AppGallery Named 'Platform with Outstanding Results in the 2025 Qinglang Campaign' | 天极网

2026-03-30
天极网
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (AI intelligent review systems and risk assessment models) in content governance and security. However, the article does not describe any harm or incident caused by AI systems, nor does it indicate any plausible future harm. Instead, it focuses on positive governance actions, compliance with regulations, and the promotion of healthy AI content development. Therefore, this is best classified as Complementary Information, as it provides supporting context about AI ecosystem governance and responses rather than reporting an AI Incident or Hazard.

Requiring AI Content to 'Show Its Identity' Is Both a Standard and a Cornerstone of Trust

2026-03-29
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The article centers on a new regulatory framework mandating AI-generated content to be labeled, which is a governance response to AI-related risks like misinformation and trust erosion. It does not describe a specific event where AI use or malfunction has directly or indirectly caused harm, nor does it report a plausible future harm from AI systems. Instead, it focuses on the policy and societal response to AI content risks, making it Complementary Information as it provides context and updates on AI governance without reporting a new AI Incident or Hazard.

YouTube Is Drowning in AI Junk Videos: The Algorithm Demands More While Executives Worry

2026-03-30
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems in content creation and recommendation algorithms. It discusses the challenges and risks posed by the mass production of AI-generated low-quality videos, which could plausibly lead to harm to the community and content ecosystem by degrading user experience and information quality. However, it does not report any actual realized harm or incidents caused by these AI systems. The focus is on the potential and ongoing challenge rather than a specific incident of harm. Hence, it fits the definition of an AI Hazard rather than an AI Incident or Complementary Information.

AI Generation Is No Excuse: Publisher Held Liable for Defaming Others with Unverified Content

2026-03-31
新浪新闻中心
Why's our monitor labelling this an incident or hazard?
The event describes a case where an AI system was used to generate content that defamed a deceased livestreamer and their affiliated organization, causing reputational harm. The AI-generated content was published without proper verification, leading to a court ruling that the publisher is liable for the harm. This fits the definition of an AI Incident because the AI system's use directly led to harm to the reputation of a person or group, which is a violation of rights under applicable law. The involvement of AI in generating the harmful content and the resulting legal consequences confirm this classification.

A Video Is Not Necessarily the Truth: Short Videos Must Also 'Prove Their Identity' | Highlights

2026-03-28
qlwb.com.cn
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI-generated content in short videos that are used to deceive viewers and generate illicit profits, which constitutes a violation of public rights and harms communities by spreading misinformation. This is a direct harm caused by the use of AI systems in content generation and dissemination. The regulatory measures to label AI-generated content are a response to this ongoing harm. Therefore, the event qualifies as an AI Incident because the development and use of AI systems have directly led to harm through misinformation and deception in short videos.

Clearing Out AI 'Digital Swill': Governance in the Service of Intelligence

2026-04-14
news.cn
Why's our monitor labelling this an incident or hazard?
The article explicitly describes AI systems generating large volumes of low-quality, automated content that harms the information ecosystem and public discourse, which fits the definition of harm to communities and violation of rights. The AI systems' use in automated content creation directly leads to these harms. The platforms' deletion of such content and regulatory measures are responses but do not negate the occurrence of harm. Hence, this is an AI Incident rather than a hazard or complementary information, as the harm is ongoing and materialized.

Xinhua Fresh Report | Clearing Out AI 'Digital Swill': Governance in the Service of Intelligence

2026-04-15
新华网广东频道
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems for automated content generation that has directly led to harms such as the spread of low-quality, misleading, or false information, crowding out original content, and negatively impacting social discourse and public trust. The platforms' removal of such AI-generated content and enforcement of rules confirm the recognition of these harms. The AI systems' development and use are central to the incident, and the harms to communities and cultural environment are clearly articulated. Hence, the classification as an AI Incident is appropriate.

It Is Time to Clear Out the Nauseating 'Digital Swill'

2026-04-16
大洋网
Why's our monitor labelling this an incident or hazard?
The article describes the presence and proliferation of AI-generated low-quality content that misleads and pollutes the information environment, which can be considered harm to communities and social cognition. However, the article focuses on the platform's response and the need for better mechanisms rather than describing a specific incident of harm or a direct event causing harm. Therefore, it is best classified as Complementary Information, as it provides context and updates on responses to AI-related harms rather than reporting a concrete AI Incident or AI Hazard.

WeChat and Xiaohongshu Collectively Say No to AI Ghostwriting

2026-04-12
新浪财经
Why's our monitor labelling this an incident or hazard?
The event involves AI systems used for automated content generation that directly leads to harm by flooding platforms with low-quality, AI-generated content that displaces genuine creators and pollutes the online information environment. This constitutes harm to communities and a violation of intellectual property and labor rights of original creators. The article reports that this harm is occurring, not just potential, and that platforms are taking measures to address it. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information.

Media Commentary: AI Ghostwriting Earns 2 Million a Year? How 'Digital Swill' Became 'Digital Fertilizer'

2026-04-12
新浪财经
Why's our monitor labelling this an incident or hazard?
The AI system is explicitly described as generating content automatically without depth or original thought, leading to widespread dissemination of low-quality content that harms the online community and content creators' rights to fair recognition and opportunity. This constitutes harm to communities and a violation of the information environment, fitting the definition of an AI Incident. The platforms' algorithmic incentives and user engagement patterns exacerbate the issue, but the AI system's role in producing and enabling this content is pivotal. The article also discusses governance measures as complementary information but the main event is the realized harm from AI-generated low-quality content.

Sharp Commentary | AI Ghostwriting Earns 2 Million a Year? How 'Digital Swill' Became 'Digital Fertilizer' | 京报网

2026-04-12
news.bjd.com.cn
Why's our monitor labelling this an incident or hazard?
The event clearly involves AI systems used for automated content creation (AI System involvement). The use of these AI systems has directly led to harms including damage to communities (pollution of the online content ecosystem), violation of creators' rights and interests (economic harm to genuine creators), and broader societal harm through misinformation and low-quality content proliferation. The article describes realized harms, not just potential ones, and discusses platform responses to mitigate these harms. Hence, this qualifies as an AI Incident due to the direct and indirect harms caused by AI-generated low-quality content and its systemic effects.

Douyin: More Than 538,000 AI-Infringing Videos Removed to Date, Over 4,000 Violating Accounts Penalized

2026-04-23
东方财富网
Why's our monitor labelling this an incident or hazard?
The event involves AI systems as it concerns AI-generated content and AI-related infringements. However, the main focus is on the platform's response—removal of infringing content and penalizing accounts—rather than the occurrence of a new AI Incident or a new AI Hazard. The harms (copyright infringement, misleading content) are already recognized and the platform's actions are mitigating these harms. Therefore, this is Complementary Information providing an update on ongoing governance and enforcement efforts related to AI harms.

[AI] Douyin: More Than 538,000 AI-Infringing Videos Removed, Over 4,000 Violating Accounts Penalized

2026-04-23
ET Net
Why's our monitor labelling this an incident or hazard?
The article focuses on the platform's response to AI-related infringements and misuse, detailing the scale of content removal and account penalties. It does not report a specific AI incident causing harm, nor does it describe a plausible future harm scenario. Instead, it provides complementary information about ongoing governance and mitigation efforts addressing AI misuse and infringement on the platform.

Governance of AI Short Dramas Enters the 'Deep-Water Zone'; Producers Must Hold the First Line of Compliance

2026-04-22
21jingji.com
Why's our monitor labelling this an incident or hazard?
The article does not describe a specific AI Incident where harm has occurred due to AI system use or malfunction. Instead, it outlines the governance challenges, legal complexities, and the need for improved regulatory frameworks and industry standards to address potential copyright infringements in AI-generated short dramas. This fits the definition of Complementary Information, as it provides context, expert analysis, and recommendations for managing AI-related risks without reporting a concrete incident or imminent hazard.

Douyin: More Than 538,000 AI-Infringing Videos Removed to Date, Over 4,000 Violating Accounts Penalized

2026-04-23
每日经济新闻
Why's our monitor labelling this an incident or hazard?
The article focuses on the platform's governance and mitigation measures addressing AI-generated content that violates rights and misleads users. While the underlying AI-generated infringing content represents harms (violations of intellectual property and rights), the article's main narrative is about the platform's actions to combat these issues, making it complementary information rather than a new AI Incident or Hazard. The harms are background context to the enforcement update.

Douyin Tackles AI Infringement: More Than 538,000 Videos Removed to Date, Including Over 30,000 Pieces of Improper Content Using 'AI Domineering CEO' Personas to Mislead and Lure Middle-Aged and Elderly Users, with More Than 1,300 Accounts Penalized

2026-04-23
每日经济新闻
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems used for face swapping, voice cloning, and generating misleading or infringing content. The harms include violations of intellectual property rights, personal rights (image and voice), and misleading vulnerable users, which are direct harms as defined under AI Incidents. The platform's removal of large volumes of such content and sanctioning of accounts confirms that harm has occurred and is ongoing. Hence, this is an AI Incident rather than a hazard or complementary information.

Closing In on AI Infringement! Douyin Announces Removal of More Than 538,000 Videos as Multiple Platforms Roll Out Governance Measures

2026-04-23
华龙网
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems used to generate or manipulate content (AI face-swapping, voice cloning, AI-generated videos). The use of these AI systems has directly led to violations of intellectual property rights and personal rights (e.g., unauthorized use of celebrity images and voices), which are harms under the AI Incident definition. The platforms' responses to remove content and penalize accounts confirm that harm has occurred. Hence, this is an AI Incident rather than a hazard or complementary information.

Douyin Cleans Up Improper AI Content, Prioritizing AI Face-Swapping and Voice Theft

2026-04-23
东方财富网
Why's our monitor labelling this an incident or hazard?
The article focuses on the platform's actions to address and mitigate harms caused by AI-generated content misuse, such as deepfakes and voice theft, which are forms of rights violations and potentially harmful content. Since the main narrative is about the platform's governance measures and challenges in detection, rather than describing a new incident or hazard itself, this fits the definition of Complementary Information. The harms are implied as ongoing background issues, but the article's primary focus is on the response and management efforts.

Douyin: Tackling Typical Violations Such as Using AI to Misappropriate Celebrity Likenesses

2026-04-23
凤凰网(凤凰新媒体)
Why's our monitor labelling this an incident or hazard?
The misuse of AI for deepfakes, unauthorized use of celebrity images and voices, and generation of misleading content constitutes violations of intellectual property rights and harms to individuals and communities. The platform's removal of over 538,000 infringing videos and penalties against thousands of accounts confirms that these harms have materialized. Therefore, this event qualifies as an AI Incident because the development and use of AI systems have directly led to significant harms as defined in the framework.

Using AI to 'Resurrect' Deceased Loved Ones: Which Laws Might It Violate?

2026-04-20
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems used to generate images and voices of deceased individuals, which constitutes AI system involvement. The harms described include violations of personality rights, data protection laws, and facilitation of criminal fraud, all of which are realized or ongoing harms linked to the AI system's use. The discussion of legal cases and penalties confirms that these harms have materialized. Hence, this is not merely a potential risk or complementary information but an AI Incident due to direct harm caused by AI system use.

Which Laws Are Violated by Using AI to Make Videos of Someone Else's Deceased Relatives Without Consent?

2026-04-21
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems generating videos of deceased individuals without consent, which infringes on personality rights and personal information protection laws, causing direct harm to individuals' legal rights. It also discusses civil and criminal consequences arising from such unauthorized AI use. Since the AI system's use has directly led to violations of legal rights and potential harms, this qualifies as an AI Incident under the definitions provided.

Don't Let AI Face-Swapping Technology Run Rampant with Infringement

2026-04-22
新浪财经
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (AI face-swapping/deepfake technology) that directly led to harm: infringement of portrait rights and reputation of a known actor, as well as consumer fraud and market disruption. The court ruling confirms the harm has materialized and legal responsibility is assigned. Therefore, this qualifies as an AI Incident because the AI system's use directly caused violations of rights and harm to individuals and communities. The article also discusses broader societal and regulatory responses, but the primary focus is on the realized harm from the AI system's misuse.

Douyin: over 500,000 AI infringement videos taken down in 2026

2026-04-23
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The event involves AI systems generating infringing content (videos with AI-generated impersonations and voice theft), which constitutes a violation of intellectual property and personal rights, a form of harm under the AI Incident definition (c). The platform's removal of such content and penalization of accounts indicates that harm has occurred and is being addressed. Therefore, this qualifies as an AI Incident due to realized violations of rights caused by AI-generated content.

Douyin cracks down on improper AI content, prioritizing action against AI face-swapping and voice theft

2026-04-23
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The article primarily reports on Douyin's governance actions and policy measures addressing previously identified AI misuse and infringement issues. It provides data on past takedowns and penalties, describes ongoing challenges, and outlines future improvements and community involvement. There is no description of a new AI incident causing harm or a new AI hazard posing plausible future harm. Instead, it is a societal and governance response to existing AI-related harms and challenges, enhancing understanding of the ecosystem and platform responsibility. Therefore, it fits the definition of Complementary Information rather than an AI Incident or AI Hazard.

Douyin takes down 30,000 "AI domineering CEO" items and moves to curb AI face-swapping, voice theft, and distorted remakes of classics

2026-04-24
Udnemoney聯合理財網
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems generating misleading content (AI-generated fake personas, face-swapping, voice synthesis) that have directly caused harm, such as elderly users being defrauded and intellectual property infringements. The harms are realized and significant, including fraud and rights violations. The platform's removal of content and account penalties are responses to these incidents. Hence, this is an AI Incident, as the AI system's use has directly led to harm to people (fraud victims) and violations of rights (intellectual property).

2026-04-23
guancha.cn
Why's our monitor labelling this an incident or hazard?
The event involves the use and misuse of AI systems (deepfake generation, voice cloning) that have directly led to violations of rights (intellectual property, personal rights) and harm to communities (misleading consumers, exposure of minors to inappropriate content). The large-scale removal of infringing content and penalties against accounts confirm that harm has materialized. The platform's acknowledgment of difficulties in detection and ongoing efforts to improve governance further support the classification as an AI Incident rather than a mere hazard or complementary information. Therefore, this event qualifies as an AI Incident due to realized harms caused by AI-generated infringing content and the platform's response to mitigate these harms.

530,000 videos all deleted: Yi Yang Qianxi the biggest victim as AI infringement schemes come to light

2026-04-23
自由時報電子報
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems used for face-swapping and voice synthesis to create unauthorized content, which constitutes a violation of intellectual property and personal rights. The harm is realized as the actor's likeness and voice are exploited without permission, causing reputational and economic harm. The platform's removal of infringing content and account penalties confirm the harm has occurred. Therefore, this qualifies as an AI Incident due to direct harm caused by AI-generated content infringing on rights.

China's Douyin cleans up violations, taking down 538,000 AI infringement short videos

2026-04-23
Central News Agency
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems generating infringing and inappropriate content on a large scale, leading to the removal of such content and penalties for violating accounts. The harms include violations of intellectual property rights (unauthorized use of celebrity images and voices) and dissemination of harmful content (e.g., violent or pornographic material). These harms have materialized as the platform has taken action to remove content and penalize accounts. Therefore, this qualifies as an AI Incident due to direct involvement of AI-generated content causing rights violations and harmful dissemination.

Douyin takes down more than 30,000 AI "domineering CEO" items

2026-04-24
早报
Why's our monitor labelling this an incident or hazard?
The event involves AI systems generating misleading and infringing content that has been actively removed due to its harmful impact, such as misleading older adults and violating rights. The misuse of AI-generated images and voices to create deceptive or infringing content constitutes harm to communities and violations of rights. Since the harm has occurred and the AI system's use directly led to these harms, this qualifies as an AI Incident. The platform's actions to remove content and sanction accounts are responses to this incident, but the main event is the realized harm caused by AI misuse.

China's Douyin cleans up violations, taking down 538,000 AI infringement short videos

2026-04-23
Udnemoney聯合理財網
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems generating or manipulating content that infringes on intellectual property and personal rights, such as unauthorized use of celebrity images and voices. The platform's removal of over half a million infringing videos and penalization of thousands of accounts indicates that harm has occurred. The harm falls under violations of intellectual property rights and personal rights, which are part of the AI Incident definition. The challenges in detection and governance further confirm the AI system's role in causing these harms. Hence, this is an AI Incident rather than a hazard or complementary information.

Douyin has taken down more than 538,000 AI infringement videos in total

2026-04-23
东方财富网
Why's our monitor labelling this an incident or hazard?
The event involves AI systems generating infringing content (videos with AI-generated impersonations, AI-generated images and voices) that violate intellectual property and personal rights, which constitutes harm under the definition of AI Incident (violation of human rights or breach of intellectual property rights). The platform's removal of over 538,000 such videos and penalization of accounts confirms that harm has materialized. Therefore, this is an AI Incident. The platform's enforcement is a response but does not change the classification since the harm has already occurred.

Douyin: over 538,000 AI infringement videos taken down in total, with more than 4,000 violating accounts penalized

2026-04-23
凤凰网(凤凰新媒体)
Why's our monitor labelling this an incident or hazard?
The event involves the use and misuse of AI systems to generate infringing and harmful content, which has directly led to violations of intellectual property rights and personal rights, as well as harm to the community through misleading and harmful content. The platform's actions are responses to these realized harms. Since the harms have already occurred and the AI system's role is pivotal in generating the infringing content, this qualifies as an AI Incident. The article focuses on the ongoing harm and the platform's mitigation efforts rather than just potential future harm or general AI news, so it is not a hazard or complementary information.

Douyin cracks down on improper AI content, prioritizing action against AI face-swapping and voice theft

2026-04-23
wap.stockstar.com
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems for generating deepfake content (face swapping, voice cloning) that has caused realized harm by infringing on individuals' personality and intellectual property rights and disrupting social order. The platform's removal of over 538,000 infringing videos and penalization of over 4,000 accounts confirms that harm has materialized. The AI system's use is central to the incident, as the harms stem from AI-generated content misuse. Hence, this is an AI Incident rather than a hazard or complementary information, as the harms are ongoing and substantial.

Governing AI short-drama infringement cannot rely on platforms alone

2026-04-23
大洋网
Why's our monitor labelling this an incident or hazard?
The article clearly involves AI systems in the creation of short dramas through AI-driven content generation techniques. It addresses realized harms related to intellectual property rights violations due to unauthorized use and modification of copyrighted materials, including actors' likenesses and scripts. The discussion centers on ongoing infringement issues and the need for systemic governance responses, but it does not report a specific new incident of harm or a direct event causing harm. Instead, it provides an analysis and calls for regulatory and industry action, which aligns with providing complementary information about AI-related harms and governance challenges rather than reporting a discrete AI Incident or AI Hazard.

Douyin cracks down on improper AI content, prioritizing action against AI face-swapping and voice theft

2026-04-23
新浪新闻中心
Why's our monitor labelling this an incident or hazard?
This article primarily reports on the platform's governance actions and responses to previously identified AI-related harms, including infringement and misuse of AI-generated content. It does not describe a new AI incident or hazard but rather updates on mitigation efforts and ongoing challenges in managing AI misuse. Therefore, it fits the definition of Complementary Information, as it provides supporting data and context about AI incident management and governance without reporting a new incident or hazard itself.

More than 30,000 "domineering CEO" items taken down, and a batch of AI infringement videos penalized

2026-04-23
新浪财经
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems used to generate content that infringes on intellectual property and personal rights, including unauthorized use of images and voices, misleading and offensive content, and fraudulent promotional materials. These actions have led to realized harm such as violations of intellectual property rights and personal rights, as well as harm to communities through misleading and deceptive content. The platform's removal and penalties are responses to these harms. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to violations of rights and harm to communities.

With AI applications spreading ever more widely, how should the film and television industry build effective legal and ethical constraints?

2026-04-23
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The content is a detailed discussion and proposal for establishing effective legal and ethical constraints on AI use in the film industry. It outlines potential risks and systemic issues but does not report a concrete AI Incident or AI Hazard event. There is no description of realized harm or a specific plausible future harm event caused by AI systems. The focus is on governance, policy, and ethical frameworks, which aligns with Complementary Information as it enhances understanding and informs future risk management without reporting a new incident or hazard.

In Dilraba Dilmurat's AI face-swap infringement lawsuit, what were the core grounds and standards of the court's ruling?

2026-04-23
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI face-swapping technology (an AI system) that generated images infringing on an individual's portrait rights, causing harm to the individual's rights and reputation. The court's ruling addresses the development and use of AI systems leading to violations of fundamental rights (portrait rights), which fits the definition of an AI Incident. The harm is realized (legal infringement and reputational harm), and the court's decision and regulatory responses are part of the incident's context. Therefore, this is an AI Incident.

What warnings does the ruling in the Dilraba Dilmurat AI face-swap infringement case hold for the short-video and entertainment industries?

2026-04-23
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The event involves AI systems used to generate deepfake videos that infringe on the portrait rights of a public figure, causing direct harm to the individual's rights and reputation, as well as economic harm to the entertainment industry. The court ruling and associated harms meet the criteria for an AI Incident because the AI system's use directly led to violations of fundamental rights and other significant harms. The article also discusses legal and ethical responses, but the primary focus is on the realized harm caused by AI misuse, not just complementary information or potential hazards.

In the Dilraba Dilmurat case, why was AI face-swapping found to infringe and held liable for damages?

2026-04-23
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI face-swapping technology (an AI system) to generate images that closely resemble a real person without authorization, leading to legal recognition of infringement and compensation for damages. The AI system's use directly caused harm to the person's portrait rights and economic interests, fulfilling the criteria for an AI Incident. The article details realized harm (infringement and economic loss), legal consequences, and societal impact, which aligns with the definition of an AI Incident involving violations of human rights and intellectual property rights.

Platform cracks down on AI face-swapping, likeness misappropriation, and other violations, with 538,000 infringing videos already taken down

2026-04-23
ai.zol.com.cn
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems for face swapping and voice synthesis, which have been used to create infringing and misleading content. The platform's removal of over 538,000 infringing videos and sanctioning of accounts indicates that harm has occurred, including violations of intellectual property rights and misleading or harmful content affecting communities. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to violations and harm that the platform is actively mitigating.

More than 30,000 "domineering CEO" items taken down, and a batch of AI infringement videos penalized

2026-04-23
news.bjd.com.cn
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems used to generate videos with face-swapping, voice cloning, and other manipulations that infringe on rights and mislead users. The harms include violations of intellectual property rights and personal rights, misleading and deceptive content, and potential harm to communities through misinformation and scams. Since these harms have already occurred and the platform is responding by removing content and penalizing accounts, this qualifies as an AI Incident. The event focuses on realized harms caused by AI-generated content and the platform's enforcement actions, not just potential future risks or general information.

Dilraba Dilmurat hit by short-drama AI face-swapping: rights in one's face must not be abused

2026-04-23
t.cj.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (AI face-swapping technology) in a way that directly caused harm by infringing on the plaintiff's portrait rights and personal dignity, which are protected legal rights. The unauthorized use of AI to create and distribute manipulated video content for commercial gain constitutes a clear AI Incident under the framework, as it directly led to violations of human rights and legal obligations. The court's decision and the described harms confirm that this is not merely a potential risk but a realized incident of AI harm.

Douyin cracks down on AI infringement content, with 538,000 violating videos already taken down

2026-04-23
ai.zol.com.cn
Why's our monitor labelling this an incident or hazard?
The article details a platform's enforcement against AI-generated content that infringes on rights, indicating prior harm or violations have occurred. However, the focus is on the platform's cleanup and regulatory actions, not on a new AI incident or hazard. Therefore, it fits the category of Complementary Information as it provides an update on responses to AI-related harms rather than describing a new incident or hazard itself.

Douyin sanctions more than a thousand accounts using "AI domineering CEO" personas to mislead middle-aged and elderly users

2026-04-24
香港經濟日報 hket.com
Why's our monitor labelling this an incident or hazard?
The event involves AI systems used to create harmful content that misleads and infringes on individuals' rights, which constitutes violations of human rights and harms to communities. The platform's enforcement actions indicate that harm has occurred and is ongoing. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to violations and harm, and the event describes realized harm rather than potential harm or mere updates.

AI spoof video of a "female driver clashing with a security guard" goes viral; Yong'an (Fujian) cyberspace office: reported to the platform, spoofing absolutely not allowed

2026-04-24
扬子网(扬子晚报)
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems used to generate manipulated videos that distort reality and cause harm to individuals' rights and reputations. The widespread dissemination of these AI-generated fake videos has already caused harm, including potential defamation and emotional distress. The involvement of AI in creating and spreading harmful content that violates legal rights fits the definition of an AI Incident, as the AI system's use has directly led to harm to persons and violations of rights. The article also discusses legal and societal responses but the primary focus is on the harmful AI-generated content itself.

Douyin takes down 30,000 "AI domineering CEO" items and moves to curb AI face-swapping, voice theft, and distorted remakes of classics

2026-04-24
udn科技玩家
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems generating misleading deepfake videos and synthetic voices used to deceive elderly users, leading to financial scams and emotional harm. The platform's actions to remove millions of such videos and penalize accounts confirm that harm has materialized. The AI system's use in generating these harmful contents directly caused violations of rights and harm to communities (elderly victims). Hence, this is an AI Incident rather than a hazard or complementary information.

China cleaned up more than 10,000 "AI magic-modified" (AI魔改) videos in April

2026-04-30
早报
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems to modify videos ('AI魔改'), which are then found to be in violation of content regulations. The removal of these videos is a response to the harm caused by the dissemination of unauthorized or misleading AI-generated content, which can be considered harm to communities and cultural heritage. Since the AI system's use has directly led to the creation and spread of these problematic videos, and the regulatory action is a response to this harm, this qualifies as an AI Incident. The event is not merely about potential harm or general AI news but about realized harm and regulatory intervention.

Over 11,000 violating videos cleaned up: April results announced for "AI magic-modified" (AI魔改) video governance

2026-04-30
China News
Why's our monitor labelling this an incident or hazard?
The event involves the use and misuse of AI systems to alter video content ('AI魔改'), which has led to the spread of unauthorized or potentially harmful altered videos. The regulatory removal of these videos addresses the harm caused by such AI-generated content to communities and cultural heritage, which fits the definition of harm to communities or violation of rights. Since the harm has already occurred and is being addressed, this qualifies as an AI Incident rather than a hazard or complementary information.

Over 11,000 violating videos cleaned up: April results announced for "AI magic-modified" (AI魔改) video governance

2026-04-30
每日经济新闻
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems to modify video content ('AI魔改'), which is explicitly mentioned. The regulatory action is a response to the misuse of AI in creating altered videos that violate content rules, implying harm to communities or cultural integrity. However, the article reports on the governance and enforcement measures taken to address these issues rather than describing a new incident of harm or a potential hazard. Therefore, this is best classified as Complementary Information, as it provides an update on societal and governance responses to AI-related misuse rather than reporting a new AI Incident or AI Hazard.

WeChat: taking down videos that "AI magic-modify" classic film and TV works such as Romance of the Three Kingdoms

2026-04-30
news.bjd.com.cn
Why's our monitor labelling this an incident or hazard?
The event involves AI systems used to create 'AI magic-modified' videos that distort classic cultural works and historical figures, which can mislead viewers and harm societal values and community cohesion. The harm is realized through the dissemination of misleading and harmful content, fulfilling the criteria for harm to communities and violation of cultural rights. Therefore, this qualifies as an AI Incident. The platform's actions to remove such content are responses to the incident, not the primary event itself.