Chinese Voice Actors Protest AI Voice Cloning, Content Creators Remove Infringing Videos


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Multiple Chinese voice actors, including those from the film 'Nezha,' publicly condemned unauthorized AI voice cloning and dubbing, citing violations of personality and intellectual property rights. Following their statements, numerous content creators removed AI-generated videos from platforms, highlighting legal and ethical concerns over AI's use in voice synthesis.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves AI systems explicitly used for voice cloning and synthesis without authorization, which has directly led to violations of the voice actors' rights, including unauthorized use of their voices and infringement of their personality and intellectual property rights. This constitutes a clear AI Incident under the framework, as the AI system's use has directly caused harm (violation of rights) to individuals. The article also references ongoing legal cases and calls for enforcement, confirming the realized harm rather than just potential risk.[AI generated]
AI principles
Respect of human rights; Privacy & data governance

Industries
Media, social platforms, and marketing

Affected stakeholders
Workers

Harm types
Economic/Property; Human or fundamental rights

Severity
AI incident

Business function
Other

AI system task
Content generation


Articles about this incident or hazard


Voice "Stolen": "Nezha" Is Furious

2026-03-18
China News
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly used for voice cloning and synthesis without authorization, which has directly led to violations of the voice actors' rights, including unauthorized use of their voices and infringement of their personality and intellectual property rights. This constitutes a clear AI Incident under the framework, as the AI system's use has directly caused harm (violation of rights) to individuals. The article also references ongoing legal cases and calls for enforcement, confirming the realized harm rather than just potential risk.

Voice Actors for "Nezha" and "Taiyi Zhenren" Speak Out Against AI Dubbing; Many Fan-Edit Bloggers Take Down Videos; Lawyer: Unauthorized Public Use Constitutes Infringement

2026-03-18
华龙网
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems used for voice cloning and AI-generated dubbing, which are being used without authorization, constituting infringement of rights protected by law. The harm is realized as the voice actors' rights are violated, and unauthorized AI-generated content is disseminated online. The removal of infringing content and legal commentary further confirm the incident's nature. Therefore, this qualifies as an AI Incident due to direct harm (violation of intellectual property and personality rights) caused by the use of AI systems.

Voice Actors for "Nezha" and "Taiyi Zhenren" Speak Out Against AI Dubbing; Many Fan-Edit Bloggers Take Down Videos

2026-03-18
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems used for voice cloning and synthesis without authorization, leading to violations of voice actors' personality rights and intellectual property rights. The harm is realized as unauthorized AI-generated content has been distributed online, infringing on creators' rights and disrupting the creative industry. The actors' statements and subsequent removal of infringing content confirm the direct link between AI use and harm. This fits the definition of an AI Incident because the AI system's use has directly led to a breach of legal rights and harm to creators.

Voice Actors for "Nezha" and "Taiyi Zhenren" Speak Out Against AI Dubbing; Many Fan-Edit Bloggers Take Down Videos

2026-03-18
qlwb.com.cn
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems used for voice cloning and AI dubbing without consent, which constitutes unauthorized use of personal voice data. This use infringes on the voice actors' personality rights and intellectual property rights, which are recognized legal harms. The distribution of such AI-generated content has already occurred, causing harm to the creators and the industry. The actors' calls for removal and the subsequent takedown of videos confirm the harm is materialized. Hence, this is an AI Incident as the AI system's use directly leads to violations of rights and harms to individuals and communities.

Voice Actors Collectively Call for a Boycott of AI Infringement

2026-03-19
t.cj.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The event involves AI systems that generate synthetic voices based on real voice actors without authorization, leading to violations of intellectual property and personal rights. This unauthorized use constitutes harm to the voice actors' rights and labor protections, fulfilling the criteria for an AI Incident. The infringement is occurring, not just a potential risk, and the AI system's use directly leads to harm (violation of rights).

Voice Actors for "Nezha" and "Taiyi Zhenren" Speak Out Against AI Dubbing; Many Fan-Edit Bloggers Take Down Videos; Lawyer: Unauthorized Public Use Constitutes Infringement

2026-03-19
m.163.com
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems used for voice cloning and synthesis without authorization, leading to infringement of voice actors' rights, which is a violation of intellectual property and personality rights under law. The unauthorized use and distribution of AI-generated voice content have caused harm to the creators' legal rights and disrupted the creative industry. The involvement of AI in generating the infringing content is clear, and the harm is realized and ongoing, as evidenced by public statements, legal analysis, and content removal actions. Thus, it meets the criteria for an AI Incident due to direct harm caused by AI system use.

Boycotting AI Dubbing... "Nezha" Voice Actors Condemn Infringement; Fan-Edit Bloggers Take Down Videos and Apologize

2026-03-19
UDN
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems used for voice cloning and synthesis without authorization, which directly infringes on the voice actors' personality and intellectual property rights, a form of harm under the framework. The unauthorized AI use has led to actual harm (rights violations and disruption of the industry), not just potential harm. The responses by voice actors and content creators further confirm the recognition of harm caused by AI misuse. Hence, this event meets the criteria for an AI Incident rather than a hazard or complementary information.

Dozens of Voice Actors Speak Out Against "AI Voice Theft": Amid the Tech Frenzy, Why Is Legal Redress So Difficult?

2026-03-19
华龙网
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI voice synthesis systems to clone and reproduce voice actors' voices without consent, which directly infringes on their personality rights and intellectual property rights, causing economic and reputational harm. The article explicitly states that unauthorized AI-generated voice usage has occurred on multiple platforms for commercial and public dissemination purposes. This meets the criteria for an AI Incident because the AI system's use has directly led to violations of rights and economic harm. The discussion of legal difficulties and calls for regulation are complementary information but do not negate the fact that harm has already occurred.

AI + Music: Innovation in Resonance with Melody

2026-03-20
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems used for generating music and AI-based voice synthesis (AI翻唱). It describes how these AI systems' use has led to legal infringements including violations of personality rights (voice rights), intellectual property rights (copyright infringement), and unfair competition (economic harm and market disruption). These constitute realized harms under the AI Incident definition, specifically violations of human rights and intellectual property rights, and harm to economic interests of creators. The article does not merely discuss potential risks but documents ongoing issues and legal challenges arising from actual AI use, thus qualifying as an AI Incident rather than a hazard or complementary information.

Voices "Stolen" by AI: Voice Actors Collectively Defend Their Rights, Hoping to Build a Lawful Ecosystem of Coexistence

2026-03-22
China News
Why's our monitor labelling this an incident or hazard?
The event involves AI systems used for voice cloning without authorization, which directly infringes on voice actors' personality rights and copyright, constituting a violation of human and intellectual property rights. The harm is realized as actors' voices are used commercially without consent, impacting their legal rights and the industry's artistic integrity. The article discusses the challenges in legal recognition and enforcement, confirming the presence of an AI Incident rather than a mere potential hazard or complementary information.

2026-03-23
guancha.cn
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems used for voice synthesis based on unauthorized voice data collection. The use of AI to generate voices without consent has directly led to violations of personality rights and copyright, which are breaches of fundamental and intellectual property rights. The harm is realized and ongoing, as evidenced by the actors' public complaints and the presence of AI-generated content using their voices without permission. The article also discusses the difficulties in legal redress and the impact on the industry, confirming the direct link between AI use and harm. Hence, this event meets the criteria for an AI Incident.

Voice Stolen by AI for Profit: "Zhen Huan" Voice Actor Furious, Threatens Legal Action

2026-03-22
東方網 馬來西亞東方日報
Why's our monitor labelling this an incident or hazard?
The article explicitly states that AI technology was used to synthesize the voice of the voice actor without her permission, and that this unauthorized use has caused harm to her legal rights and reputation. The AI system's use directly led to this harm through voice cloning and unauthorized commercial exploitation. Therefore, this qualifies as an AI Incident due to realized harm caused by the AI system's use.

Qingming Holiday Train Tickets Go on Sale; Shenzhen Railway Authorities Expect to Carry 1.814 Million Passengers

2026-03-22
36氪
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems being used to train on voice data and generate synthetic voices without consent, which constitutes a violation of intellectual property and personal rights. This is a direct harm caused by the use of AI technology, fitting the definition of an AI Incident due to breach of rights and commercial misuse of AI-generated content.

Multiple Well-Known Voice Actors Collectively Defend Their Rights, Opposing Unauthorized AI Collection of Voice Material

2026-03-23
中华网科技公司
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems used for voice synthesis trained on unauthorized voice data, leading to AI-generated content that infringes on the rights of voice actors. The harm is realized as actors face loss of control over their voice identity, potential legal risks, and devaluation of their artistic work. This is a direct violation of intellectual property and labor rights, fitting the definition of an AI Incident under violations of rights and harm to communities or individuals.

Multiple Voice Actors Collectively Defend Their Rights Against AI Voice-Imitation Infringement

2026-03-22
中华网科技公司
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems used for voice synthesis (AI voice cloning) without consent, which has directly led to harm in the form of intellectual property and personality rights violations against voice actors. The unauthorized AI-generated content is commercially exploited, causing economic and reputational harm to the actors. The article discusses realized harm and ongoing infringement, not just potential risks. Hence, this qualifies as an AI Incident due to violations of rights and harm caused by AI misuse.

AI Digital Performers "Resemble" Celebrities: Is That Infringement?

2026-03-22
大河网
Why's our monitor labelling this an incident or hazard?
The event involves AI systems used to create digital artists that closely mimic real human appearances and voices, which has led to actual legal disputes and court rulings recognizing infringement of personal rights (portrait and voice rights). This constitutes a violation of human rights and intellectual property rights due to unauthorized use of likeness and voice, fulfilling the criteria for an AI Incident. The article details realized harm through legal findings and compensation, not just potential risks or general discussion, thus it is an AI Incident rather than a hazard or complementary information.

Multiple Voice Actors Collectively Defend Their Rights: Amid the Tech Frenzy, Why Is Legal Redress So Difficult?

2026-03-22
金羊网
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems used for voice synthesis and AI training on voice data without consent, leading to direct harm to voice actors' rights, reputation, and economic interests. The harm is realized and ongoing, including infringement of personality rights and copyright, which are violations of fundamental rights and intellectual property rights. The article discusses the legal challenges but confirms that the AI system's misuse has already caused harm, meeting the criteria for an AI Incident rather than a hazard or complementary information.

AI "Voice Theft" Has Become a Gray Industry: Protecting Voice Personality Rights Cannot Wait

2026-03-23
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The event involves AI systems used for voice data collection, training, and synthesis without consent, which directly infringes on voice actors' personality rights and copyrights. This constitutes a violation of fundamental rights and intellectual property laws, fulfilling the criteria for an AI Incident. The harm is realized and ongoing, not merely potential, as voice actors have publicly protested these infringements. Therefore, this is an AI Incident due to direct harm caused by AI misuse.

Return Voices to Their Creators; Teach AI to Respect People

2026-03-23
新浪财经
Why's our monitor labelling this an incident or hazard?
The event involves AI systems that generate synthetic voices mimicking real voice actors without authorization, which constitutes a violation of intellectual property rights and personal rights. The harm is realized as voice actors suffer from unauthorized use and monetization of their voices, which is a direct violation of their rights and harms their professional interests. Therefore, this qualifies as an AI Incident due to the direct harm caused by the AI system's misuse in voice synthesis infringing on creators' rights.

With the Rise and Spread of AI, Actors and Voice Actors Will Largely Be Replaced, Making Art-School Admissions Even Harder! Multiple Voice Actors Collectively Defend Their Rights: Can They Succeed?

2026-03-23
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems used for voice cloning and virtual acting, which have directly caused harm to real actors and voice actors by replacing their work and infringing on their rights. The harms include economic injury, violation of labor rights, and disruption of the arts community and education. The collective legal actions by voice actors are responses to these realized harms. Therefore, this event meets the criteria for an AI Incident due to direct and ongoing harm caused by AI system use in the entertainment industry.

Voice Actors' Voices Stolen by AI: How Can They Defend Their Rights Under the Law?

2026-03-23
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems to clone and misuse voice actors' voices, which constitutes a violation of intellectual property and personality rights. The article describes actual harm occurring due to unauthorized AI use of voices in advertisements, films, and fake content, infringing on legal rights. Therefore, this is an AI Incident involving violations of rights caused by AI misuse.

What Concrete Impacts Has AI Dubbing Had on the Livelihoods of Human Voice Actors?

2026-03-22
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses AI systems used for voice synthesis that have directly led to economic harm (loss of jobs and income), violations of voice rights (intellectual property and personal rights infringements), and cultural/artistic harm (loss of emotional authenticity and audience rejection). The AI system's use and misuse are central to these harms, fulfilling the criteria for an AI Incident under the OECD framework. The harms are realized and ongoing, not merely potential, and the AI system's role is pivotal in causing these impacts.

How Have Courts Ruled in Existing AI Voice-Infringement Cases?

2026-03-22
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems used to clone voices without authorization, leading to legal rulings that recognize these as infringements of voice and personality rights, which are violations of fundamental rights under applicable law. The harms described include unauthorized commercial use of AI-generated voices causing economic loss and damage to personal rights. The courts' decisions and legal principles directly address harms caused by AI misuse. Hence, the event meets the criteria for an AI Incident due to realized harm from AI system use and misuse.

How Convincing Is AI Voice Imitation? Even the Actors Themselves Can Hardly Tell! Multiple Well-Known Voice Actors Publicly Defend Their Rights

2026-03-22
新浪财经
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems used for voice cloning and synthesis without consent, which has directly caused harm to voice actors by infringing on their personality rights and copyrights. This fits the definition of an AI Incident because the AI system's use has directly led to violations of human rights and intellectual property rights (harm category c). The article also discusses the real and ongoing impact on the actors and the industry, not just potential future harm. Hence, it is an AI Incident rather than a hazard or complementary information.

Brief Comment | All of It Faked by Mouth

2026-03-23
news.bjd.com.cn
Why's our monitor labelling this an incident or hazard?
The event involves AI systems used for voice synthesis without authorization, which directly infringes on human rights (voice personality and performance rights) and causes harm to the affected individuals (voice actors). The article describes realized harm through unauthorized use and the resulting legal and ethical issues. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use has directly led to violations of rights and harm to people.

Multiple Well-Known Voice Actors Collectively Defend Their Rights!

2026-03-22
news.bjd.com.cn
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems used for voice synthesis and training on voice data without consent, which has directly led to violations of voice actors' personality rights and copyright, constituting harm under the framework. The unauthorized use of AI to clone voices and generate content without permission is a clear example of AI misuse causing harm. The article details realized harm and legal challenges, not just potential risks, so it is an AI Incident rather than a hazard or complementary information. The focus is on the harm caused by AI voice cloning and the actors' collective rights defense, fitting the definition of an AI Incident.

Sharp Commentary | AI Voice Theft Cannot Be Fought by Actors' Joint Statements Alone

2026-03-23
news.bjd.com.cn
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems for voice synthesis and training on actors' voice data without consent, which constitutes a violation of personal rights and intellectual property. This unauthorized use has already led to harm such as commercial exploitation and reputational damage to actors, fulfilling the criteria of an AI Incident under violations of human rights and intellectual property rights. The article focuses on the harm caused by AI misuse rather than potential or future harm, so it is classified as an AI Incident.

Voices "Stolen" by AI: Voice Actors Collectively Defend Their Rights, Hoping to Build a Lawful Ecosystem of Coexistence

2026-03-22
news.bjd.com.cn
Why's our monitor labelling this an incident or hazard?
The event involves AI systems used for voice synthesis (AI voice cloning) without consent, which directly leads to violations of voice actors' personality rights and intellectual property rights, constituting harm under category (c) of AI Incidents. The article details realized harm (unauthorized use and commercial exploitation of AI-generated voices), legal challenges, and calls for regulatory and technical responses. Therefore, it meets the criteria for an AI Incident rather than a hazard or complementary information.

Workers' Daily: AI "Voice Theft" Has Become a Gray Industry; Protecting Voice Personality Rights Cannot Wait

2026-03-24
news.bjd.com.cn
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems for voice synthesis that directly leads to violations of human rights (personality rights) and intellectual property rights, causing harm to individuals and the industry. This constitutes an AI Incident because the AI system's use has directly led to realized harm through unauthorized voice replication and commercial exploitation, infringing legal rights and damaging livelihoods.

Software for 1 Yuan, Service for 5 Yuan: How Are Voice Actors' Voices "Stolen" in Bulk?

2026-03-25
bjnews.com.cn
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems used for voice cloning and synthesis, which have been employed without authorization to replicate voice actors' voices. This unauthorized use has directly led to violations of the actors' personality and intellectual property rights, constituting harm under the AI Incident definition (violation of human rights and intellectual property rights). The article also discusses legal rulings confirming such violations and the difficulties in enforcing rights, further supporting the classification as an AI Incident rather than a hazard or complementary information. The harm is realized and ongoing, not merely potential, and the AI system's role is pivotal in enabling the infringement.

Voices "Stolen" by AI: Well-Known Voice Actors Defend Their Rights One After Another! AI "Face-Swap" Case Ruled an Infringement; How Can the "Voice Theft" Impasse Be Resolved? Lawyers Point to the Obstacles: Hard to Prove, High Cost, Little Payoff

2026-03-24
每日经济新闻
Why's our monitor labelling this an incident or hazard?
The article explicitly describes the use of AI systems for face-swapping and voice cloning without consent, which has directly led to legal rulings recognizing infringement of portrait rights and ongoing harms to voice actors. The harms are concrete and realized, including economic loss, reputational damage, and violations of personality rights. The involvement of AI in generating unauthorized synthetic faces and voices is clear and central to the incident. The article also discusses the legal and practical challenges in addressing these harms, reinforcing the direct link between AI use and harm. Hence, this is an AI Incident rather than a hazard or complementary information.

AI "Face-Swap" Case Ruled an Infringement; How Can the "Voice Theft" Impasse Be Resolved? Voice Actors Speak Out Collectively; Lawyers Point to the Obstacles: Hard to Prove, High Cost, Little Payoff

2026-03-24
每日经济新闻
Why's our monitor labelling this an incident or hazard?
The event involves the use and misuse of AI systems (deepfake face-swapping and AI voice synthesis) that have directly led to violations of actors' personality rights, a form of human rights infringement under applicable law. The court ruling confirms that AI-generated face swaps constitute infringement, and voice actors report actual harm from AI-generated voice misuse. These constitute realized harms caused by AI system use, qualifying the event as an AI Incident. The article also includes complementary information about legal and industry responses, but the primary focus is on the incident of infringement and harm caused by AI misuse.

News 1+1 | How Can the "Voices" Stolen by AI Be Taken Back?

2026-03-24
厦门网
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems used to clone voices without authorization, which directly leads to violations of voice actors' rights, a form of intellectual property and personality rights infringement. The harm is realized as voice actors have publicly opposed these unauthorized uses, indicating actual infringement and harm. The article also discusses the challenges in legal protection and platform responsibility, reinforcing the presence of harm caused by AI misuse. Hence, this is an AI Incident as the AI system's use has directly led to violations of rights.

Voices "Stolen" by AI: Well-Known Voice Actors Defend Their Rights One After Another! AI "Face-Swap" Case Ruled an Infringement; How Can the "Voice Theft" Impasse Be Resolved?

2026-03-25
app.myzaker.com
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems used for deepfake face and voice synthesis, which have directly led to violations of personality rights (a form of human rights and intellectual property rights). The legal rulings against AI-generated face usage and the ongoing voice cloning without consent causing reputational and economic harm to actors meet the criteria for AI Incidents. The harms are realized, not just potential, as evidenced by court decisions and active legal claims. Therefore, this event is classified as an AI Incident.

How Can the "Voices" Stolen by AI Be Taken Back?

2026-03-25
千龙网
Why's our monitor labelling this an incident or hazard?
The article does not report a specific AI Incident where harm has already occurred, nor does it describe a particular AI Hazard event with plausible future harm. Instead, it centers on the challenges and responses related to AI voice cloning infringement, including legal, technical, and platform governance aspects. This fits the definition of Complementary Information, as it provides context and updates on societal and governance responses to AI-related rights issues without detailing a concrete incident or hazard.

AI "Face-Swap" Case Ruled an Infringement; How Can the "Voice Theft" Impasse Be Resolved? Voice Actors Speak Out Collectively; Lawyers Point to the Obstacles: Hard to Prove, High Cost, Little Payoff

2026-03-24
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The article explicitly describes the use of AI deepfake technology to create unauthorized face replacements and voice replications, which have led to legal rulings recognizing infringement of personal rights (portrait and voice rights). The AI system's use directly caused harm to individuals' rights and reputations, fulfilling the criteria for an AI Incident. The legal case and court ruling confirm realized harm, not just potential harm. The challenges in voice rights protection further illustrate ongoing incidents of harm. Although the article discusses regulatory and technical responses, these serve as complementary information rather than the main event. Therefore, the event is best classified as an AI Incident due to the direct infringement and harm caused by AI deepfake misuse.

March 24 Full Edition of News 1+1 | How Can the "Voices" Stolen by AI Be Taken Back?

2026-03-24
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems used for voice synthesis and training on voice data without consent, which constitutes a violation of intellectual property rights and creators' legal rights. This is a direct harm caused by the use of AI systems, fitting the definition of an AI Incident. The collective action and public opposition indicate that harm has occurred, not just a potential risk. Therefore, this event qualifies as an AI Incident due to the realized violation of rights through AI misuse.

Multiple Voice Actors Speak Out: How Can "Voices" Stolen by AI Be Taken Back?

2026-03-24
news.bjd.com.cn
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems used for voice cloning without authorization, which is a violation of rights and intellectual property. However, it does not describe a specific incident where harm has already occurred or a concrete case of infringement causing direct harm. Instead, it discusses the challenges platforms face in preventing unauthorized AI voice synthesis and the balance needed between rights protection and AI development. This fits the definition of Complementary Information, as it provides supporting data and context about AI-related rights issues and enforcement challenges without reporting a new AI Incident or AI Hazard.

Director Cui Liang: In the AI Era, the "Human Touch" Is Priceless; Charting a New Course for Film and Television Amid the Wave

2026-03-25
hea.china.com
Why's our monitor labelling this an incident or hazard?
The article does not describe any realized harm or incident caused by AI systems, nor does it report a specific event where AI use or malfunction led to injury, rights violations, or other harms. Instead, it documents a sectoral dialogue and legal considerations about AI's impact and governance in the entertainment industry. This fits the definition of Complementary Information, as it provides context, societal and governance responses, and expert perspectives on AI-related challenges without reporting a new AI Incident or AI Hazard.

Rule of Law Online | Voice Actors' Voices "Stolen" in Bulk: How Do You Prove "This Voice Is Mine"?

2026-03-27
China News
Why's our monitor labelling this an incident or hazard?
The event clearly involves AI systems used to clone and generate voices, which are then used without authorization, constituting misuse of AI technology. The harms include violations of personal rights (voice/personality rights), reputational damage, and potential harm to communities (e.g., minors exposed to inappropriate content). The article describes actual realized harms and ongoing legal responses, making this an AI Incident rather than a hazard or complementary information. The detailed description of unauthorized AI voice cloning and its consequences fits the definition of an AI Incident due to direct harm caused by AI misuse.

Voice Actors' Voices "Stolen" in Bulk: How Do You Prove "This Voice Is Mine"?

2026-03-27
成都全搜索
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly used to clone and generate voices without authorization, which directly infringes on the voice actors' rights and causes reputational and economic harm. The article details real, ongoing harm from AI misuse, including unauthorized commercial use and dissemination of misleading or harmful content. The legal case cited confirms the recognition of such harms as violations of rights due to AI-generated voice misuse. Hence, the event meets the criteria for an AI Incident as the AI system's use has directly led to violations of rights and other significant harms.

Voice Actors' Voices "Stolen" in Bulk: How Do You Prove "This Voice Is Mine"?

2026-03-28
杭州网
Why's our monitor labelling this an incident or hazard?
The event involves AI systems that clone and generate voice content imitating real voice actors, which constitutes the use of AI systems. The unauthorized use of these AI-generated voices directly leads to violations of intellectual property and personal rights, fulfilling the criteria for harm under the AI Incident definition (violations of human rights or breach of intellectual property rights). The article reports actual occurrences of such misuse and harm, not just potential risks, making this an AI Incident rather than a hazard or complementary information.

AI Music Model Suno v5.5 Launches: It Can Write Songs That Imitate Your Voice and Style

2026-03-28
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The article focuses on the launch of a new AI music generation model with personalized features. Although voice cloning can raise potential risks (e.g., misuse for impersonation), the article highlights preventive measures and does not report any harm or incidents. There is no indication of realized harm or credible risk leading to harm at this stage. Hence, this is best classified as Complementary Information, providing context and updates about AI system development and features without constituting an AI Incident or AI Hazard.

Don't Speak First When Answering Calls from Unknown Numbers

2026-03-29
finance.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI voice synthesis systems that extract and replicate voiceprints from brief voice samples, enabling scammers to impersonate individuals convincingly. This use of AI has directly led to harm through fraudulent scams targeting victims' families, which constitutes violations of personal rights and harm to communities. Therefore, this qualifies as an AI Incident because the AI system's use has directly caused harm. The article's focus on the ongoing scam activity and its consequences confirms this classification.

Warning! Don't speak first when answering calls from unknown numbers

2026-03-28
驱动之家
Why's our monitor labelling this an incident or hazard?
The event involves AI voice synthesis technology (an AI system) used by criminals to impersonate victims' relatives and commit fraud, which constitutes a violation of personal rights and causes harm to individuals (financial and psychological harm). The harm is realized as scams are occurring using this technology. Therefore, this qualifies as an AI Incident due to direct harm caused by the use of an AI system.

'Nezha' and 'Zhen Huan' are furious

2026-03-28
凤凰网(凤凰新媒体)
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems used for voice cloning and generation without consent, which directly leads to violations of intellectual property and personality rights, harming the actors economically and personally. The article details actual cases of harm, legal disputes, and the challenges in protecting voice rights against AI misuse. This meets the definition of an AI Incident because the AI system's use has directly led to harm (violation of rights and economic damage).

Don't speak first on calls from strangers: scammers can harvest your voice, and 5 seconds of audio can leak your voiceprint

2026-03-29
中华网科技公司
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI voice synthesis technology to extract and replicate voiceprints from short audio samples, which are then used by criminals to impersonate victims and conduct scams. This directly involves AI systems in the development and malicious use stages, causing harm to individuals through fraud and identity deception. The harm is realized as scams are actively occurring, not just potential. Hence, this qualifies as an AI Incident due to direct harm caused by AI-enabled voice cloning used in fraud.

Don't speak first on calls from strangers to guard against AI voiceprint theft: 5 seconds of audio is enough for a clone

2026-03-30
中华网科技公司
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI voice synthesis technology (an AI system) used maliciously to replicate voiceprints for fraudulent calls. This use directly leads to harm by enabling scams that exploit trust and cause financial and emotional damage. The harm is realized, not hypothetical, as the article describes ongoing scams using this technology. Therefore, this qualifies as an AI Incident due to direct harm caused by the AI system's use in voice cloning for fraud.

Urgent reminder! Don't speak first when answering calls from unknown numbers: 5-10 seconds of audio can clone your voiceprint

2026-03-28
金羊网
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI voice synthesis technology (an AI system) used maliciously to replicate voiceprints and conduct scams, which directly causes harm to individuals through fraud and privacy violations. The harms include violation of personal identity rights and financial loss, fitting the definition of an AI Incident. The article reports ongoing and realized harms, not just potential risks, so it is not merely a hazard or complementary information. Hence, the classification is AI Incident.

[New scam trap] "AI voice cloning" becomes a new fraud tactic: calls placed solely to collect victims' voices

2026-03-29
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI voice cloning technology (an AI system) in the scam's development and use stages. The AI system's outputs (cloned voices) are directly used to deceive victims, causing financial harm and violating their rights. The article describes realized harm (financial loss) caused by the AI-enabled scam, meeting the criteria for an AI Incident. Therefore, this event is classified as an AI Incident due to the direct involvement of AI in causing harm through fraudulent impersonation.

A voice worth only 20 yuan? China's AI voice-mimicry chaos spreads as voice actors collectively speak out against infringement

2026-03-29
ETtoday AI科技
Why's our monitor labelling this an incident or hazard?
The event involves AI voice cloning systems that replicate human voices without authorization, which is a clear AI system use. The unauthorized use and commercial exploitation of these AI-generated voices cause violations of intellectual property and personality rights, which are recognized harms under the AI Incident definition (c). The article describes realized harm to voice actors and ongoing infringement, not just potential harm. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information.

Jay Chou is still a top draw, but Tencent Music has been breached by "Soda"

2026-03-29
m.163.com
Why's our monitor labelling this an incident or hazard?
The article mentions AI technology's role in music creation and distribution, such as the Soda Music (汽水音乐) AI music creation platform and the increasing presence of AI-generated songs. However, it does not report any realized harm, violation of rights, injury, or disruption caused by AI systems. The discussion is about potential industry changes and strategic challenges, not about an incident or hazard involving AI. Therefore, the content fits the category of Complementary Information, as it provides context and updates on AI's evolving role in the music-streaming ecosystem without describing a specific AI Incident or AI Hazard.

CCTV exposes the chaos of AI-faked celebrities hawking products

2026-04-09
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI deepfake technology used to impersonate celebrities, which is an AI system. The use of these AI-generated fake videos and voices has directly led to harm: consumers have been misled into purchasing products under false pretenses, constituting financial harm and violation of rights (unauthorized use of likeness). Therefore, this qualifies as an AI Incident due to realized harm caused by AI misuse.

AI steals a reporter's voice from just 1 second of audio

2026-04-09
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
An AI system is clearly involved, specifically an AI voice cloning system that generates synthetic speech. The event involves the use of AI technology to produce realistic voice replicas without consent, which can plausibly lead to harms such as impersonation, misinformation, fraud, or reputational damage. Although no specific harm has yet occurred, the article emphasizes the heightened risk of misuse and abuse due to the ease of cloning voices. Therefore, this event represents a credible potential for harm stemming from AI use, qualifying it as an AI Hazard rather than an Incident, since no actual harm is reported as having occurred yet.

Your voice can be stolen in just 1 second: AI voice infringement cases keep mounting

2026-04-09
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems for voice cloning, which is explicitly described. The misuse of AI voice cloning technology to steal and replicate voices without consent constitutes a violation of personal rights and privacy, which falls under harm category (c) - violations of human rights or breach of legal protections. While the article does not report a specific realized harm incident, it emphasizes the frequent occurrence of such misuse and the serious risks involved, indicating plausible future harm. Therefore, this situation qualifies as an AI Hazard due to the credible risk of AI-driven voice identity theft and related harms.

Actors push back en masse! Under 1 yuan per 50 characters, cloned voices over 98% similar

2026-04-09
21jingji.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as a voice cloning platform that generates highly similar voice replicas from short audio samples. The use of this AI system has directly led to violations of actors' rights, including unauthorized use of their voices, which is a breach of intellectual property and personality rights under applicable law. The article documents actual harm occurring, such as unauthorized commercial use, difficulty in legal redress, and widespread infringement. This fits the definition of an AI Incident because the AI system's use has directly led to violations of human rights and intellectual property rights (harm category c). The article also discusses ongoing harm and calls for regulatory responses, but the primary classification is AI Incident due to the realized harm.

Investigating the chaos of AI voice cloning: upload 3 seconds of audio and change the words, at under 1 yuan per 50 characters

2026-04-09
南方网
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI voice cloning systems that generate unauthorized voice replicas causing direct harm to voice actors' rights and economic interests, constituting violations of intellectual property and personality rights under applicable law. The AI system's use has directly led to these harms through unauthorized cloning and commercial exploitation. The detailed descriptions of the cloning process, the scale of misuse, and the actors' testimonies confirm realized harm rather than potential risk. Hence, this is an AI Incident rather than a hazard or complementary information.

AI steals a reporter's voice from just 1 second of audio

2026-04-09
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
An AI system is explicitly involved in cloning voices from minimal audio input and generating synthetic speech. The event focuses on the potential misuse risks of this technology, which could plausibly lead to harms like impersonation or fraud. Since no actual harm is reported but the risk is credible and increasing, this qualifies as an AI Hazard rather than an Incident. The article also discusses the need for protective measures, reinforcing the hazard nature of the event.
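The rationales above all apply the same three-way decision rule: an AI system must be involved, realized harm yields an AI Incident, merely plausible harm yields an AI Hazard, and anything else is Complementary Information. A minimal sketch of that rule (with hypothetical field and function names, not the monitor's actual implementation) might look like:

```python
from dataclasses import dataclass

@dataclass
class Event:
    ai_system_involved: bool  # is an AI system part of the event?
    harm_realized: bool       # has harm actually occurred?
    harm_plausible: bool      # is there a credible risk of future harm?

def classify(event: Event) -> str:
    """Apply the incident/hazard/complementary decision rule described in the entries."""
    if not event.ai_system_involved:
        return "Complementary information"
    if event.harm_realized:
        return "AI incident"
    if event.harm_plausible:
        return "AI hazard"
    return "Complementary information"
```

For example, the voiceprint-fraud stories above report realized financial harm and so classify as incidents, while the reporter-voice-cloning demonstrations report only credible risk and so classify as hazards.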

Voice actors collectively speak out against AI: what anxieties lie behind the copyright storm?

2026-04-09
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly used for voice synthesis and dubbing, which have directly led to violations of voice actors' rights and economic harm. The unauthorized training of AI models on voice data and the generation of AI voices without permission constitute a breach of intellectual property and personality rights, fulfilling the criteria for harm under the AI Incident definition. The article also references legal cases and collective actions taken by affected actors, confirming that harm has materialized rather than being a mere potential risk. Hence, this is an AI Incident rather than a hazard or complementary information.

Shocking! AI steals a reporter's voice from just 1 second of audio

2026-04-09
t.cj.sina.com.cn
Why's our monitor labelling this an incident or hazard?
An AI system (voice cloning AI) is explicitly involved, and the event shows how the technology can be used to generate highly realistic fake voice content. However, no actual harm or incident has been reported; the article focuses on the potential for misuse and the risks associated with the technology's accessibility. Therefore, this qualifies as an AI Hazard because it plausibly could lead to harms such as fraud, misinformation, or identity theft, but no direct or indirect harm has yet materialized.

Shocking! AI steals a reporter's voice from just 1 second of audio

2026-04-09
news.bjd.com.cn
Why's our monitor labelling this an incident or hazard?
An AI system is clearly involved, as the voice cloning technology uses AI to generate realistic speech from minimal audio input. Although no actual harm has been reported yet, the article emphasizes the plausible future risks of misuse and abuse of cloned voices, which could lead to harms such as identity fraud, misinformation, or other violations of rights. Therefore, this event represents an AI Hazard because it plausibly could lead to an AI Incident in the future, but no direct harm has yet occurred.

A voice cloned from a 1-second sample! Who is stealing our voices? A reporter investigates

2026-04-09
news.bjd.com.cn
Why's our monitor labelling this an incident or hazard?
The event involves AI systems that clone human voices using deep learning from minimal audio samples. The AI's use has directly led to harms including intellectual property and personal rights violations, economic harm to voice actors, and fraud via AI-generated voice scams. These harms are materialized and ongoing, fulfilling the criteria for an AI Incident. The article also discusses legal and societal responses, but the primary focus is on the realized harms caused by AI voice cloning misuse.

You may have heard it too! His voice was used as AI narration, even by Chinese state media, and the man himself spoke out in despair

2026-04-11
Liberty Times Net
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI voice cloning technology (an AI system) to generate voice content without the consent of the original voice actor, leading to a violation of his rights and causing personal harm (emotional distress). This fits the definition of an AI Incident because the AI system's use has directly led to harm in terms of rights violations and personal impact. The lack of legal protection and the widespread unauthorized use further confirm the incident nature rather than a mere hazard or complementary information.

Voices "stolen" by AI: Chinese voice actors' livelihoods in crisis

2026-04-12
The Epoch Times
Why's our monitor labelling this an incident or hazard?
The article explicitly details how AI voice cloning systems have been used to infringe on voice actors' rights by replicating their voices without authorization, causing direct economic harm and reputational damage. The involvement of AI in the development and use of voice cloning technology is clear, and the harms described include violations of intellectual property rights and economic losses, which fall under the definition of AI Incident. The legal difficulties and ongoing infringement further confirm the realized harm rather than just potential risk.

A voice stolen in just 1 second: Taiyi Zhenren's voice actor repeatedly loses work to AI and faces an unprecedented struggle to survive

2026-04-11
驱动之家
Why's our monitor labelling this an incident or hazard?
The event clearly involves AI systems (AI voice cloning) that have been used to clone voice actors' voices without consent, causing direct harm to their livelihoods and violating their rights. The harm is realized and ongoing, with thousands of infringement cases and canceled contracts. The event also discusses the potential for further criminal misuse of cloned voices. Therefore, this qualifies as an AI Incident due to direct harm caused by the use of AI systems in voice cloning and infringement.

'Nezha 2' voice actor's voice stolen, collaborations canceled: AI infringement triggers an industry crisis

2026-04-11
中华网科技公司
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems used to generate unauthorized voice reproductions of actors, causing direct harm to their economic interests and professional relationships. The misuse of AI voice synthesis technology without consent constitutes a violation of intellectual property rights, which is a recognized harm under the AI Incident definition. The ongoing infringement and difficulty in legal enforcement further confirm the realized harm. Hence, this is an AI Incident rather than a hazard or complementary information.

Taiyi Zhenren's voice actor has his voice stolen and collaborations canceled: AI infringement is hard to fight

2026-04-11
中华网科技公司
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems used to clone and generate the voice of a known voice actor without authorization, which constitutes a violation of intellectual property rights and causes economic harm (loss of contracts). The AI system's misuse directly leads to harm to the actor's livelihood and rights. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to harm (violation of rights and economic loss).

Voice stolen, "Taiyi Zhenren's" collaborations canceled: the actors whose voices and faces AI has taken

2026-04-12
华商网
Why's our monitor labelling this an incident or hazard?
The event involves AI systems used for voice synthesis and facial likeness generation without authorization, which has directly caused harm to individuals' livelihoods, reputations, and rights. This fits the definition of an AI Incident because the AI system's use has directly led to violations of rights and harm to persons and communities. The article details realized harms, including canceled contracts for voice actors and defamation of individuals through AI-generated content, as well as legal and governance responses, but the primary focus is on the harms caused by AI misuse.

'Nezha 2' voice actor's voice stolen and collaborations canceled; AI can "steal" a voice in just 1 second! How can victims defend their rights?

2026-04-11
金羊网
Why's our monitor labelling this an incident or hazard?
The event involves AI systems that clone and generate human voices, which are explicitly described as being used without consent, causing direct harm to voice actors' rights and livelihoods. The article details realized harms including loss of contracts, infringement of personality and intellectual property rights, and societal harm from inappropriate AI-generated content. The involvement of AI in these harms is clear and direct, meeting the criteria for an AI Incident. The article also discusses legal and regulatory challenges, but the primary focus is on the realized harms caused by AI misuse.

Who decides what happens to my voice? Multiple voice actors speak out against AI cloning

2026-04-11
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The event involves AI systems used to clone human voices without consent, which directly causes harm by infringing on voice actors' rights, damaging their economic interests, and violating intellectual property and personality rights. The harm is realized and ongoing, with legal cases confirming infringement. The AI system's development and use are central to the harm, meeting the criteria for an AI Incident under violations of human rights and intellectual property rights.

Dubbing artist whose voice was turned into an AI product and sold wins 250,000 yuan in compensation

2026-04-11
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The case involves the use of an AI text-to-speech system that synthesized a person's voice without consent, constituting unauthorized use and infringement of voice personality rights. The AI system's development and use directly caused harm to the plaintiff by violating their rights and causing economic and emotional damages. The court ruling confirms the harm and legal responsibility, making this a clear AI Incident under the framework, as it involves realized harm due to AI system use.

What can you do when AI "steals" your voice?

2026-04-11
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The article clearly involves an AI system (AI-based voice synthesis) whose use without authorization has directly caused harm to a natural person's rights, specifically their voice rights, which are protected as personality rights under law. The unauthorized AI-generated voice was used commercially, causing reputational and personal rights harm. This fits the definition of an AI Incident because the AI system's use has directly led to a violation of fundamental rights (personality rights) and harm to the individual. The detailed legal case and court ruling confirm the harm has occurred and the AI system's role is pivotal.

What concrete impacts has AI dubbing technology had on ordinary workers in the dubbing industry?

2026-04-11
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The event involves the use and deployment of AI voice synthesis systems that have directly led to economic harm (job and income loss) and violations of intellectual property and personal rights (unauthorized voice cloning and infringement). These harms are realized and ongoing, affecting individuals' employment, income, and legal rights. The AI systems' development and use are central to these harms, fulfilling the criteria for an AI Incident under the OECD framework.

Taiyi Zhenren's voice stolen by AI: voice actor Zhang Jiaming loses his livelihood

2026-04-11
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI technology cloning the voice actor's voice and its widespread unauthorized use in commercial contexts, causing direct economic harm and violation of rights. The harm is realized and ongoing, including canceled contracts and income loss. The AI system's use is central to the harm, fulfilling the criteria for an AI Incident under violations of intellectual property and personality rights. The legal and societal challenges described further confirm the significance of the harm caused by AI misuse.

Facing rampant AI voice infringement, what self-rescue measures have the dubbing industry and its practitioners taken?

2026-04-11
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The article explicitly describes AI voice cloning technology being used without authorization, causing infringement on voice actors' rights, which is a violation of intellectual property and personal rights. The legal cases and industry responses confirm that harm has already occurred. The AI system's use in unauthorized voice replication directly leads to these harms. Hence, this qualifies as an AI Incident under the framework, as the AI system's use has directly led to violations of rights and economic harm to individuals and the industry.

Build a traceability chain so AI voice-mimicry infringement leaves a trail

2026-04-10
wlaq.gmw.cn
Why's our monitor labelling this an incident or hazard?
The article centers on the risks and challenges posed by AI voice cloning technology used without consent, which could plausibly lead to violations of intellectual property and personal rights (harms under category (c)). However, it does not describe a specific event where such harm has already occurred or been directly caused by an AI system. Instead, it discusses the need for traceability and legal frameworks to prevent and address these harms. Therefore, it fits the definition of an AI Hazard, as it highlights plausible future harms from AI misuse and the need for governance measures to mitigate these risks.

AI voice-mimicry fraud cases on the rise? 1 second of audio is enough to steal a voice

2026-04-10
t.cj.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems used for voice cloning, which is a clear AI system involvement. The use of cloned voices without consent constitutes a violation of intellectual property and personal rights, which falls under harm category (c). The description of frequent AI voice fraud cases indicates that harm has already occurred, making this an AI Incident rather than a mere hazard or complementary information.

Voice actors defend their rights collectively: the chaos of AI voice theft urgently needs coordinated legal and technical governance

2026-04-11
ai.zol.com.cn
Why's our monitor labelling this an incident or hazard?
The event involves AI systems that clone human voices and use them commercially without authorization, directly harming the voice actors' intellectual property rights and professional livelihoods, which constitutes a violation of rights under the framework. The article describes realized harm (voice theft and commercial misuse) and the associated negative impacts, meeting the criteria for an AI Incident. The discussion of legal and technical responses is complementary but secondary to the primary incident of voice theft and misuse by AI.

'Nezha 2' lead voice actor loses work to AI; now even making a living is a problem

2026-04-12
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems cloning the voices of professional voice actors, leading to canceled contracts and widespread unauthorized use of their voices online. This unauthorized AI-driven voice replication has directly caused economic harm to the actors, which is a violation of their intellectual property rights and impacts their ability to earn a living. The involvement of AI in the development and use of voice cloning technology that infringes on rights and causes harm to individuals meets the definition of an AI Incident.