AI-Generated Deepfake Videos of Deceased Singer Li Wen Cause Harm and Prompt Legal Action

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

AI-generated 'resurrection' videos of deceased singer Li Wen were created and spread online without family consent, causing significant psychological harm and distress to her family. Li Wen's mother, through legal representatives, demanded removal of the infringing content within seven days and warned of legal action against violators and platforms hosting the material.[AI generated]

Why's our monitor labelling this an incident or hazard?

An AI system is explicitly mentioned as being used to recreate videos of the deceased artist. The use of AI-generated videos for commercial purposes without family consent constitutes a violation of rights and causes emotional harm to the family, which fits the definition of an AI Incident. The article describes realized harm (emotional distress) caused by the AI system's use, not just a potential risk. Therefore, this event qualifies as an AI Incident.[AI generated]
AI principles
Accountability, Human wellbeing, Privacy & data governance, Respect of human rights, Transparency & explainability

Industries
Media, social platforms, and marketing

Affected stakeholders
Other

Harm types
Psychological

Severity
AI incident

AI system task
Content generation


Articles about this incident or hazard

Li Wen missing from the Oscars In Memoriam video; second sister Li Silin calls for help seeking justice

2024-03-28
東方網 馬來西亞東方日報
Why's our monitor labelling this an incident or hazard?
An AI system is explicitly mentioned as being used to recreate videos of the deceased artist. The use of AI-generated videos for commercial purposes without family consent constitutes a violation of rights and causes emotional harm to the family, which fits the definition of an AI Incident. The article describes realized harm (emotional distress) caused by the AI system's use, not just a potential risk. Therefore, this event qualifies as an AI Incident.

Li Wen's mother issues public statement: "AI resurrection of Li Wen" inflicts a second injury; infringing content must be taken down within 7 days

2024-03-29
新民网
Why's our monitor labelling this an incident or hazard?
An AI system is explicitly involved in generating the 'AI resurrection' videos of Li Wen. The use of AI to create these videos without consent has directly led to psychological harm and violation of the deceased's and family's rights, which fits the definition of an AI Incident. The legal actions and demands for content removal further confirm the harm caused by the AI-generated content. Therefore, this event qualifies as an AI Incident due to realized harm involving AI use.

Li Wen's mother: AI resurrection of her daughter constitutes infringement and must be taken down within 7 days

2024-03-29
聯合新聞網 udn.com
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems generating content that simulates a deceased person, which is a clear use of AI technology. The unauthorized use of this AI-generated content has caused psychological harm to the family and infringes on personality rights, including image and privacy rights. This constitutes a violation of human rights and personal rights under applicable law, fulfilling the criteria for an AI Incident. The harm is realized and ongoing, not merely potential, as the family reports significant distress and interference with their lives. Therefore, this is classified as an AI Incident.

Take them down within 7 days! Li Wen's 86-year-old mother denounces AI resurrection videos

2024-03-30
Yahoo News
Why's our monitor labelling this an incident or hazard?
An AI system was explicitly used to generate videos simulating the deceased singer's image and voice, which is a clear AI system involvement. The use of this AI-generated content without consent has caused psychological harm and legal claims of personality rights infringement, which falls under violations of human rights and causes harm to individuals and their families. The event describes realized harm, not just potential harm, and the AI system's use is pivotal in causing this harm. Therefore, this qualifies as an AI Incident.

Li Wen "AI-resurrected" after her sudden death! Her 85-year-old mother decries the second injury and issues a four-point statement

2024-03-29
Yahoo News (Taiwan)
Why's our monitor labelling this an incident or hazard?
An AI system is explicitly involved as AI technology is used to generate videos recreating Li Wen's appearance and voice. The use of these AI-generated videos without consent has caused psychological harm to her family, constituting harm to persons (psychological injury). The family's legal response and demand for removal of the content further highlight the harm caused. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to harm (psychological distress and violation of rights).

Li Wen missing from the Oscars In Memoriam video; second sister calls for help seeking justice

2024-03-28
Yahoo News (Taiwan)
Why's our monitor labelling this an incident or hazard?
The AI system's use to recreate the deceased singer's image is explicitly mentioned and involves AI-generated content. The family reports emotional distress, which can be considered harm to individuals or communities, but the article does not describe a direct AI Incident causing injury, rights violations, or other harms as per the definitions. The concern is about potential secondary harm and ethical issues, with no legal action taken yet. The main focus is on the family's reaction and societal implications rather than a concrete AI Incident or Hazard. Thus, it fits the definition of Complementary Information, providing supporting context about AI's societal impact and ethical concerns.

Li Wen resurrected with AI technology; her 86-year-old mother, heartbroken again, angrily issues "four statements" and threatens to sue

2024-03-29
自由時報電子報
Why's our monitor labelling this an incident or hazard?
An AI system was used to generate content (video and audio) that mimics a deceased person, which directly caused psychological harm to the family (harm to persons). The unauthorized use of the AI-generated likeness also implicates potential violations of rights (e.g., personality rights). The family's legal response and demand for removal indicate the harm has materialized. Therefore, this qualifies as an AI Incident due to realized harm caused by the AI system's use.

A second injury! Li Wen "AI-resurrected" after her sudden death; elder sister Li Silin speaks out in grief

2024-03-28
自由時報電子報
Why's our monitor labelling this an incident or hazard?
An AI system is explicitly involved as it is used to generate content that simulates the deceased singer, which is causing emotional harm to her family and potentially violating personal rights such as portrait rights. The harm is realized as the family expresses distress and considers legal action, indicating direct harm to persons and rights. Therefore, this event qualifies as an AI Incident due to the direct harm caused by the AI-generated content impersonating a deceased individual, leading to emotional and rights-related harm.

Brought to tears by "AI resurrection of Li Wen"! Second sister condemns its commercial use: a second injury to the departed

2024-03-28
ETtoday星光雲
Why's our monitor labelling this an incident or hazard?
An AI system is explicitly involved in generating synthetic videos of a deceased individual, which has directly led to emotional harm to the family and concerns about infringement and misuse. The harm is realized (emotional distress and rights concerns), and the AI system's role is pivotal in causing this harm. Therefore, this event qualifies as an AI Incident due to violations of rights and harm to the family (harm to persons and potential breach of rights).

"AI resurrection of Li Wen" videos go viral! Her 86-year-old mother demands a takedown within 7 days: illegal profiteering in the name of warmth

2024-03-29
ETtoday星光雲
Why's our monitor labelling this an incident or hazard?
An AI system was used to generate realistic videos and voice of a deceased person, which constitutes AI involvement. The unauthorized use and distribution of these AI-generated videos have directly led to psychological harm to the family and violations of their rights, including potential intellectual property and privacy rights. Therefore, this qualifies as an AI Incident due to realized harm caused by the AI system's use.

Statement from Li Wen's mother: AI infringing content must be taken down within 7 days to protect the dignity of the deceased

2024-03-31
中关村在线
Why's our monitor labelling this an incident or hazard?
An AI system is involved as the videos are AI-generated recreations of the deceased singer. The unauthorized creation and dissemination of these AI-generated videos have directly caused psychological harm and disturbance to the family, which qualifies as harm to persons (psychological harm). This is a violation of rights (likely intellectual property and personal rights) and thus meets the criteria for an AI Incident. The article describes realized harm and ongoing legal responses, not just potential harm or general AI news, so it is not a hazard or complementary information.

Li Wen "AI resurrection" videos go viral; her octogenarian mother issues an angry statement: take them down within 7 days

2024-03-29
中時新聞網
Why's our monitor labelling this an incident or hazard?
An AI system was used to generate videos recreating a deceased person's image and voice, which constitutes the use of AI technology. The unauthorized use and commercial exploitation of this AI-generated content has caused psychological harm and violated personal rights, fulfilling the criteria for harm to individuals and violation of rights. Therefore, this event qualifies as an AI Incident because the AI system's use has directly led to harm and legal claims.

Li Wen missing from the Oscars In Memoriam video! Her grieving family chokes up and calls for justice

2024-03-27
NOWnews 今日新聞
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI technology used to create videos of the deceased artist, which has caused emotional distress to her family, a form of harm. The AI-generated content is being used in ways that could cause further harm, such as commercial exploitation or deception, which the family wants to prevent. Although no direct legal action or confirmed incident of harm beyond emotional distress is reported, the potential for harm is clear and ongoing. Therefore, this qualifies as an AI Hazard because the AI system's use could plausibly lead to significant harm (emotional and reputational) to the family and community, but no confirmed AI Incident (direct harm) is described in the article.

AI-resurrected Li Wen opens her mouth to sing! Her 86-year-old mother, in deep pain, demands the videos be taken down within 7 days

2024-03-29
NOWnews 今日新聞
Why's our monitor labelling this an incident or hazard?
The AI system was used to generate synthetic media of a deceased individual without authorization, resulting in psychological harm and violation of the family's rights, including infringement of portrait rights. The harm is direct and realized, as the family reports significant emotional distress and interference with their lives. Therefore, this event meets the criteria for an AI Incident due to the AI system's use causing harm to persons and violation of rights.

Mainland netizens use AI to "resurrect" Li Wen; Li Silin: it is a second injury

2024-03-29
The Epoch Times
Why's our monitor labelling this an incident or hazard?
The AI system is explicitly involved as it generates realistic videos of a deceased person, which is a clear use of AI-generated content. The harm is realized and direct, as the family experiences emotional harm and considers the unauthorized use disrespectful and potentially exploitative. This constitutes a violation of rights and harm to the community (the family and fans), fitting the definition of an AI Incident. The article describes actual harm caused by the AI system's use, not just potential harm or general information, so it is classified as an AI Incident.

Mainland netizens use AI to "resurrect" Li Wen; Li Silin: it is a second injury

2024-03-29
The Epoch Times
Why's our monitor labelling this an incident or hazard?
The AI system is explicitly involved as it was used to generate realistic videos and audio of the deceased celebrity without consent. This unauthorized use constitutes a violation of rights (intellectual property and personal rights of the deceased and their family) and causes harm to the community by spreading potentially deceptive content. The family's response and legal actions indicate that harm has materialized. Therefore, this qualifies as an AI Incident due to the direct harm caused by the AI system's use.

Statement from Li Wen's mother: AI infringing content must be taken down within 7 days to protect the dignity of the deceased

2024-03-31
凤凰网(凤凰新媒体)
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated content (AI 'resurrection' videos) created and spread without consent, causing psychological harm and violation of rights to the deceased's family. The AI system's use in producing unauthorized deepfake videos is central to the harm described. The legal response and demand for takedown further confirm the recognition of harm caused by AI misuse. Therefore, this qualifies as an AI Incident under the definitions provided, as the AI system's use has directly led to harm to persons and violation of rights.

Li Wen missing from the Oscars In Memoriam video; second sister calls for help seeking justice

2024-03-28
中時新聞網
Why's our monitor labelling this an incident or hazard?
An AI system was used to generate a video of the deceased artist, which caused emotional distress to her family and raised concerns about unauthorized commercial use. This fits the definition of an AI Hazard because the AI-generated content could plausibly lead to harm (emotional harm to family, potential rights violations) though no direct or legally recognized harm or incident has been reported. The article focuses on the family's reaction and concerns rather than a confirmed AI Incident. Hence, the event is best classified as an AI Hazard.

Not yet over the loss of her sister, Li Silin is deeply saddened by the Li Wen AI videos

2024-03-27
明報新聞網
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems to generate digital representations of deceased celebrities, which has caused emotional harm to the family members, a form of injury or harm to persons. The family also highlights concerns about misuse for commercial or fraudulent purposes, which could further exacerbate harm. The AI system's role in creating these digital likenesses is central to the harm experienced, fulfilling the criteria for an AI Incident. Although no physical harm is reported, emotional and psychological harm to family members is a recognized form of harm under the framework. Therefore, this event is classified as an AI Incident.

Li Silin says Li Wen AI videos that are not commemorative in nature amount to a second injury to the family

2024-03-27
明報 Our Lifestyle
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems to generate videos of a deceased person without family consent, causing emotional harm to the family members. This constitutes a violation of rights and causes harm to the community (the family and potentially broader societal norms about respect for the deceased). The AI system's use has directly led to harm (emotional distress) to persons, fitting the definition of an AI Incident. The article explicitly mentions the harm caused by the AI-generated content and the family's response, confirming realized harm rather than a potential risk.

Li Wen's mother: AI resurrection of her daughter constitutes infringement and must be taken down within 7 days

2024-03-29
Central News Agency
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of AI systems to generate content that simulates a deceased person, which has directly led to psychological harm and violation of personality rights of the deceased and their family. This fits the definition of an AI Incident because the AI system's use has directly caused harm to persons (psychological harm and violation of rights). The legal warning and demand for takedown further confirm the recognition of harm caused by the AI-generated content. Therefore, this event is classified as an AI Incident.

Li Wen's mother: AI resurrection of her daughter constitutes infringement and must be taken down within 7 days

2024-03-29
看中国
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems used to generate audiovisual content that simulates deceased individuals, which is a clear AI system involvement. The unauthorized use and dissemination of this AI-generated content have directly led to psychological harm to the family, which qualifies as harm to persons (psychological harm) and a violation of rights (personality rights). Therefore, this meets the criteria for an AI Incident because the AI system's use has directly caused harm and legal violations. The legal warnings and demands for takedown are responses to this incident, not the primary focus of the article, so this is not merely Complementary Information.

Li Wen's mother: AI resurrection of her daughter constitutes infringement and must be taken down within 7 days

2024-03-29
看中国
Why's our monitor labelling this an incident or hazard?
The article explicitly describes the use of AI to generate content that simulates deceased individuals, causing psychological harm to their families and infringing on legal rights. The AI system's use is central to the harm, as it produces unauthorized representations of the deceased, leading to emotional distress and legal infringement. Therefore, this qualifies as an AI Incident due to realized harm (psychological and rights violations) directly linked to the AI system's use.

AI "resurrection" of Li Wen inflicts a second injury; her mother demands infringing content be taken down within 7 days

2024-03-29
std.stheadline.com
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions the use of AI to 'revive' Li Wen through generated videos, which are distributed online without family consent and for profit. This unauthorized use of AI-generated likeness causes direct harm to the family by infringing on their legal rights and causing psychological distress, fulfilling the criteria for an AI Incident under violations of human rights and harm to individuals. Therefore, this is classified as an AI Incident.

Li Wen's 85-year-old mother sends a lawyer's letter blasting "AI resurrection of Li Wen", issuing a four-point statement and a seven-day takedown deadline

2024-03-29
std.stheadline.com
Why's our monitor labelling this an incident or hazard?
The article explicitly describes AI-generated videos and voice content that simulate a deceased individual, which is an AI system's use. The unauthorized commercial exploitation of this AI-generated content has caused psychological harm to the family and infringes on their rights, fulfilling the criteria for harm under the AI Incident definition. The family's legal action and public statements confirm the harm has materialized, not just a potential risk. Hence, this is an AI Incident rather than a hazard or complementary information.

Illegal profiteering in the name of warmth: Li Wen's mother demands "AI resurrection of Li Wen" videos be taken down within seven days

2024-03-29
扬子网(扬子晚报)
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems used to generate videos of a deceased person without consent, leading to direct psychological harm and rights violations for the family. The AI-generated content is being commercially exploited, which constitutes illegal profit and infringement. The harm is realized and ongoing, including emotional distress and interference with the family's life, fulfilling the criteria for an AI Incident. The involvement of AI in generating the videos and the resulting harm to individuals and their rights is clear and direct.

Oscars omit Li Wen from tribute video; second sister: help me seek justice

2024-03-28
早报
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI technology being used to recreate videos of deceased celebrities, which is an AI system's use. The family expresses concern about the emotional and potential legal harm caused by commercial use of such AI-generated content, indicating plausible harm to the family and possibly rights violations. However, no direct harm or legal action has occurred yet, and the family is seeking to have the videos removed. This fits the definition of an AI Hazard, where AI use could plausibly lead to harm but no incident has yet materialized. The Oscar tribute omission is unrelated to AI and does not affect classification.

Guangdong law firm issues statement: retained by Li Wen's mother, it demands AI infringing content be taken down within 7 days

2024-03-29
杭州网
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems used to generate unauthorized videos of a deceased individual, leading to violations of rights (privacy, image rights) and causing psychological harm to the family. This fits the definition of an AI Incident because the AI system's use has directly led to harm (violation of rights and psychological impact). The legal response and demands for content removal further confirm the recognition of harm caused by AI-generated content.

新京报 (The Beijing News)

2024-03-29
bjnews.com.cn
Why's our monitor labelling this an incident or hazard?
The event describes the use of AI to generate videos of a deceased individual without family consent, which has caused significant psychological harm and interference with the family's life. This constitutes a violation of rights and harm to persons, fitting the definition of an AI Incident due to the direct harm caused by the AI system's use.

Li Wen's mother demands "AI resurrection of Li Wen" videos be taken down within 7 days

2024-03-29
163.com
Why's our monitor labelling this an incident or hazard?
An AI system is explicitly involved in generating videos that simulate the deceased singer's image and voice, constituting the use of AI in content creation. The unauthorized use and distribution of these AI-generated videos have directly caused psychological harm and distress to the family, which qualifies as harm to persons (psychological harm). Therefore, this event meets the criteria of an AI Incident because the AI system's use has directly led to harm (psychological and emotional) to individuals (the family).

Li Wen AI resurrection videos go viral! Her 87-year-old mother speaks out in grief, "demanding all videos be taken down within 7 days"

2024-03-29
三立新聞
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems generating videos of deceased individuals, which directly led to psychological harm and distress to the family, as well as legal violations concerning personality rights. The AI-generated videos are causing real harm to the family and community, fulfilling the criteria for an AI Incident. The involvement of AI in generating the content and the resulting harm to the family and violation of rights justifies classification as an AI Incident rather than a hazard or complementary information.

Li Wen AI-resurrected by netizens amid fabricated rumors that she had a child; second sister: my mother would be scared to death if she saw it

2024-03-29
163.com
Why's our monitor labelling this an incident or hazard?
An AI system is explicitly involved in generating the simulated videos of Li Wen, which are being used to spread false information and cause emotional harm to her family. This constitutes a violation of rights and harm to the community (family and fans), fulfilling the criteria for an AI Incident. The harm is realized, not just potential, as the family is emotionally affected and reputational damage is occurring. Therefore, this event is classified as an AI Incident.

Li Wen "resurrected" by netizens using AI; her second sister: it is very saddening, and I worry our mother will be frightened

2024-03-28
163.com
Why's our monitor labelling this an incident or hazard?
An AI system is explicitly involved in generating synthetic video and audio of a deceased person, which has caused emotional distress to the family and raised concerns about potential rights violations. Although no physical harm or critical infrastructure disruption is involved, the emotional harm to the family and potential violation of personal rights constitute harm under the framework. The event describes realized harm (emotional distress and rights concerns) caused by the AI-generated content, thus qualifying as an AI Incident.

Li Wen missing from the Oscars In Memoriam video! Second sister, unable to bear it any longer, speaks out: help me seek justice

2024-03-28
三立新聞
Why's our monitor labelling this an incident or hazard?
The event involves AI systems used to generate videos of deceased individuals, which can plausibly lead to harm such as emotional distress to families and potential misuse for commercial or deceptive purposes. Since no actual harm has been reported yet, but there is a credible risk of harm, this qualifies as an AI Hazard. The article focuses on the potential negative consequences and the family's concerns rather than a realized incident.

Li Wen's mother demands AI infringing content be taken down within 7 days

2024-03-29
sznews.com
Why's our monitor labelling this an incident or hazard?
An AI system was used to generate deepfake videos ('AI复活李玟'), which is a clear AI involvement. The unauthorized use of Li Wen's likeness without family consent has led to psychological harm and interference with the family's life, which qualifies as harm to persons and a violation of rights. Therefore, this event meets the criteria of an AI Incident due to the realized harm caused by the AI-generated content.

Li Wen AI resurrection videos spread wildly; second sister responds publicly for the first time: "Mommy would be scared to death if she saw them"

2024-03-28
三立新聞
Why's our monitor labelling this an incident or hazard?
An AI system is explicitly involved as AI technology is used to generate videos of deceased individuals. The use of AI-generated content without consent has directly led to emotional harm and rights violations, as expressed by the family members' reactions and public controversy. The AI system's use in creating these videos without authorization constitutes a breach of obligations intended to protect personal and intellectual property rights, fulfilling the criteria for an AI Incident. The article does not describe potential or future harm but actual harm and controversy arising from the AI-generated content's dissemination.

Li Wen's mother: Take it down! Stop immediately!

2024-03-30
金羊网
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems used to generate unauthorized videos of a deceased individual, which has caused harm to the family through violation of portrait and voice rights and psychological distress. The use of AI to create these videos without consent constitutes a breach of intellectual property and personal rights, fulfilling the criteria for an AI Incident under violations of human rights or breach of legal obligations. The harm is realized and ongoing, not merely potential, and legal measures are being taken in response.

Statement from lawyers retained by Li Wen's mother: "AI resurrection" inflicts a second injury on the family; take it down within 7 days

2024-03-29
搜狐新闻
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems used to create unauthorized 'AI resurrection' videos of a deceased individual, which has directly caused psychological harm to her family, constituting a violation of personal rights and privacy under applicable law. The use of AI-generated content without consent and for profit, causing distress and interference with the family's life, fits the definition of an AI Incident due to realized harm (psychological and rights violations).

Statement from lawyers retained by Li Wen's mother: "AI resurrection of Li Wen" inflicts a second injury on the family

2024-03-29
police.news.sohu.com
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated videos and audio simulating Li Wen, an AI system's use. The unauthorized use of AI to 'resurrect' Li Wen has directly caused psychological harm to her family and legal infringements, fulfilling the criteria for an AI Incident. The harm is realized, not just potential, as the family has suffered emotional distress and legal rights violations due to the AI-generated content. Therefore, this is classified as an AI Incident.

Li Wen's mother issues lawyer's statement: boundaries must be drawn for the use of AI to resurrect celebrities

2024-03-30
搜狐新闻
Why's our monitor labelling this an incident or hazard?
The article explicitly describes AI systems being used to create digital likenesses of deceased celebrities without authorization, which has led to direct harms including infringement of personality rights, emotional harm to families, and unauthorized commercial exploitation. These harms fall under violations of human rights and legal obligations protecting personality rights. The AI system's use is central to these harms, fulfilling the criteria for an AI Incident. The article also discusses the need for legal and regulatory responses, but the primary focus is on realized harms caused by AI use, not just potential future risks or general commentary, so it is not merely Complementary Information or an AI Hazard.

Li Wen's sister responds to AI Li Wen, saying it would scare their mother to death if she saw it

2024-03-27
搜狐--娱乐频道
Why's our monitor labelling this an incident or hazard?
The article discusses the family's reaction to the AI-generated likeness of Li Wen, highlighting concerns about disrespect and potential commercial misuse. While AI is involved in generating the likeness, no direct harm or incident is reported, only concerns and opinions. This fits the definition of Complementary Information, as it provides societal response and context to AI use rather than describing a realized harm or a plausible future harm event.

"Back from the dead" becomes reality? The AI resurrection business is booming, and the first vendors are already making a fortune

2024-03-29
163.com
Why's our monitor labelling this an incident or hazard?
The event involves AI systems that generate realistic digital representations of deceased individuals, which is explicitly described. The use of these AI systems has directly led to harms including emotional distress to family members, violations of personality and intellectual property rights, and unauthorized commercial use. The article documents actual incidents of harm, including family members' objections and legal concerns, as well as emotional dependency and psychological risks to users. These harms are not hypothetical but have materialized, fulfilling the criteria for an AI Incident. The AI's role is pivotal as it enables the creation and dissemination of these digital resurrections, which cause the described harms. Hence, the classification as AI Incident is appropriate.