AI-Generated Fake Death Photo of Jackie Chan Sparks Misinformation

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

A Facebook page with 1.5 million followers used AI to generate a fake image of Jackie Chan on a hospital bed, falsely announcing his death. The fabricated photo and accompanying misinformation spread widely online, causing public confusion and distress before being debunked by media outlets and by Jackie Chan himself. The incident highlights AI's role in spreading harmful misinformation.[AI generated]

Why's our monitor labelling this an incident or hazard?

An AI system was used to generate a fake image that was part of a false news story about a celebrity's death. The AI-generated content caused harm by misleading the public and causing emotional distress, which qualifies as harm to communities. Since the AI system's use directly led to this harm, this event meets the criteria for an AI Incident.[AI generated]
AI principles
Accountability, Transparency & explainability, Robustness & digital security, Democracy & human autonomy

Industries
Media, social platforms, and marketing

Affected stakeholders
General public, Other

Harm types
Reputational, Psychological, Public interest

Severity
AI incident

AI system task
Content generation


Articles about this incident or hazard

Million-follower fan page's black-and-white memorial photo turns out to be AI-generated... Jackie Chan falls victim - 自由娛樂

2025-11-17
自由時報電子報
Why's our monitor labelling this an incident or hazard?
An AI system was used to generate a fake image that was part of a false news story about a celebrity's death. The AI-generated content caused harm by misleading the public and causing emotional distress, which qualifies as harm to communities. Since the AI system's use directly led to this harm, this event meets the criteria for an AI Incident.
Jackie Chan rumored dead! Black-and-white hospital-bed photo surfaces as million-follower page mourns; the truth is beyond absurd | 噓!星聞

2025-11-18
聯合新聞網 udn.com
Why's our monitor labelling this an incident or hazard?
The article explicitly states that the false death news was based on an AI-generated image, indicating AI system involvement in creating misleading content. The harm is misinformation and public confusion, which is a form of social harm but not clearly articulated as causing direct injury, rights violations, or critical disruption. Since the misinformation is already circulating, it could be considered harm to communities, but the article frames it as a rumor debunked by fact-checking and the celebrity's continued activity. There is no indication of ongoing or escalating harm or a credible risk of future harm beyond the misinformation already addressed. Thus, the event is best categorized as Complementary Information, highlighting the role of AI in misinformation and the societal response to it, rather than an AI Incident or Hazard.
Page with 1.5 million followers spreads rumor of Jackie Chan's death! AI-generated black-and-white photo of him lying in bed; his actual health condition revealed | 娛樂 | NOWnews今日新聞

2025-11-18
NOWnews 今日新聞
Why's our monitor labelling this an incident or hazard?
An AI system was explicitly involved in generating a fake image used to spread false death information about a public figure. This misinformation can cause harm to communities by spreading false narratives and potentially distressing individuals. The use of AI-generated content to create and disseminate false information constitutes an AI Incident because the AI system's use directly led to reputational and informational harm. Although no physical injury or legal rights violation is explicitly mentioned, the harm to community trust and the individual's reputation fits within the scope of harm to communities or violation of rights under the framework. Therefore, this event qualifies as an AI Incident.
1.5-million-follower page posts hospital-bed photo; Jackie Chan's death fabricated through AI misuse | 壹蘋新聞網

2025-11-17
壹蘋新聞網
Why's our monitor labelling this an incident or hazard?
An AI system was used to create a fabricated image that contributed to the dissemination of false death news about a public figure. This constitutes a violation of rights (reputational harm) and misinformation harm to communities, and is a realized harm caused by AI-generated content. It therefore qualifies as an AI Incident, given the direct role of AI in producing misleading content that caused reputational harm and the spread of misinformation.
1.5-million-follower page suddenly posts "Jackie Chan intubated photo," declaring "he has passed"! The star himself speaks out as "the truth behind the death rumor is revealed" - 民視新聞網

2025-11-17
民視新聞網
Why's our monitor labelling this an incident or hazard?
An AI system was used to create a fake image and a false death announcement, a misuse of AI technology that produced misinformation and potential harm to the community's trust and emotional well-being. Because the false information was actively spread and provoked a public reaction, this qualifies as an AI Incident involving harm to communities through misinformation. The event involves AI-generated content causing realized harm, not merely a potential risk, so it is neither a hazard nor complementary information.