AI-Generated Fake Death Images of Jackie Chan Spark Misinformation Online

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

AI-generated images falsely depicting Jackie Chan's death circulated widely on social media, causing public concern and spreading misinformation. The fabricated photos, created using AI tools, were quickly debunked, with media confirming Chan is healthy and active. The incident highlights the reputational and social harm caused by AI-driven misinformation.[AI generated]

Why's our monitor labelling this an incident or hazard?

The use of AI to fabricate images falsely depicting a public figure's death constitutes misinformation that harms communities by spreading false narratives and causing distress. This fits the definition of an AI Incident as the AI system's use has directly led to harm in the form of misinformation and social disruption. Although no physical harm or legal violation is reported, the reputational and social harm is significant and clearly articulated. Hence, the event is classified as an AI Incident.[AI generated]
AI principles
Accountability, Transparency & explainability, Robustness & digital security, Safety, Democracy & human autonomy

Industries
Media, social platforms, and marketing

Affected stakeholders
General public, Other

Harm types
Reputational, Public interest

Severity
AI incident

AI system task
Content generation


Articles about this incident or hazard

Jackie Chan's Death Shakes the World After His Shocking Photo Inside a Coffin.. (What's the Story?)

2025-09-03
Al-Wafd
Why's our monitor labelling this an incident or hazard?
The event involves AI-generated fake images (deepfakes) used to spread false rumors, which is an example of AI misuse. However, since the article states these are rumors and no harm (such as injury, rights violations, or disruption) has actually occurred, it does not qualify as an AI Incident. Nor does it present a plausible future harm scenario beyond the misinformation already addressed. The main focus is on clarifying the misinformation and updating the public, which fits the definition of Complementary Information.
"Jackie Chan" Photo Sets Social Media Ablaze - Hibapress

2025-09-03
Hibapress
Why's our monitor labelling this an incident or hazard?
The event involves AI-generated fake images (deepfakes) that have caused public concern and misinformation. While the AI-generated images provoked public distress, the article does not report any direct or realized physical harm, rights violations, or other significant harms as defined for an AI Incident. The harm is potential and indirect, related to misinformation and social disruption risks. Therefore, this qualifies as an AI Hazard: the AI-generated content could plausibly lead to harm such as reputational damage, public panic, or the spread of misinformation, but no confirmed incident of harm has occurred yet.
Jackie Chan on His Deathbed.. Rumors Pursue the Global Star While His Films Continue to Succeed | Al-Masry Al-Youm

2025-09-03
Al-Masry Al-Youm
Why's our monitor labelling this an incident or hazard?
The event involves AI-generated fake images (deepfakes) that were used to spread false information about Jackie Chan's death, which is a form of harm to communities through misinformation. This qualifies as an AI Incident because the AI system's use directly led to harm (public shock and misinformation). However, since the article mainly discusses the debunking and clarification of these false claims, it serves more as Complementary Information updating on a previously existing AI Incident (the misinformation spread). The primary focus is on correcting the misinformation rather than reporting the initial harm event.
Photos of a "Dying Jackie Chan" Cause a Stir on Social Media

2025-09-03
Al-Ain News
Why's our monitor labelling this an incident or hazard?
The use of AI to fabricate images falsely depicting a public figure's death constitutes misinformation that harms communities by spreading false narratives and causing distress. This fits the definition of an AI Incident as the AI system's use has directly led to harm in the form of misinformation and social disruption. Although no physical harm or legal violation is reported, the reputational and social harm is significant and clearly articulated. Hence, the event is classified as an AI Incident.
AI-Driven Rumor of Jackie Chan's Death Shakes Social Media

2025-09-03
24.ae
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI to create fabricated images that mislead the public, constituting misinformation. This misinformation has caused social disruption and reputational harm, which falls under harm to communities. Since the AI-generated content directly led to this harm, it qualifies as an AI Incident. The article confirms the images were AI-generated and caused significant public confusion, meeting the criteria for an AI Incident rather than a hazard or complementary information.
What Is the Truth Behind the Photo of Jackie Chan on His Deathbed That Swept Social Media? | Al Khaleej

2025-09-03
Al Khaleej newspaper
Why's our monitor labelling this an incident or hazard?
The event describes the use of AI to generate false images of Jackie Chan's death, which were widely circulated and caused misinformation. The AI system's misuse directly led to harm in the form of false information spreading, which affects communities and individuals' reputations. This fits the definition of an AI Incident because the AI system's use directly led to harm (misinformation and reputational harm). The article also details the verification and denial of the false claims, confirming the harm was realized, not just potential.
Uproar Over Photos of Jackie Chan's Death.. What Does AI Have to Do With It?

2025-09-03
Alsumaria TV (Iraq)
Why's our monitor labelling this an incident or hazard?
AI systems were used to create fabricated images falsely showing Jackie Chan on his deathbed, which were widely circulated and caused public alarm. This constitutes the use of AI in generating misleading content that can harm communities by spreading misinformation. However, since the article clarifies that the images are fake and the actor is healthy, and no direct or indirect harm such as injury, rights violations, or operational disruption has occurred, this event is best classified as an AI Hazard. The AI-generated misinformation could plausibly lead to harm if believed widely, but no concrete harm has materialized yet according to the article.
Rumor of Jackie Chan's Death Sweeps Social Media

2025-09-03
Sada electronic newspaper
Why's our monitor labelling this an incident or hazard?
The article involves AI-generated fabricated images (an AI system's use) that led to misinformation spreading on social media. While misinformation can harm communities, the article does not report actual harm occurring, only the circulation of false images and rumors. Therefore, this situation represents a potential risk of harm (misinformation) but no confirmed harm has materialized. Hence, it fits best as Complementary Information, providing context on AI misuse and misinformation without constituting a new AI Incident or Hazard.
The Artificial Death of Jackie Chan!

2025-09-03
AlJadeed.tv
Why's our monitor labelling this an incident or hazard?
The use of AI to create fake images of Jackie Chan on his deathbed constitutes the use of an AI system to generate misleading content. This misinformation has caused harm to the community by spreading false information and causing unnecessary distress to fans. Since the AI-generated content directly led to social harm (panic, misinformation), this qualifies as an AI Incident under harm to communities. The article confirms the harm occurred (public concern and spread of false news), not just a potential risk.
Jackie Chan Tops the Trends, Caught Between a Death Rumor and Box-Office Success

2025-09-03
Al-Fajr (Egyptian newspaper)
Why's our monitor labelling this an incident or hazard?
The article explicitly states that the false death rumor was created using AI tools, indicating AI system involvement in generating misleading content. The rumor could plausibly lead to harm such as distress or misinformation spread, which fits the definition of an AI Hazard. However, since the rumor was quickly clarified and no actual harm or disruption is reported, it does not meet the threshold for an AI Incident. The main focus is on the rumor and its debunking, not on a response or governance action, so it is not Complementary Information. Hence, the event is best classified as an AI Hazard due to the plausible risk of harm from AI-generated misinformation.
The Truth About Jackie Chan's Death Tops Social Media.. The Global Star Continues His Artistic Career

2025-09-03
Al-Mashhad Al-Yemeni
Why's our monitor labelling this an incident or hazard?
The event involves AI-generated fake images (deepfakes) that falsely depict Jackie Chan's death, which is a misuse of AI technology leading to misinformation and potential reputational harm. However, there is no indication of direct physical harm, violation of rights, or disruption caused by the AI system. Since the harm is reputational and the article mainly addresses the misinformation and clarifies the truth, this fits best as Complementary Information, providing context and updates on an AI-related misinformation issue rather than reporting a new AI Incident or Hazard.
Jackie Chan Worries His Fans With a Hospital Photo.. And the Truth Reveals a Surprise | Al Bawaba

2025-09-04
Al Bawaba
Why's our monitor labelling this an incident or hazard?
The AI system's involvement is in generating a fake image that led to misinformation and public concern. However, the article states that the health-related rumors are false and no actual harm has occurred. The AI-generated content caused misinformation but no direct or indirect harm as defined by the framework. The article's main focus is on debunking the false claims and clarifying the situation, which is a societal response to AI-generated misinformation. Therefore, this event is best classified as Complementary Information rather than an Incident or Hazard.
Rumors of Jackie Chan's Death.. Caused by Fake AI-Generated Images

2025-09-04
albiladpress.com
Why's our monitor labelling this an incident or hazard?
The article explicitly states that AI was used to generate fake images that led to widespread false rumors about Jackie Chan's death. This misinformation can be considered harm to communities and individuals, as it spreads false narratives and causes distress. Since the AI-generated content has already been disseminated and caused harm, this qualifies as an AI Incident under the framework, specifically harm to communities and individuals due to misinformation generated by AI.