AI-generated images fuel earthquake misinformation

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

In early January 2025, following the magnitude 6.8 earthquake in Dingri, Tibet, and the magnitude 5.5 earthquake in Maduo, Qinghai, self-media accounts circulated old photos and AI-generated images, such as a "buried boy", to fabricate disaster scenes and solicit donations. The misinformation misled the public and complicated relief efforts; police have detained the individuals involved.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly states that AI tools were used to create false images ('small boy buried' pictures) that were widely disseminated online, causing misinformation and social harm after a natural disaster. This misinformation led to public distress, interference with rescue operations, and disrespect to victims and their families, which qualifies as harm to communities and individuals. The involvement of AI in generating the false content and its role in spreading harmful rumors meets the criteria for an AI Incident, as the AI system's use directly led to realized harm. The article also mentions legal actions taken against the perpetrator, reinforcing the incident's seriousness.[AI generated]
AI principles
Transparency & explainability; Accountability; Robustness & digital security; Safety; Democracy & human autonomy; Human wellbeing

Industries
Media, social platforms, and marketing; Digital security; Government, security, and defence

Affected stakeholders
General public

Harm types
Economic/Property; Psychological; Reputational; Public interest

Severity
AI incident

AI system task
Content generation


Articles about this incident or hazard


[Editorial] Say No to "AI-Generated Rumors" and Make Cyberspace Cleaner

2025-01-13
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The article explicitly states that AI tools were used to create false images ('small boy buried' pictures) that were widely disseminated online, causing misinformation and social harm after a natural disaster. This misinformation led to public distress, interference with rescue operations, and disrespect to victims and their families, which qualifies as harm to communities and individuals. The involvement of AI in generating the false content and its role in spreading harmful rumors meets the criteria for an AI Incident, as the AI system's use directly led to realized harm. The article also mentions legal actions taken against the perpetrator, reinforcing the incident's seriousness.

Say No to "AI-Generated Rumors" and Make Cyberspace Cleaner

2025-01-14
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI tools to create and spread false images and misinformation about a natural disaster, which has directly caused harm by misleading the public, interfering with rescue operations, and causing emotional distress. The AI system's role in generating the false content is explicit and pivotal to the incident. Therefore, this qualifies as an AI Incident due to the realized harm to communities and individuals resulting from the AI-generated misinformation.

Today's Rumor Debunking (January 9, 2025)

2025-01-13
sznews.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI-generated images to create false information about earthquakes, which misleads the public and causes harm to communities by spreading misinformation. This constitutes a violation of rights and harm to communities, fitting the definition of an AI Incident. Although the article focuses on debunking these false claims and promoting awareness, the underlying event involves realized harm due to AI misuse. Therefore, the classification is AI Incident.

Fake "Six-Fingered Boy" Photos From Tibet's Strong Quake Spread Online; Suspect Placed in Administrative Detention

2025-01-11
Liberty Times Net
Why's our monitor labelling this an incident or hazard?
An AI system was explicitly involved in generating fake images falsely associated with a real earthquake disaster, which were then widely disseminated online, causing misinformation and potential harm to communities. This constitutes a violation of rights related to truthful information and harms communities by spreading false narratives. The harm has already occurred as the misinformation was widely spread, and the AI system's role was pivotal in creating the false images. Therefore, this event qualifies as an AI Incident.

Exploiting a Major Disaster for Traffic! Over 300 Dead or Injured in Tibet Quake; Viral Image of Child Trapped in Rubble Turns Out to Be AI-Generated

2025-01-08
新頭殼 Newtalk
Why's our monitor labelling this an incident or hazard?
An AI system was used to generate fake images that are being widely shared as real disaster scenes, causing misinformation and potential harm to communities and disaster management. This constitutes harm to communities and possibly disruption of critical infrastructure management (disaster relief). Since the harm is occurring through misinformation and its consequences, this qualifies as an AI Incident rather than a hazard or complementary information.

Netizen Placed in Administrative Detention for Spreading AI Image of "Child Buried in Quake Rubble"

2025-01-10
明報新聞網 (Ming Pao News)
Why's our monitor labelling this an incident or hazard?
The event involves an AI system as the image was created by an AI tool. The misuse of this AI-generated content to spread false information about a natural disaster has led to harm by misleading the public and spreading rumors, which can be considered harm to communities. The AI system's role in generating the image is pivotal to the incident, and the misuse directly caused the harm. Therefore, this qualifies as an AI Incident.

"Six-Fingered Boy" Photo Circulating From Tibet Quake Zone Confirmed to Be AI-Generated

2025-01-08
Liberty Times Net
Why's our monitor labelling this an incident or hazard?
An AI system was used to generate realistic but fake images related to a real disaster, which were then disseminated online. Although no direct physical harm occurred from the AI-generated images themselves, the spread of such misinformation can cause harm to communities by misleading public understanding and potentially affecting disaster response or public sentiment. Since the harm is indirect and related to misinformation, and the article reports on the identification and verification of the images as AI-generated rather than the images causing direct harm, this situation fits best as Complementary Information about AI-generated misinformation risks rather than a direct AI Incident or an AI Hazard.

Fake Quake Photo! Tibetan Boy Pinned Under Rubble... Six Fingers Gave It Away

2025-01-09
UDN
Why's our monitor labelling this an incident or hazard?
An AI system was explicitly used to generate a fake image falsely representing a disaster victim, which was widely shared and believed to be real. This misinformation can disrupt effective disaster response and cause emotional harm to the public, constituting harm to communities. The AI-generated content's role is pivotal in causing this harm, meeting the criteria for an AI Incident due to indirect harm caused by the AI system's use in misinformation.

Netizen Placed in Administrative Detention for Posting Fabricated Images of a Buried Tibetan Boy to Attract Attention

2025-01-11
UDN
Why's our monitor labelling this an incident or hazard?
An AI system was explicitly involved in generating the fake image. The misuse of this AI-generated content to spread false information about a disaster constitutes a violation of rights and causes harm to communities by spreading misinformation. Since the harm (misleading the public and spreading false disaster information) has already occurred, this qualifies as an AI Incident. The administrative detention of the individual responsible further confirms the seriousness of the incident.

AI Images and Old Photos Such as "Child Buried" and "Mother Shielding Child" Spread Widely! Serious Cases May Face Legal Liability

2025-01-09
big5.cctv.com
Why's our monitor labelling this an incident or hazard?
The event involves AI systems generating and spreading false images and videos that mislead the public about a serious disaster, which constitutes harm to communities and disruption of social order. The AI-generated misinformation is actively causing harm, not just posing a potential risk, thus qualifying as an AI Incident. The article's focus on the harm caused by AI-generated fake content and the legal consequences supports this classification.

Qinghai Netizen Criminally Detained for Spreading Earthquake Rumors With Pieced-Together Fake AI Images

2025-01-11
on.cc東網
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions AI-generated images being used to spread false rumors about an earthquake, which led to public misinformation and legal action against the user responsible. The AI system's outputs were directly involved in causing harm to the community by misleading people, fulfilling the criteria for an AI Incident under the definition of harm to communities through misinformation. Therefore, this is classified as an AI Incident.

Tibet Earthquake | AI-Synthesized "Buried Child" Image Goes Viral; Authorities Warn Against Attention-Seeking Rumors

2025-01-10
std.stheadline.com
Why's our monitor labelling this an incident or hazard?
An AI system was used to generate a synthetic image falsely depicting a disaster victim, which was then widely disseminated online. This misinformation can cause harm to communities by spreading false narratives during a sensitive event, potentially leading to panic, mistrust, or exploitation by malicious actors. The harm is indirect but clearly linked to the AI-generated content. Therefore, this qualifies as an AI Incident due to the realized harm from the AI system's use in generating and spreading misleading content about a real disaster.

Tibet Earthquake | AI-Synthesized "Buried Child" Image Goes Viral; Suspect Criminally Detained

2025-01-10
std.stheadline.com
Why's our monitor labelling this an incident or hazard?
An AI system was explicitly used to create a fabricated image related to a real disaster, which was then disseminated to mislead and confuse the public, constituting harm to communities through misinformation. The event involves the use and misuse of AI-generated content leading to social harm and public deception. Therefore, it meets the criteria of an AI Incident as the AI system's use directly led to harm in the form of misinformation and social disruption.

Exploiting a Disaster for Traffic: Suspect Behind "Buried Little Boy" Image Detained

2025-01-10
香港文匯網
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions AI-generated images being used to fabricate and spread false information about a natural disaster, which misled the public and caused social harm. This constitutes a violation of public order and the spread of harmful misinformation, fitting the definition of an AI Incident due to the direct harm caused by the AI system's outputs in misleading and confusing the public.

Exploiting the Earthquake for Traffic! Rumor-Monger Behind "Buried Little Boy" Image Detained

2025-01-10
香港文匯網
Why's our monitor labelling this an incident or hazard?
An AI system was explicitly involved in generating the image used in the misinformation campaign. The misuse of this AI-generated content directly led to harm by spreading false information about a serious event (earthquake), which can cause panic, confusion, and social disruption, qualifying as harm to communities. The administrative action against the perpetrator confirms the incident's seriousness and realized harm. Therefore, this qualifies as an AI Incident due to the direct link between AI-generated content misuse and harm to communities.

Exploiting the Earthquake for Traffic! Rumor-Monger Behind "Buried Little Boy" Image Detained

2025-01-10
hkcna.hk
Why's our monitor labelling this an incident or hazard?
An AI system was explicitly involved in generating the image. The misuse of this AI-generated content directly led to harm in the form of misinformation spreading about a serious earthquake, which can cause social disruption, public fear, and interference with disaster response. This fits the definition of an AI Incident because the AI system's outputs were used to create false narratives that harmed communities and public order. The event describes realized harm, not just potential harm, and involves the use and misuse of an AI system's outputs.

Exploiting the Earthquake for Traffic! Rumor-Monger Behind "Buried Little Boy" Image Detained

2025-01-11
hkcna.hk
Why's our monitor labelling this an incident or hazard?
An AI system was explicitly involved in generating the "buried little boy" image (小男孩被埋圖). The harm arises from the misuse of this AI-generated content to spread false information about the earthquake, misleading the public and potentially disrupting social order and emergency response. The event describes actual harm (misinformation, public confusion, and a risk of social panic) caused indirectly by the AI system's outputs being manipulated and disseminated maliciously. The authorities' intervention confirms the seriousness of the harm. Hence, this is an AI Incident, as the misuse of the AI system directly led to harm to communities and public order.