AI-Generated Videos Exploit the Elderly and Cause Public Panic in China


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

AI-generated videos on Chinese platforms have targeted elderly users with emotionally manipulative content, leading to financial scams and psychological harm. Separately, an AI-created fake video of a building collapse caused widespread panic and misinformation. Both incidents highlight the misuse of AI for deception and harm to vulnerable groups and communities.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event explicitly involves AI systems generating realistic videos and emotional content that mislead elderly viewers, causing them to spend money on products under false beliefs. This is a direct harm to the health and well-being of a vulnerable group through deception and financial exploitation. The AI system's use is central to the harm, as it creates convincing fake personas and messages that manipulate emotions. The event meets the criteria for an AI Incident because the AI system's use has directly led to harm (financial and emotional) to a group of people (elderly individuals).[AI generated]
AI principles
Safety; Transparency & explainability

Industries
Media, social platforms, and marketing

Affected stakeholders
Consumers; General public

Harm types
Psychological; Economic/Property; Public interest

Severity
AI incident

AI system task
Content generation


Articles about this incident or hazard


Warm-Hearted Sweet Talk Aimed at the Elderly: Marketing Traps Hidden Behind the "AI Domineering CEO" (AI霸总)

2026-04-24
China News
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems generating realistic videos and emotional content that mislead elderly viewers, causing them to spend money on products under false beliefs. This is a direct harm to the health and well-being of a vulnerable group through deception and financial exploitation. The AI system's use is central to the harm, as it creates convincing fake personas and messages that manipulate emotions. The event meets the criteria for an AI Incident because the AI system's use has directly led to harm (financial and emotional) to a group of people (elderly individuals).

张口"姐姐"、闭口"想你","AI霸总"精准围猎老年人

2026-04-25
凤凰网(凤凰新媒体)
Why's our monitor labelling this an incident or hazard?
The videos are explicitly AI-generated and use AI to create emotionally manipulative content that targets elderly people, leading to their deception and financial harm. The AI system's use directly causes harm to a vulnerable group (elderly people) by exploiting their emotional needs and inducing purchases, which fits the definition of an AI Incident involving harm to communities and violations of rights. The lack of AI disclosure further exacerbates the harm. Therefore, this event qualifies as an AI Incident.

Fabricator of "Building Collapse, Multiple People Falling" Rumor Punished; AI-Synthesized Video Sparks Panic

2026-04-25
中华网科技公司
Why's our monitor labelling this an incident or hazard?
An AI system was explicitly used to create a synthetic video that falsely showed a dangerous event, leading to widespread panic and misinformation. This constitutes an AI Incident because the AI-generated content directly caused harm to the community by spreading false information and causing fear. The involvement of AI in generating the misleading video and the resulting social harm fits the definition of an AI Incident under violations of rights and harm to communities. The event is not merely a potential hazard or complementary information but a realized harm caused by AI misuse.

Person Punished for Using AI to Fabricate Luzhou, Sichuan Building Collapse; Fake Video Misleads the Public

2026-04-25
中华网科技公司
Why's our monitor labelling this an incident or hazard?
An AI system was explicitly used to generate a fake video that falsely depicted a dangerous event, misleading many viewers and causing social harm. The harm is realized as the public was misled, and the incident involved the malicious use of AI-generated content. This fits the definition of an AI Incident because the AI system's use directly led to harm to communities through misinformation and deception. The legal response and penalty further confirm the recognition of harm caused by the AI system's misuse.

"Sister" at Every Opening, "I Miss You" at Every Turn! "AI Domineering CEOs" Precisely Prey on the Elderly

2026-04-25
app.myzaker.com
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems generating video content that emotionally manipulates elderly viewers, leading to harm including psychological distress and financial risk from scams. The AI-generated videos are used in a way that misleads viewers by not clearly disclosing their AI nature, which is a misuse of AI technology causing direct harm. The harm includes violation of rights (protection from deceptive practices), harm to health (mental health impact), and harm to property (financial scams). Hence, it meets the criteria for an AI Incident as the AI system's use has directly led to harm.

Warm-Hearted Sweet Talk Aimed at the Elderly: Marketing Traps Hidden Behind the "AI Domineering CEO"

2026-04-24
杭州网
Why's our monitor labelling this an incident or hazard?
The event involves AI systems generating realistic videos with synthetic characters that emotionally manipulate elderly viewers. The AI-generated content is used as a marketing tool to deceive and exploit a vulnerable group, causing direct harm (financial loss and emotional exploitation). The AI system's development and use are central to the harm, fulfilling the criteria for an AI Incident under violations of rights and harm to communities. Therefore, this is classified as an AI Incident.

"Sister" at Every Opening, "I Miss You" at Every Turn! "AI Domineering CEOs" Precisely Prey on the Elderly

2026-04-24
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems generating video content that emotionally manipulates elderly viewers, leading to direct harm including mental health impact and financial scams. The AI-generated videos are used in marketing schemes that exploit elderly users, causing harm to individuals and communities. The lack of AI disclosure and misuse of military imagery further indicate violations and harm. Hence, the event meets the criteria for an AI Incident as the AI system's use has directly led to harm to persons and property.

"Sister" at Every Opening, "I Miss You" at Every Turn! The Elderly Are Being Targeted by the "AI Domineering CEO"......

2026-04-25
新浪财经
Why's our monitor labelling this an incident or hazard?
The event involves AI systems generating synthetic video content that emotionally manipulates elderly individuals, leading to realized harms including psychological distress and financial loss due to induced purchases and scams. The AI-generated content's lack of clear labeling constitutes misleading behavior, exacerbating harm. The direct causal link between AI-generated content and harm to elderly users fits the definition of an AI Incident, as the AI system's use has directly led to harm to persons and property.

Instant Commentary | Why the "AI Domineering CEO" Is Scarier Than the "Xiucai" Types

2026-04-25
新浪财经
Why's our monitor labelling this an incident or hazard?
The event involves AI systems generating virtual personas that directly lead to harm by deceiving elderly users, causing emotional and financial damage. This fits the definition of an AI Incident because the AI system's use has directly led to harm to people (elderly individuals) and harm to communities through exploitation and potential scams. The article describes realized harm, not just potential risk, and the AI's role is pivotal in enabling this large-scale, automated exploitation. Therefore, this is classified as an AI Incident.

Beware! "AI Domineering CEOs" Target the Silver-Haired Generation, Using "Sweet Nothings" to Cash In

2026-04-24
news.bjd.com.cn
Why's our monitor labelling this an incident or hazard?
The AI system is explicitly involved in generating the videos that manipulate elderly viewers, causing harm to their mental health and exposing them to fraud risks. The harm is realized or ongoing, not just potential, as elderly people are already affected. The event fits the definition of an AI Incident because the AI-generated content directly leads to harm to a vulnerable community, fulfilling the criteria of harm to health and harm to communities. The mention of regulatory concerns and calls for platform action further supports the seriousness of the incident.

"AI Domineering CEOs" Prey on the Elderly: An Emotional "Pig-Butchering" Scam Trap

2026-04-26
中华网科技公司
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (AI face-swapping and voice synthesis) used to generate virtual personas that deceive and exploit elderly people, causing direct harm including financial loss, privacy violations, and psychological damage. The AI's role is pivotal as it enables scalable, low-cost, and convincing scams that would be difficult without such technology. The harms described include violations of rights, harm to individuals' health and well-being, and broader societal harm. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information.

Why the "AI Domineering CEO" Is Scarier Than the "Xiucai" Types

2026-04-25
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The event involves AI systems generating virtual characters that interact with elderly users, leading to realized harms including emotional distress and financial risk (potential scams). The AI system's use in this context directly contributes to violations of rights and harm to communities, fitting the definition of an AI Incident. The article details actual harm occurring, not just potential harm, and highlights the AI system's pivotal role in enabling these harms at scale.

"I Miss You" the Moment They Speak, "Sister" Every Day: "AI Domineering CEOs" Target the Silver-Haired, Flirting in Every Style Before Getting the Elderly to Shop and Tip. Have You Seen This Around You?

2026-04-25
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The presence of AI systems is reasonably inferred from the description of "AI domineering CEO" (AI霸总) characters engaging in sophisticated, personalized interactions to manipulate elderly users. The harm includes psychological impact and financial risk (scams), which are direct harms to persons. Therefore, this event qualifies as an AI Incident, since the AI system's use caused realized harm to vulnerable individuals.

AI"霸总"甜蜜陷阱:老人被一句句姐姐和想你掏空钱包

2026-04-25
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The event involves AI systems generating realistic videos used maliciously to deceive elderly individuals, resulting in direct financial harm (scams) and psychological harm. The AI system's use in creating these deceptive videos is central to the harm caused, fulfilling the criteria for an AI Incident due to violations of rights and harm to individuals.