AI-Generated Deepfake Videos Used for Celebrity Impersonation and Scams in Vietnam

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Vietnamese director Lý Hải and his wife have warned about AI-generated fake videos and audio that impersonate them to promote unverified products and scams. The sophisticated deepfakes deceive viewers, especially the elderly, leading to financial loss and reputational harm. Authorities have intervened in some cases, highlighting the growing misuse of AI for fraud.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves AI systems that generate realistic fake videos (deepfakes) impersonating a real person without consent, which is a direct misuse of AI technology. This misuse has already led to harm by deceiving consumers into potentially fraudulent purchases, fulfilling the criteria for an AI Incident under violations of rights and harm to communities. The article explicitly states that the AI-generated videos are being used to sell products deceptively, causing real harm, not just a potential risk.[AI generated]
AI principles
Safety; Transparency & explainability

Industries
Media, social platforms, and marketing

Affected stakeholders
Consumers; General public

Harm types
Economic/Property; Reputational

Severity
AI incident

AI system task
Content generation


Articles about this incident or hazard

Lý Hải frustrated at being impersonated by AI

2026-05-01
vnexpress.net
Why's our monitor labelling this an incident or hazard?
The event involves AI systems that generate realistic fake videos (deepfakes) impersonating a real person without consent, which is a direct misuse of AI technology. This misuse has already led to harm by deceiving consumers into potentially fraudulent purchases, fulfilling the criteria for an AI Incident under violations of rights and harm to communities. The article explicitly states that the AI-generated videos are being used to sell products deceptively, causing real harm, not just a potential risk.
Why Lý Hải is upset and speaking out with a warning

2026-05-02
Thanh Niên
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated deepfake videos and audio that impersonate real people to promote products and scams, which has already led to harm by deceiving viewers and potentially causing financial and reputational damage. This fits the definition of an AI Incident because the AI system's use has directly led to harm to communities and individuals. The involvement of AI in creating these fake clips is clear, and the harm is realized, not just potential. Therefore, this event qualifies as an AI Incident.
Lý Hải issues a warning

2026-05-02
cafef.vn
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems used to create fake videos and audio impersonating public figures, which have been used to advertise unverified products and potentially scam people. The harm includes deception leading to financial loss and reputational damage, which fits the definition of harm to property and communities. The AI system's use is central to the incident, and the harm is realized or ongoing, not merely potential. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information.
Lý Hải issues a warning

2026-05-01
Báo điện tử Tiền Phong
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems being used to create fake videos and audio impersonating celebrities, which are then used to advertise unverified products and scams. This misuse of AI has directly led to harm, including deception of the public, potential financial loss, and violation of personal rights. The harm is realized and ongoing, as evidenced by warnings from the celebrities and authorities, and documented cases of police intervention. Hence, the event meets the criteria for an AI Incident.
Lý Hải speaks out to warn about his image being faked

2026-05-02
Kienthuc.net.vn
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated fake images and voices used to impersonate individuals for fraudulent advertising and scams. This misuse of AI has directly led to harm by deceiving people and potentially causing financial loss. The involvement of AI in generating synthetic media is clear, and the harm is occurring, not just potential. Hence, it meets the criteria for an AI Incident due to realized harm caused by malicious AI use.
From Vietnamese stars to international artists: grappling with the wave of AI impersonation

2026-05-03
Kienthuc.net.vn
Why's our monitor labelling this an incident or hazard?
The event involves AI systems generating fake videos and audio impersonating celebrities, which have directly caused harm such as financial loss, reputational damage, and potential health risks from fake products. The harms fall under violations of personal rights and harm to communities through fraud and deception. The article details actual incidents of harm, not just potential risks, and includes responses from authorities and affected individuals. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information.
Lý Hải warns about a series of AI videos faking his image and voice

2026-05-02
Đời sống pháp luật
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated fake videos that impersonate individuals to deceive viewers and promote fraudulent products, which constitutes a violation of personal rights and can cause harm to individuals and communities. The harm is realized as these videos are actively circulating and misleading people, fulfilling the criteria for an AI Incident. The involvement of AI in creating these videos is clear, and the harm includes deception, potential financial loss, and reputational damage, which aligns with violations of rights and harm to communities as defined in the framework.
"Phủ xanh" không gian mạng giữa làn sóng AI - Bài 1: AI thành "nhà máy" sản xuất tin giả

2026-05-03
thanhtra.com.vn
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems being used to generate fake videos and false statements that have been disseminated widely, causing misinformation and social harm. The harms include misleading the public, damaging reputations, and undermining social trust, which fall under harm to communities and violations of rights. The AI systems' role is pivotal as they enable rapid, large-scale production and dissemination of fake content, amplified by AI-powered recommendation algorithms. The article also cites official actions taken against individuals using AI to create such content, confirming realized harm. Hence, this qualifies as an AI Incident.