AI-Driven Deepfake Scams Cause Financial Harm in Vietnam

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

In Vietnam, criminals are increasingly using AI-powered deepfake technology to impersonate individuals and organizations in financial scams, causing significant monetary losses. Authorities and cybersecurity experts have reported real cases of fraud, especially during the Tet holiday, and are raising public awareness to combat these AI-enabled threats.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly mentions the use of deepfake AI technology to impersonate individuals and organizations to commit fraud, resulting in actual financial harm to victims. This meets the definition of an AI Incident because the AI system's use has directly led to harm (financial loss and deception). The discussion of real scams and losses confirms that harm has materialized, not just potential risk. Therefore, this event is classified as an AI Incident.[AI generated]
AI principles
Privacy & data governance
Accountability

Industries
Digital security
Financial and insurance services

Affected stakeholders
General public

Harm types
Economic/Property

Severity
AI incident

AI system task
Content generation


Articles about this incident or hazard

The nature of deepfake technology and common scam tactics

2026-01-29
Báo Công an nhân dân điện tử
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of deepfake AI technology to impersonate individuals and organizations to commit fraud, resulting in actual financial harm to victims. This meets the definition of an AI Incident because the AI system's use has directly led to harm (financial loss and deception). The discussion of real scams and losses confirms that harm has materialized, not just potential risk. Therefore, this event is classified as an AI Incident.
Tết Bính Ngọ 2026: Warning about blue-tick verified fan pages scamming tourists

2026-01-29
TUOI TRE ONLINE
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI tools, including deepfake technology, to create fake content and impersonate trusted entities, which directly facilitates scams and fraud causing financial harm to people. The AI systems are central to the deception methods described, making this an AI Incident. The harms include violation of property rights (financial theft) and harm to individuals and communities through fraud. The involvement of AI is not speculative but clearly stated as part of the criminal methods. Hence, this qualifies as an AI Incident rather than a hazard or complementary information.
A05 experts warn of four forms of online fraud during Tết Bính Ngọ 2026

2026-01-29
VietNamNet News
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI and deepfake technologies by criminals to create fraudulent schemes that have already caused harm by deceiving and stealing money from victims. The involvement of AI in generating fake content and scenarios is central to the scams described. The harms are realized (financial theft), and the AI systems' use is a direct contributing factor. Hence, this is an AI Incident rather than a hazard or complementary information. The article is not merely a warning or general news but reports on ongoing and expected harms involving AI-enabled scams.
AI and deepfakes mean 'seeing and hearing it yourself' is no longer trustworthy

2026-01-29
Thanh Niên
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems such as deepfake and voice cloning being used by criminals to impersonate people and organizations to commit fraud, resulting in actual financial harm. This fits the definition of an AI Incident because the AI system's use has directly led to harm to people (financial loss) and communities (widespread scams). The article does not merely warn about potential harm but reports realized harm and ongoing attacks. Hence, it is classified as an AI Incident.
Identifying new forms of 'Deepfake'

2026-01-29
baodientu.chinhphu.vn
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (deepfake technology) to create realistic fake images, voices, and videos that are actively used in scams causing financial harm and security breaches. The harms described include direct financial losses (property harm) and threats to digital security (harm to communities). The article reports realized harm (e.g., $200 million in losses in Q1 2025) and ongoing incidents, not just potential risks. Therefore, it meets the criteria for an AI Incident due to the direct involvement of AI systems in causing harm.
Experts offer advice to help users stay safe online during Tet

2026-01-29
VietnamPlus
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI systems (deepfake and voice cloning) to create realistic fake images, voices, and videos that are used to deceive people into transferring money or providing sensitive information. These AI-enabled scams have already caused significant financial harm globally and to individuals, which fits the definition of an AI Incident as the AI system's use has directly led to harm. The article also includes expert analysis and warnings but focuses on the realized harms and ongoing risks, not just potential future harm or general information. Hence, it is not merely complementary information or an AI hazard.
Financial scams exploiting deepfake technology grow increasingly sophisticated - Tạp chí Doanh nghiệp Việt Nam

2026-01-29
Official publication of the Vietnam Association of Science and Technology Enterprises
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (deepfake technology) and discusses harms caused by their misuse (financial scams). However, it does not report a specific incident of harm caused by AI, nor does it describe a new hazard event with plausible future harm. Instead, it focuses on a public awareness campaign, expert warnings, and statistical context about deepfake scams. This fits the definition of Complementary Information: it enhances understanding of societal and governance responses to AI-related harms without itself reporting a new incident or hazard.
Deepfake traps during Tết Nguyên đán: When what you 'see and hear' may not be real

2026-01-30
Báo Nhân Dân điện tử
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of deepfake and deepvoice AI technologies by criminals to impersonate people in video calls and voice messages to commit fraud, resulting in actual financial losses to victims. This constitutes harm to individuals (financial harm) caused directly by the use of AI systems. The article also provides statistics on the number of fraud cases and financial damage, confirming that harm has materialized. Hence, this is an AI Incident rather than a hazard or complementary information.
"Vui Tết an toàn - không lo Deepfake": Chung tay đẩy lùi lừa đảo trực tuyến

2026-01-31
cafef.vn
Why's our monitor labelling this an incident or hazard?
The article describes ongoing harms caused by AI systems (deepfake technology) in the form of online fraud and impersonation scams, which have directly led to financial harm to victims. However, the article itself is primarily about raising awareness, sharing expert insights, and promoting preventive measures rather than reporting a new or specific AI incident or hazard event. Therefore, it fits best as Complementary Information, providing context and societal response to existing AI-related harms rather than describing a new incident or hazard.
'Slow down - verify - protect' to avoid deepfake scams

2026-01-30
Báo Pháp Luật TP. Hồ Chí Minh
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions Deepfake technology, an AI system capable of generating realistic fake images and voices, being used in scams that have already caused financial losses to victims. The harm is direct and realized, as victims have been defrauded. The discussion of prevention and awareness campaigns further supports that the harm is ongoing and significant. Hence, this is an AI Incident involving the use and misuse of AI systems leading to harm to people (financial injury).
Tightening security and safety in digital banking transactions

2026-01-31
cafef.vn
Why's our monitor labelling this an incident or hazard?
The article involves AI systems indirectly: cybercriminals use AI to enhance fraud and attack techniques, causing significant financial harm to individuals and the banking system. These AI-enabled malicious activities have led to realized harm (financial losses, fraud). However, the article primarily focuses on the regulatory and technical measures taken by the banking sector and authorities to mitigate these harms, including new security standards and monitoring systems. It is therefore best classified as Complementary Information, providing detailed context on responses to an ongoing AI Incident rather than reporting a new incident itself.
If you receive this call, absolutely do not say 'yes' or 'right' - hang up immediately

2026-01-31
cafef.vn
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI or automated systems in phone scams that directly lead to harm by enabling fraud and identity theft, which are violations of rights and cause harm to individuals. Since the harm is occurring and the AI system's role is pivotal in enabling these scams, this qualifies as an AI Incident.
Criminals using AI and the impersonation trap

2026-02-02
Báo Công an nhân dân điện tử
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI to create fake images and videos that impersonate high-profile individuals, which are then used to defraud victims of large sums of money. The harm is direct and significant, involving financial loss and social disruption. The AI systems' use in generating deceptive content is central to the incident, fulfilling the criteria for an AI Incident as the AI's development and use have directly led to harm to people and communities. The detailed description of actual fraud cases and law enforcement actions confirms that the harm is realized, not just potential.