AI-Driven Deepfake Scams Exploit Romance, Investment and Tech Support


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Scammers in Vietnam and beyond use AI and deepfake tools to impersonate celebrities, soldiers and tech-support staff, generating lifelike images, voices and scripts for investment, romance, e-commerce and remote-access scams. Google, Meta, Haiphong police and Europol warn of rising AI-driven fraud, blocking millions of high-risk app installations and accounts.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly states that AI was used to mimic the voice of a public official in fraudulent calls, resulting in a significant financial loss to the victim. This is a realized harm (financial fraud) caused directly by the use of an AI system (voice synthesis for impersonation). It therefore meets the criteria for an AI Incident: the AI system's use directly led to harm to a person (financial loss).[AI generated]
AI principles
Privacy & data governance · Respect of human rights · Robustness & digital security · Safety · Transparency & explainability · Accountability · Democracy & human autonomy · Human wellbeing

Industries
Media, social platforms, and marketing · Digital security · Consumer services · Financial and insurance services · IT infrastructure and hosting · Government, security, and defence

Affected stakeholders
Consumers

Harm types
Economic/Property · Psychological · Reputational · Human or fundamental rights · Public interest

Severity
AI incident

Business function:
Marketing and advertisement · Citizen/customer service

AI system task:
Content generation · Interaction support/chatbots


Articles about this incident or hazard


Italy's Defence Minister impersonated by AI; former Inter Milan president tricked into transferring €1 million

2025-02-13
TUOI TRE ONLINE
Why's our monitor labelling this an incident or hazard?
The article explicitly states that AI was used to mimic the voice of a public official in fraudulent calls, resulting in a significant financial loss to the victim. This is a realized harm (financial fraud) caused directly by the use of an AI system (voice synthesis for impersonation). It therefore meets the criteria for an AI Incident: the AI system's use directly led to harm to a person (financial loss).

Google warns of five online scam tactics in Vietnam

2025-02-11
vnexpress.net
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI in creating deepfake videos, AI-enhanced scam techniques, and AI-generated content to deceive victims, which has directly led to financial and data-related harms. The harms include fraud, identity theft, unauthorized access to personal and financial information, and potential legal repercussions for victims. The AI systems' malicious use is central to the scams' effectiveness and the resulting harm, fulfilling the criteria for an AI Incident. The article does not merely warn about potential risks but describes ongoing harms caused by AI-enabled scams, thus it is not an AI Hazard or Complementary Information. It is not unrelated because AI involvement and harm are clearly stated.

Italian billionaires report being scammed by an AI imitation of the Defence Minister's voice

2025-02-11
vnexpress.net
Why's our monitor labelling this an incident or hazard?
The article explicitly states that AI was used to generate the voice of the Minister of Defense, which was then used to deceive billionaires into transferring money. This use of AI directly caused financial harm to at least one victim and posed a significant risk to others targeted. The harm is realized (money was transferred), and the AI system's role was pivotal in enabling the fraud. Therefore, this qualifies as an AI Incident under the framework, as it involves the use of an AI system leading directly to harm (financial loss and deception).

When artificial intelligence becomes a powerful assistant to online romance scammers

2025-02-11
VietNamNet News
Why's our monitor labelling this an incident or hazard?
The article explicitly describes AI systems (language models, deepfake technology, chatbots) being used by scammers to perpetrate online romance fraud, which has caused actual harm to victims (financial loss, emotional harm). The AI systems are not hypothetical or potential threats but are actively enabling and amplifying the scams. This meets the definition of an AI Incident because the AI system's use has directly led to harm to people and communities. The article also discusses the nature of the AI involvement (use of AI-generated scripts, voices, and images) and the resulting harms, fulfilling the criteria for classification as an AI Incident rather than a hazard or complementary information.

Google lists five common online scam tactics and tips for staying safer

2025-02-11
VietNamNet News
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI to create deepfake videos and sophisticated fake websites that have been used in active scams causing financial and data harm to victims. This constitutes direct involvement of AI systems in causing harm (AI Incident). The harms include financial loss, data breaches, and potential legal consequences for victims, which align with the definitions of AI Incident. The article also provides safety tips but the main focus is on the ongoing harms caused by AI-enabled scams, not just potential risks or responses, so it is not Complementary Information or AI Hazard.

Telecom and Internet providers barred from offering services carrying scam content

2025-02-11
VietNamNet News
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (AI software and deepfake technology) by criminals to produce fraudulent content that leads to harm, specifically fraud and asset theft. The harms described include violations of property rights and harm to individuals and communities through deception and extortion. Since these harms are occurring and directly linked to the use of AI systems, this qualifies as an AI Incident. The article focuses on the realized harm caused by AI-enabled fraud, not just potential or future risks, and details responses to ongoing criminal activity.

Google warns of five common scam tactics in Vietnam

2025-02-11
TUOI TRE ONLINE
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI to create deepfake videos and sophisticated fake websites that have been used in active scams causing financial harm to victims. The AI systems' development and use have directly led to violations of property rights and harm to individuals and communities through fraud. This fits the definition of an AI Incident, as the AI system's use has directly led to significant harm. The article is not merely a warning or potential risk but describes ongoing harms and active scams involving AI.

Haiphong police warn of high-tech tactics that impersonate agencies and organisations, gain access to citizens' accounts, and fake voices and facial recognition to commit fraud

2025-02-11
cafef.vn
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (voice imitation and facial recognition) by criminals to commit fraud and steal assets from individuals. This constitutes direct harm to property and individuals through malicious use of AI technology. Since the harm is occurring and the AI systems are pivotal in enabling these crimes, this qualifies as an AI Incident under the framework.

Facebook warns of scam tactics ahead of Valentine's Day on 14 February

2025-02-13
Kienthuc.net.vn
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI and deepfake technologies to create fake identities and videos that deceive victims into transferring money or digital assets, which constitutes direct harm to individuals. The AI systems are used maliciously to impersonate others and facilitate fraud, leading to realized financial harm. Meta's removal of fake accounts and the use of AI-based facial recognition for identity verification further confirm the AI system involvement and the harm caused. Hence, this is an AI Incident due to the direct, realized harm caused by AI-enabled scams.

More than 1.5 million high-risk installations and 8,000 malicious apps blocked

2025-02-11
hanoimoi.vn
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI to create and enhance scam techniques that have already resulted in harm to victims, including financial fraud and data theft. The AI systems are used in the development and use phases to generate fake content and impersonations that facilitate these harms. Since the harms are realized and directly linked to AI-enabled scams, this qualifies as an AI Incident under the framework.