AI-Generated Deepfakes Fuel Surge in Financial Fraud and Imposter Scams

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Criminals are using AI-generated deepfake voices and synthetic images to perpetrate imposter scams, deceiving victims and bypassing security systems. This has led to a surge in financial fraud, with US consumers losing $8.8 billion in a year, alarming regulators and the financial industry.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly mentions AI-generated voices and masks used by criminals to deceive victims, leading to substantial financial fraud losses. The AI systems are being used maliciously to perpetrate scams, directly causing harm to people (financial harm). Therefore, this qualifies as an AI Incident under the definition of harm to persons or groups of people through the use of AI systems.[AI generated]
AI principles
Accountability; Privacy & data governance; Robustness & digital security; Safety; Transparency & explainability; Respect of human rights; Human wellbeing

Industries
Financial and insurance services; Digital security; Media, social platforms, and marketing

Affected stakeholders
Consumers

Harm types
Economic/Property; Reputational; Psychological; Human or fundamental rights

Severity
AI incident

AI system task
Content generation

In other databases

Articles about this incident or hazard

Powered by technology, imposter scams drive new wave of fraud

2023-08-22
The Japan Times
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated voices and masks used by criminals to deceive victims, leading to substantial financial fraud losses. The AI systems are being used maliciously to perpetrate scams, directly causing harm to people (financial harm). Therefore, this qualifies as an AI Incident under the definition of harm to persons or groups of people through the use of AI systems.
Cost of AI deepfake scams to soar to $10.5 trillion by 2025: report

2023-08-22
The News International
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-powered scam technologies being exploited by criminals to deceive consumers, resulting in billions of dollars in losses. This is a direct harm to individuals and communities (financial harm and loss of trust), fitting the definition of an AI Incident. The harm is realized and ongoing, not merely potential. The involvement of AI in generating synthetic voices and images used in scams confirms the presence of AI systems contributing to the harm.
Deepfake Imposter Scams Are Driving a New Wave of Fraud

2023-08-21
BNN
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems being used to clone voices and create synthetic images to perpetrate scams that have already caused significant financial harm to consumers. The harms described include direct financial losses to individuals and the broader financial industry, which fits the definition of an AI Incident as the AI system's use has directly led to harm. The article also discusses ongoing responses and mitigation efforts, but the primary focus is on the realized harms caused by AI-enabled fraud, not just potential future risks or complementary information.
Deepfakes Are Driving a Whole New Era of Financial Crime

2023-08-23
news.bloomberglaw.com
Why's our monitor labelling this an incident or hazard?
The event involves AI systems generating realistic fake voices and images used by criminals to deceive victims and commit fraud, causing direct harm. This fits the definition of an AI Incident because the AI system's use has directly led to harm (financial crime and victimization).
Deepfake Imposter Scams Are Driving A New Wave Of Fraud

2023-08-22
BQ Prime
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI being used to 'turbocharge' fraud, with concrete financial losses reported ($8.8 billion lost in the US alone). The harm is realized and significant, affecting individuals and the financial sector. The AI system's use in generating deepfake imposters is a direct contributing factor to these harms, fitting the definition of an AI Incident involving harm to people and communities.
Deepfake Imposter Scams Are Driving a New Wave of Fraud

2023-08-22
HT Tech
Why's our monitor labelling this an incident or hazard?
The article explicitly details how AI-generated deepfake voices and synthetic images are being used by criminals to perpetrate financial fraud, causing direct financial harm to victims and operational challenges to banks. The AI systems' use in cloning voices and creating fake IDs is central to the scams, which have already resulted in billions of dollars in losses. This meets the definition of an AI Incident because the AI system's use has directly led to harm to people (financial loss) and harm to communities (widespread fraud).
How financial institutions can safeguard against deepfakes

2023-08-29
Zawya.com
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly (deepfake technology using deep learning) that have been used in cyberattacks causing financial fraud, which constitutes harm to persons and property (financial harm). The article describes realized harms from these AI-enabled attacks and the need for safeguards, thus meeting the criteria for an AI Incident. It is not merely a future risk or general discussion but references actual attacks and their impact, making it an AI Incident rather than a hazard or complementary information.
How financial institutions can safeguard against deepfakes

2023-08-29
Social News XYZ
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (deepfake generation using deep learning) and their use in digital injection attacks against financial institutions. Although no actual harm is described as having occurred, the threat is credible and rapidly scalable, with the potential to cause significant financial crime and fraud. The discussion of biometric liveness detection as a countermeasure further confirms the AI context. Because the article focuses on plausible future harm from AI-enabled deepfakes and the need for safeguards, it fits the definition of an AI Hazard rather than an Incident or Complementary Information. It is not unrelated, since it centers on AI-driven threats and responses in financial services.
How financial institutions can safeguard against deepfakes

2023-08-29
Global Partnership for Education
Why's our monitor labelling this an incident or hazard?
The article clearly involves AI systems, specifically deepfake technology and AI-based biometric authentication. Its main focus is the potential threat (plausible future harm) that deepfake-enabled digital injection attacks pose to financial institutions and consumers. No actual harm or incident is described as having occurred; instead, the article emphasizes the need for safeguards and the use of AI technologies to prevent fraud. The event therefore qualifies as an AI Hazard: it plausibly could lead to financial harm through fraud but does not describe a realized AI Incident. It is not merely general AI news or complementary information, since it focuses on the risk of harm from AI misuse in financial services.
How financial institutions can safeguard against deepfakes

2023-08-31
ITWeb
Why's our monitor labelling this an incident or hazard?
The article clearly involves AI systems, specifically deepfake generation using deep learning and AI-based biometric authentication systems. However, it does not describe any realized harm or incident resulting from these AI systems. Instead, it focuses on the plausible future harm deepfakes could cause in financial fraud and the measures to prevent such harm. Therefore, the event is best classified as an AI Hazard, as it outlines a credible risk of harm from AI misuse in financial services without reporting an actual incident.
How Can Financial Institutions Safeguard Themselves Against Deepfakes?

2023-08-30
ITNewsAfrica.com
Why's our monitor labelling this an incident or hazard?
Deepfake technology is explicitly described as an AI system that synthesizes or distorts media to impersonate individuals, facilitating cyberattacks and fraud. The article details how AI-generated deepfakes have already created financial crime risks and discusses existing AI-based countermeasures to detect and prevent such fraud. Because the misuse of AI deepfakes is causing or enabling harm to people and financial institutions, this qualifies as an AI Incident under the framework: the AI system's use has directly or indirectly led to harm (financial crime and fraud).
How Financial Institutions Can Prevent Deepfakes

2023-08-29
BizWatchNigeria.Ng
Why's our monitor labelling this an incident or hazard?
The article clearly involves AI systems, specifically deepfake generation and AI-based biometric liveness detection. The harms discussed relate to financial fraud and cybercrime, which fall under harm to property and communities. However, the article does not describe any realized harm or a specific event in which an AI system directly or indirectly caused harm; it focuses on the potential threat deepfakes pose and the technological responses that mitigate it. It is therefore best classified as Complementary Information, since it provides context, risk awareness, and mitigation strategies rather than reporting an AI Incident or AI Hazard.