AI-Driven Deepfake Fraud Surges, Prompting Defensive Innovation

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Onfido's 2024 Identity Fraud Report reveals a 31-fold increase in AI-generated deepfake fraud and a fivefold rise in forged identities, driven by accessible generative AI tools. These attacks have caused significant financial and identity-related harm, prompting Onfido to launch an AI-powered Fraud Lab to counter escalating AI-enabled fraud.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves the use of AI systems (generative AI for deepfakes) in fraudulent activities that have directly led to harm, including financial fraud and unauthorized access to sensitive information. The article details the scale of the problem (3000% increase in fraud attempts) and the methods used by fraudsters, indicating realized harm rather than just potential risk. This fits the definition of an AI Incident because the AI system's use has directly led to significant harm to individuals and organizations through fraud.[AI generated]
AI principles
Privacy & data governance, Respect of human rights, Safety, Robustness & digital security, Accountability, Transparency & explainability

Industries
Digital security, Financial and insurance services, IT infrastructure and hosting

Affected stakeholders
Consumers, Business

Harm types
Economic/Property, Human or fundamental rights, Psychological

Severity
AI incident

AI system task
Content generation


Articles about this incident or hazard

Deepfake fraud attempts are up 3000% in 2023 -- here’s why

2023-11-15
The Next Web
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (generative AI for deepfakes) in fraudulent activities that have directly led to harm, including financial fraud and unauthorized access to sensitive information. The article details the scale of the problem (3000% increase in fraud attempts) and the methods used by fraudsters, indicating realized harm rather than just potential risk. This fits the definition of an AI Incident because the AI system's use has directly led to significant harm to individuals and organizations through fraud.
'Being deepfaked showed me how easy it is to hack a bank account'

2023-11-16
Yahoo Sports
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses how deepfake AI technology is used by scammers to impersonate individuals and bypass security checks, resulting in financial fraud and identity theft, which constitute harm to individuals and communities. The AI system's use directly leads to these harms, qualifying this event as an AI Incident under the framework. The article also details the increase in such fraud attempts and the challenges in detecting deepfakes, confirming realized harm rather than just potential risk.
Onfido Launches First Fraud Lab Capable of Creating Synthetic Attacks at Scale as Deepfakes Increase 31X

2023-11-15
StreetInsider.com
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses AI systems used by fraudsters to perpetrate identity fraud and deepfake attacks, which have caused significant financial harm and security breaches. It also describes Onfido's AI-powered Fraud Lab that creates synthetic attacks to improve detection and prevention. The harms are realized and ongoing, including financial losses and identity fraud, which fall under harm to persons and communities. Therefore, this event meets the criteria for an AI Incident due to direct involvement of AI in causing harm through fraud and identity theft.
Onfido releases 2024 Identity Fraud Report and launches Fraud Lab | Biometric Update

2023-11-15
Biometric Update
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems both as tools used by fraudsters (generative AI for deepfakes and forged identities) causing harm, and as defensive AI systems (biometric verification powered by deep learning) used to detect and prevent fraud. The harms described include identity fraud and document forgery, which constitute violations of rights and harm to individuals and communities. The Fraud Lab's synthetic attack generation is a mitigation effort but does not negate the fact that AI-enabled fraud is occurring. Hence, the event qualifies as an AI Incident due to the realized harms caused by AI-powered fraud.
Onfido Fraud Lab Churns Out Fake IDs -- and Trains Anti-Fraud Tech

2023-11-16
FindBiometrics
Why's our monitor labelling this an incident or hazard?
The article details the use of AI systems to generate synthetic fraudulent data for training anti-fraud AI, which is a development and use of AI for defensive purposes. There is no direct or indirect harm caused by the AI system described, nor is there a plausible future harm from the AI system's use as presented. The event is primarily informative about AI research and defense strategies against AI-enabled fraud, fitting the definition of Complementary Information as it provides context and updates on AI ecosystem developments and responses to AI-driven fraud threats.
Onfido Launches First Fraud Lab Capable of Creating Synthetic Attacks at Scale as Deepfakes Increase 31X

2023-11-16
Financial IT
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems both in the perpetration of fraud (AI-generated deepfakes and synthetic identities used by criminals) and in the defensive AI systems developed by Onfido to detect and prevent these attacks. The harms described include financial losses (over $3.9B in fraud reportedly prevented by the platform) and the broader impact of identity fraud on individuals and communities. Since the AI systems' use has directly led to realized harms (fraud attacks causing financial and identity-related damage), this qualifies as an AI Incident; the Fraud Lab's launch is a response to an ongoing AI Incident rather than a mere hazard or complementary information.
10 signs your identity has been compromised

2023-12-08
Fox News
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI-generated deepfakes as a tool that makes identity theft harder to detect and thus more harmful. The harms described, including financial fraud, unauthorized transactions, tax fraud, and legal issues, are real and have occurred to many individuals. The AI system's role in enabling these harms is indirect but pivotal, as the technology facilitates the creation of convincing fake identities and documents used by criminals. This fits the definition of an AI Incident because the AI system's use has directly or indirectly led to harm to persons (financial and legal harm) and communities (through widespread identity theft).