AI-Driven Deepfake and Biometric Fraud Surges Across Africa


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

AI-enabled fraud, including deepfakes and biometric spoofing, is rising rapidly across Africa, particularly in the East, West, and Southern regions. Criminals use AI to manipulate identity verification systems, leading to widespread account takeovers, financial theft, and security breaches. Biometric verification systems are now primary targets, causing significant harm to individuals and businesses.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly mentions the use of AI systems (deepfakes and biometric spoofing) to manipulate biometric verification processes, which are AI-driven security measures. This manipulation leads to identity fraud, a clear violation of rights and harm to individuals and businesses. Since the fraud is actively occurring and causing harm, this qualifies as an AI Incident under the framework, as the AI system's use has directly led to harm through fraudulent impersonation and security breaches.[AI generated]
AI principles
Respect of human rights, Transparency & explainability

Industries
Financial and insurance services, Digital security

Affected stakeholders
Consumers, Business

Harm types
Economic/Property, Human or fundamental rights

Severity
AI incident

AI system task
Content generation


Articles about this incident or hazard


Warning for South Africans as deepfake fraud surges

2026-03-05
The South African

65% of West Africa fraud attempts linked to biometric spoofing -- Report

2026-03-05
Businessday NG
Why's our monitor labelling this an incident or hazard?
The report explicitly states that AI technologies, such as deepfake-generated biometric fraud, are being used to perpetrate fraud at scale, causing harm through account takeovers and financial theft. The biometric verification systems, which rely on AI for face matching, are being spoofed, leading to security breaches. This meets the definition of an AI Incident because the AI system's use and misuse have directly led to harm (financial and security-related) to persons and organizations. The involvement of AI in both the biometric systems and the fraudulent attacks is clear and central to the harm described.

AI Fraud Now Five Times More Likely After Account Login Than at Onboarding

2026-03-05
News Ghana
Why's our monitor labelling this an incident or hazard?
The report explicitly mentions the use of AI tools by fraudsters to automate credential reuse and conduct sophisticated attacks on biometric verification systems, leading to widespread fraud attempts and account takeovers. These activities constitute violations of rights and harm to individuals by compromising digital identities and security. Since the AI system's use has directly led to these harms, this qualifies as an AI Incident under the framework.

2026 Digital Identity Fraud in Africa Report

2026-03-05
JBKlutse
Why's our monitor labelling this an incident or hazard?
The report explicitly states that AI has enabled fraudsters to scale and improve the quality of deepfake fraud and other identity attacks, leading to significant realized harms such as account takeovers and fraudulent transactions. The involvement of AI in both the attack methods (deepfakes, AI-enabled automation) and defense mechanisms (LLMs for detection) is clear. The harms described include violations of individual rights and harm to communities through fraud, meeting the criteria for an AI Incident. The event is not merely a potential risk or a complementary update but documents ongoing, realized harm caused by AI systems.

Document Fraud Dominates Identity Verification Rejections in East Africa

2026-03-05
TechArena
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-enabled fraud techniques causing a significant portion of identity verification rejections, indicating the direct involvement of AI systems in both the fraudulent activity and its detection. The harms include fraud, security breaches, and the undermining of trust in identity verification systems, affecting individuals and institutions; these fall under harm to communities and violations of rights. Since the harm is occurring and is linked to the use and misuse of AI systems, this qualifies as an AI Incident rather than a hazard or complementary information.