Deepfake Surge Drives Identity Fraud Spike in Singapore

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Singapore experienced a 207% increase in identity fraud in 2024, the highest in the Asia-Pacific region, driven by deepfake technology. The surge, reported by identity verification firm Sumsub, highlights the growing use of AI to create fake documents and facilitate fraud networks, with significant consequences for identity security.

Why's our monitor labelling this an incident or hazard?

Criminals are actively using AI systems (deepfake generation, automated forgery tools, credential-stealing malware) to execute account takeovers, manipulate identity documents, and inflict financial losses worldwide. These activities have directly resulted in widespread harm to individuals and businesses, qualifying as an AI Incident.
AI principles
Privacy & data governance
Robustness & digital security
Safety
Accountability
Respect of human rights
Transparency & explainability

Industries
Digital security
Financial and insurance services
Government, security, and defence

Affected stakeholders
General public

Harm types
Economic/Property
Human or fundamental rights
Reputational

Severity
AI incident

AI system task
Content generation

Articles about this incident or hazard

Identity Fraud in the Age of AI: Account Takeover Scams Soar 250%

2024-11-21
Finance Magnates
Why's our monitor labelling this an incident or hazard?
Criminals are actively using AI systems (deepfake generation, automated forgery tools, credential-stealing malware) to execute account takeovers, manipulate identity documents, and inflict financial losses worldwide. These activities have directly resulted in widespread harm to individuals and businesses, qualifying as an AI Incident.
Singapore registers Asia-Pacific's biggest spike in identity fraud, driven by deepfake surge

2024-11-21
CNA
Why's our monitor labelling this an incident or hazard?
Deepfakes are AI-generated manipulated media used to impersonate individuals in identity fraud. The article documents a substantial year-on-year rise in such AI-enabled fraud cases, representing direct harms (financial loss, identity theft) caused by AI misuse. Therefore, it qualifies as an AI Incident.
Sumsub: APAC Sees 121% Increase in Identity Fraud, with Deepfakes Becoming a Growing Threat

2024-11-21
Taiwan News
Why's our monitor labelling this an incident or hazard?
The article explicitly describes the rise of deepfake fraud, in which AI-generated synthetic media is used maliciously to commit identity fraud. The involvement of AI systems is clear, and the resulting harms to individuals and businesses through deception and financial loss are realized and ongoing rather than merely potential, so the event qualifies as an AI Incident.
Sumsub: APAC Sees 121% Increase in Identity Fraud, with Deepfakes Becoming a Growing Threat

2024-11-21
The Manila Times
Why's our monitor labelling this an incident or hazard?
The article explicitly identifies deepfake fraud, an AI-generated synthetic media technique, as a growing and realized form of identity fraud causing harm in the APAC region. The harms include financial losses and erosion of trust, affecting individuals and communities, and the report's data on actual fraud attempts confirms that harm is occurring rather than merely potential. The direct link between the AI system's use and these harms fulfils the criteria for an AI Incident.
Deepfake attacks now occur every five minutes, Entrust report warns

2024-11-19
Biometric Update
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (generative AI and deepfake technology) to perpetrate identity fraud and biometric fraud attacks, which have directly caused harm to individuals and organizations, including financial losses and violations of rights. The article describes realized harms occurring frequently (every five minutes), not just potential risks. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use has directly led to significant harms.
Identity Fraud in Africa Rises Sharply, Deepfakes Lead

2024-11-21
OCCRP
Why's our monitor labelling this an incident or hazard?
Deepfake technology is an AI system that generates synthetic video and audio content. The article explicitly links the rise in identity fraud to deepfakes being exploited to impersonate individuals and commit fraud, leading directly to realized harms such as financial losses and erosion of trust in digital services. Because the harm is materialized and ongoing rather than merely a potential risk, the event qualifies as an AI Incident.
Deepfake Fraud Jumps by 194% in APAC, as Fraud-as-a-Service Becomes More Widespread, Sumsub Reveals

2024-11-20
The Fintech Times
Why's our monitor labelling this an incident or hazard?
Deepfake fraud inherently involves AI systems that generate synthetic media to impersonate individuals, and the article reports a significant increase in such fraud. The resulting identity and financial fraud, enabled by AI-generated deepfakes and AI-enabled Fraud-as-a-Service platforms, constitutes realized harm to individuals and businesses rather than a potential or future risk, so the event qualifies as an AI Incident.
Digital Document Forgeries Overtake Physical Forgeries For the First Time As Deepfakes on the Rise

2024-11-20
The Fintech Times
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of generative AI and deepfakes by fraudsters to conduct digital document forgeries and synthetic identity attacks, which have directly led to financial fraud and scams across multiple industries. The harms are realized and ongoing, including fraud attempts every five minutes and a large increase in digital forgery surpassing physical forgery. The AI systems' use in these attacks is central to the harm, fulfilling the criteria for an AI Incident involving violations of rights and harm to communities and individuals through fraud and misinformation.
Sumsub: APAC Sees 121% Increase in Identity Fraud, with Deepfakes Becoming a Growing Threat

2024-11-21
IT News Online
Why's our monitor labelling this an incident or hazard?
The article explicitly identifies deepfake fraud, an AI-generated synthetic media technique, as a growing and realized threat contributing to identity fraud. The harms described, financial losses and identity theft, directly affect persons and businesses, and the use of AI systems to generate deepfakes is a direct factor in those harms. The event therefore qualifies as an AI Incident.
AI drives 15x growth in deepfake fraud across APAC

2024-11-20
Asian Banking & Finance
Why's our monitor labelling this an incident or hazard?
The article discusses the rapid increase in AI-driven deepfake fraud, which poses a credible and plausible risk of harm to individuals and businesses, and explicitly mentions the AI systems involved in both committing and combating fraud. However, it does not describe a concrete event in which harm has already occurred due to AI misuse; it outlines the threat landscape and the defensive measures being implemented. The situation therefore fits the definition of an AI Hazard, one that could plausibly lead to AI Incidents if not properly managed. It is not Complementary Information, because the focus is not on updates or responses to a past incident, and it is not Unrelated, because AI systems are central to the described risks and responses.
Crypto, Fintech Firms in APAC Face Growing Identity Fraud Risks

2024-11-20
Blockhead
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly through the use of deepfakes, which are AI-generated synthetic media used to impersonate individuals and commit identity fraud. The fraud has already occurred, causing harm to individuals and businesses in the crypto and fintech sectors, fulfilling the criteria for an AI Incident. The harm includes financial loss and violation of trust, which are significant harms to communities and individuals. Therefore, this event is classified as an AI Incident.
FinCEN Alert: Fraud schemes using generative artificial intelligence to circumvent financial institutions' identity verification, authentication, and due diligence controls

2024-11-20
Consumer Finance Monitor
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions generative AI systems being used to create deepfake media that enable fraudsters to bypass identity verification and perpetrate financial fraud. This constitutes direct harm to financial institutions and consumers, including violations of legal obligations and financial rights. Therefore, this event qualifies as an AI Incident because the AI system's use has directly led to realized harm through fraudulent activities.