AI-Driven Identity Fraud via Deepfake and Synthetic Identities

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Cybersecurity experts warn that AI-driven deepfake technology and synthetic identities complicate detection and prevention of identity theft. Fraudsters leverage AI to create falsified images and videos that bypass financial verification, raising concerns about human rights violations and breaches of legal protections.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article details actual AI-enabled fraud techniques actively used in the financial sector, quantifies their prevalence and success rates, and highlights real harms (identity theft, financial loss). These meet the definition of an AI Incident, as development and malicious use of AI systems has directly led to harm.[AI generated]
AI principles
Privacy & data governance, Robustness & digital security, Respect of human rights, Accountability, Transparency & explainability, Safety

Industries
Financial and insurance services, Digital security

Affected stakeholders
Consumers, Business

Harm types
Economic/Property, Human or fundamental rights, Reputational, Psychological, Public interest

Severity
AI incident

Business function:
ICT management and information security, Compliance and justice

AI system task:
Content generation


Articles about this incident or hazard

Artificial intelligence and identity theft - What to watch out for

2025-02-27
Patras Events
Why's our monitor labelling this an incident or hazard?
The article details actual AI-enabled fraud techniques actively used in the financial sector, quantifies their prevalence and success rates, and highlights real harms (identity theft, financial loss). These meet the definition of an AI Incident, as development and malicious use of AI systems has directly led to harm.
43% of all fraud in the financial sector is now carried out via AI - The tactics being used

2025-02-25
Newsbeast
Why's our monitor labelling this an incident or hazard?
Cybercriminals’ use of AI for identity theft and financial fraud constitutes direct harm to individuals and institutions (violations of property and financial rights). The article describes actual, realized incidents of AI-enabled fraud, not merely potential risks or secondary updates, so it is an AI Incident.
Artificial intelligence and identity theft - Advice from an information-systems security company

2025-02-25
TheCaller
Why's our monitor labelling this an incident or hazard?
The piece describes ongoing, widespread misuse of AI by fraudsters rather than detailing a specific, singular incident or a narrowly defined hazard. Its main purpose is to give background on AI-driven identity-theft tactics and to offer recommendations for prevention, fitting the definition of complementary information rather than reporting a new incident or purely warning of a future risk.
Artificial intelligence and identity theft: A new threat is causing chaos

2025-02-25
Reporter
Why's our monitor labelling this an incident or hazard?
Criminals are actively using AI systems (deepfake generators, automated social‐engineering tools) to commit identity theft, leading to real financial and privacy harm for individuals and banks. This constitutes an AI Incident since the AI-enabled fraud is already occurring and causing violations of rights and financial losses.
Artificial intelligence and identity theft: The new threat causing chaos

2025-02-25
Ελεύθερος Τύπος
Why's our monitor labelling this an incident or hazard?
The piece reports on real-world use of AI by criminals to perpetrate identity theft and deepfake fraud—leading to financial loss and privacy violations for individuals—constituting direct, ongoing harm. Therefore, it is an AI Incident.
Artificial intelligence and identity theft - What to watch out for so you don't fall into the trap - iefimerida.gr

2025-02-25
iefimerida.gr
Why's our monitor labelling this an incident or hazard?
Cybercriminals are actively using generative AI and deepfake tools to perpetrate identity-theft scams and financial fraud, directly leading to harm. This fits the definition of an AI Incident, as these AI systems are a pivotal factor in realized illicit activities.
43% of all fraud in the financial sector is now carried out via AI - The tactics being used - Fibernews

2025-02-25
Fibernews - All digital news!
Why's our monitor labelling this an incident or hazard?
This piece is a summary of ESET’s broader research results and guidance on the rise of AI-driven financial fraud. It does not focus on a discrete event or an unrealized risk scenario, but rather on ecosystem-level statistics and best practices, making it complementary information.
Artificial intelligence and identity theft - The new threat to bank accounts | in.gr

2025-02-25
in.gr
Why's our monitor labelling this an incident or hazard?
The piece documents that AI-driven fraud now accounts for over 43% of financial-sector fraud attempts—with nearly 29% succeeding—highlighting ongoing, realized harms (unauthorized transactions, PII misuse) directly enabled by AI/deepfake technologies. Thus it reports a series of AI-related harms and qualifies as an AI Incident.
Artificial intelligence and identity theft. The new threat to bank accounts

2025-02-25
cyprustimes.com
Why's our monitor labelling this an incident or hazard?
This is an AI Incident because it documents actual, realized harms directly driven by AI systems and techniques used by criminals to steal identities and defraud bank accounts. The deepfake‐enabled bypass of KYC checks, large‐scale synthetic identity creation, and AI‐powered credential stuffing are all reported as current, successful fraud methods. These events constitute direct violations of property rights and cause emotional and economic harm.
AI-enabled fraud is a "scourge" - What to watch out for to avoid the worst - Ecozen

2025-02-25
Ecozen
Why's our monitor labelling this an incident or hazard?
It describes actual instances of harm where AI systems are leveraged to bypass biometric checks, forge documents, create synthetic identities, and automate attacks—resulting in victims’ financial loss and privacy violations. These constitute realized harms directly linked to AI use, fitting the definition of an AI Incident.
Artificial intelligence and identity theft - Advice from ESET

2025-02-25
Liberal.gr
Why's our monitor labelling this an incident or hazard?
The piece focuses on presenting ESET’s estimates of AI‐based fraud rates, describes general tactics used by criminals, and outlines protective measures. There is no single harm event or detailed warning about a new hazard; it mainly offers broader contextual information and recommendations, fitting the definition of Complementary Information.
AI-based scams are increasingly hard for authorities to detect. The warning from a major cybersecurity company

2025-03-09
Ziare.com
Why's our monitor labelling this an incident or hazard?
The piece aggregates data on ongoing AI‐based scams and issues general recommendations. It does not describe a concrete AI failure or event causing harm (AI Incident) nor a new plausible single hazard scenario. Instead, it offers context on the evolving threat landscape, fitting the definition of Complementary Information.
Deepfake fraud and synthetic identities: A new era of identity theft, warn Eset experts

2025-03-09
Forbes Romania
Why's our monitor labelling this an incident or hazard?
Cybercriminals are actively using AI systems (deepfake generators, generative AI for synthetic identities, credential-stuffing tools) to defraud individuals and financial institutions, causing realized harms. The misuse of these AI capabilities directly contributes to identity theft and financial loss, qualifying it as an AI Incident.
AI-based scams and deepfakes complicate the detection of identity theft, experts warn

2025-03-09
ZIUA de Constanta
Why's our monitor labelling this an incident or hazard?
Criminals are employing AI systems (deepfake and generative AI) to fabricate user images and videos, directly causing identity theft and financial harm. This is an actual, realized harm facilitated by AI, fitting the definition of an AI Incident.
Deepfakes and digital fraud. Artificial intelligence threatens online identity security - Evenimentul Zilei

2025-03-10
Evenimentul Zilei
Why's our monitor labelling this an incident or hazard?
The report details how generative AI systems are being used to produce falsified images, videos, and documents that have already led to financial losses and breaches of security. This constitutes a series of AI-related harms (identity theft, fraud) directly caused by AI use, fitting the definition of an AI Incident.
Identity theft is increasingly hard to detect because of deepfakes and AI-based fraud, experts warn

2025-03-09
Economedia.ro
Why's our monitor labelling this an incident or hazard?
The article is a forward-looking, expert analysis and warning about how AI-enabled fraud schemes (deepfakes, synthetic identities, credential stuffing) can facilitate identity theft at scale. It summarizes trends and statistics but does not describe a specific incident or remediation effort. Instead, it outlines a credible risk of harm, fitting the definition of an AI Hazard.
Identity theft increasingly hard to detect because of deepfakes and AI-based fraud (experts)

2025-03-09
G4Media.ro
Why's our monitor labelling this an incident or hazard?
No discrete incident or novel AI system causing a concrete harm is described; rather, the piece aggregates data on ongoing AI‐based fraud trends and recommends countermeasures. Its primary purpose is to inform and contextualize—matching the definition of Complementary Information.
Identity theft increasingly hard to detect because of deepfakes and AI-based fraud (experts) - Economica.net

2025-03-09
Economica.net
Why's our monitor labelling this an incident or hazard?
This item does not describe a specific AI‐caused harm incident, nor a single event; rather, it presents industry findings and contextual details about the evolving landscape of AI‐enabled fraud and defenses. It offers supporting data and guidance, fitting the definition of complementary information.
Identity theft increasingly hard to detect because of deepfakes and AI-based fraud (experts)

2025-03-09
agerpres.ro
Why's our monitor labelling this an incident or hazard?
The piece does not describe a single discrete incident or a newly discovered vulnerability, nor does it focus on a specific near-miss or warning about one particular AI system’s future risk. Instead, it provides aggregate data, expert analysis, and advice on combating ongoing AI-enabled fraud, making it complementary information to understand the broader AI-related threats landscape.
Make sure you are not the next victim

2025-03-09
Cotidianul RO
Why's our monitor labelling this an incident or hazard?
The piece details multiple realized harms directly enabled by AI systems (deepfake generators, generative tools, automated credential stuffing) that lead to identity fraud and financial loss. Because it reports actual incidents where AI has caused harm, it qualifies as an AI Incident rather than a hypothetical risk or complementary update.
CFTC Warns Of Extremely Convincing AI-Driven Investment Scams - FinanceFeeds

2025-03-20
FinanceFeeds
Why's our monitor labelling this an incident or hazard?
Generative AI is being misused by criminals to carry out investment frauds (deepfake profiles, videos, websites) leading to actual financial harm. This constitutes an AI Incident because the AI system’s malicious use has directly resulted in consumer losses.
The Age of Artificial Deception: AI Fraud Is Fooling Investors, CFTC Warns

2025-03-19
Financial and Business News | Finance Magnates
Why's our monitor labelling this an incident or hazard?
The piece centers on a regulatory/governance response—an official advisory from the CFTC’s Office of Customer Education and Outreach—aimed at educating the public on existing AI-enabled scams. It does not detail a specific new incident or describe a narrowly focused AI hazard but instead offers broader awareness and prevention guidance, fitting the definition of Complementary Information.
CFTC warns of rising AI-facilitated financial frauds

2025-03-20
FinTech Global
Why's our monitor labelling this an incident or hazard?
This is primarily a governance/public-awareness update (a regulatory advisory) about the threat of AI-facilitated fraud. It outlines how criminals could (and are beginning to) misuse generative AI and offers steps to mitigate risk, but does not document a particular AI-driven fraud case or map concrete harms. Therefore it is best classified as complementary information.
CFTC: Generative AI is making it easier for fraudsters to fool the public

2025-03-19
Finextra Research
Why's our monitor labelling this an incident or hazard?
The article describes ongoing misuse of generative AI by fraudsters to defraud individuals (e.g. fake IDs, spoofed video chats, malicious trading sites), causing direct financial harm. This qualifies as an AI Incident because the AI system’s use has directly led to harm.
CFTC: Generative AI Is Making It Easier For Fraudsters To Fool The Public

2025-03-19
mondovisione.com
Why's our monitor labelling this an incident or hazard?
This is a governance/societal response—an educational advisory detailing existing AI-enabled fraud and recommending defenses. While it describes real harms caused by AI-enabled scams, the primary focus is the CFTC’s advisory rather than reporting a discrete incident or identifying a novel hazard. Therefore, it falls under Complementary Information.
The New Face of Identity Theft in 2025

2025-03-18
PCMag UK
Why's our monitor labelling this an incident or hazard?
The piece is primarily educational and advisory, detailing general trends in AI-enabled identity theft and recommending protective steps. It does not focus on a specific realized incident or a narrowly scoped hazard event, but rather contextualizes the broader AI threat landscape and mitigation strategies. Therefore, it constitutes Complementary Information.
Deepfake fraud and artificial intelligence - The new frontier of identity theft and online scams

2025-02-16
ZIUA de Constanta
Why's our monitor labelling this an incident or hazard?
The misuse of AI systems (deepfake generators and injection attacks) is explicitly linked to ongoing identity fraud and financial loss. This constitutes a realized harm caused by the AI’s outputs, meeting the definition of an AI Incident.
Cybersecurity warning: AI-based fraud makes identity theft harder to detect and prevent - Economica.net

2025-02-16
Economica.net
Why's our monitor labelling this an incident or hazard?
It describes active harms (identity theft, fraud) directly enabled by AI systems—deepfake generation, AI-driven credential attacks—leading to financial and property harm. These are realized incidents, not mere forecasts or policy updates.
Cybersecurity warning: AI-based fraud makes identity theft harder to detect and prevent

2025-02-16
agerpres.ro
Why's our monitor labelling this an incident or hazard?
The piece reports that AI-enabled scams are actively occurring (43% of fraud attempts use AI, with 29% successful), causing harm to individuals and financial institutions. Because it details actual misuse of AI systems leading to violations of property and rights, it qualifies as an AI Incident.
Deepfake and AI fraud make identity theft increasingly hard to detect, warn Eset experts

2025-02-16
Forbes Romania
Why's our monitor labelling this an incident or hazard?
This is a case of realized harm: AI systems (deepfake generators, synthetic-identity tools, generative audio) are being used in actual identity-theft and extortion schemes, directly causing financial loss and rights violations. Though the article aggregates multiple incidents rather than describing one discrete event, the AI misuse and resulting harms are ongoing and materialized, so it qualifies as an AI Incident.
Cybersecurity: AI-based fraud makes identity theft harder to detect and prevent

2025-02-16
Profit.ro
Why's our monitor labelling this an incident or hazard?
The piece reports on realized harms (identity theft, financial fraud) directly enabled by AI systems (deepfake technology, synthetic biometric spoofing) and provides data on the volume and success rates of these attacks. Because the AI involvement has directly led to ongoing fraudulent incidents, this qualifies as an AI Incident.
The hackers' new trick to make identity theft harder to detect and prevent. What they use to create falsified images and videos of real users

2025-02-16
Realitatea.NET
Why's our monitor labelling this an incident or hazard?
The article describes real-world misuse of AI systems—specifically generative deepfake tools and synthetic identity techniques—that have directly enabled identity theft, fraudulent account openings, and financial scams. The harms (financial loss, identity theft) are occurring now, so this qualifies as an AI Incident.
Cybersecurity specialists: AI-based fraud makes identity theft harder to detect and prevent - Stiripesurse.md

2025-02-16
Stiripesurse.md
Why's our monitor labelling this an incident or hazard?
Criminals are actively using AI-powered deepfake and generative tools to bypass biometric checks, automate credential stuffing, and commit identity fraud. The article cites statistics showing these AI-driven attacks are occurring now and causing measurable harm, so this qualifies as an AI Incident.
Cybersecurity specialists: AI-based fraud makes identity theft harder to detect and prevent

2025-02-16
Știrile ProTV
Why's our monitor labelling this an incident or hazard?
The piece reports that AI systems (deepfake generators, synthetic‐identity tools, credential‐stuffing algorithms and generative models) are already being used to perpetrate identity‐theft and financial fraud (with quantified success rates), causing actual harm. AI’s involvement in these crimes is direct and pivotal, making this an AI Incident.