AI-Generated Fake IDs Bypass Crypto Exchange KYC Checks

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

The AI service OnlyFake uses neural networks to generate highly realistic fake IDs, enabling users to bypass identity verification (KYC) on major cryptocurrency exchanges. This has facilitated fraud and undermined regulatory compliance, and it poses significant risks to financial security and anti-money laundering efforts.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly mentions the use of neural networks (an AI system) to generate fake IDs that are used to successfully bypass identity verification on a cryptocurrency exchange known for criminal use. This directly facilitates fraud and money laundering, which are harms to property and communities. The AI system's development and use are central to the harm described, meeting the criteria for an AI Incident.[AI generated]
AI principles
Accountability, Robustness & digital security, Safety, Respect of human rights

Industries
Financial and insurance services, Digital security

Affected stakeholders
Business, Government, General public

Harm types
Economic/Property, Reputational, Public interest

Severity
AI incident

AI system task
Content generation

In other databases

Articles about this incident or hazard

Inside the Underground Site Where 'Neural Networks' Churn Out Fake IDs

2024-02-05
404 Media
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of neural networks (an AI system) to generate fake IDs that are used to successfully bypass identity verification on a cryptocurrency exchange known for criminal use. This directly facilitates fraud and money laundering, which are harms to property and communities. The AI system's development and use are central to the harm described, meeting the criteria for an AI Incident.

AI-Generated Fake IDs Bypass Crypto Exchange KYC Checks, OKX Says Industry-Wide Issue

2024-02-06
Investing.com UK
Why's our monitor labelling this an incident or hazard?
The AI system is explicitly mentioned as generating counterfeit IDs that have been successfully used to bypass security checks, leading to fraudulent account creation on crypto exchanges. This directly results in harm by facilitating illegal activities, undermining compliance with financial regulations, and potentially enabling further criminal acts. Therefore, this qualifies as an AI Incident due to the realized harm caused by the AI system's use in fraud and security breaches.

AI-Generated Fake IDs Bypass Crypto Exchange KYC Checks, OKX Says Industry-Wide Issue

2024-02-06
NASDAQ Stock Market
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system generating counterfeit IDs that have been successfully used to bypass security checks on crypto exchanges, leading to fraudulent account creation. This directly results in harm by enabling illegal activities and undermining trust in financial systems. The AI system's use is central to the incident, fulfilling the criteria for an AI Incident due to realized harm involving violations of legal obligations and harm to property and communities. Therefore, this event is classified as an AI Incident.

People Are Using Basic AI to Bypass KYC -- But Should You? - Decrypt

2024-02-07
Decrypt
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI systems (GANs and diffusion models) to generate fake IDs that successfully bypass KYC verification, enabling fraudulent account openings and other illicit activities. This directly leads to harm in the form of financial fraud and breaches of AML/KYC regulations, which are legal obligations protecting fundamental rights and financial security. The involvement of AI in producing these fake documents is central to the incident, and the harm is realized, not just potential. Hence, this qualifies as an AI Incident under the OECD framework.

AI versus age-verification laws

2024-02-07
Reason
Why's our monitor labelling this an incident or hazard?
The AI system (OnlyFake) is explicitly mentioned as generating fake IDs that can be used to bypass age-verification laws online. This use directly leads to harm by enabling minors to access platforms they are legally restricted from, violating laws designed to protect them and potentially exposing them to harmful content. The article provides evidence of the AI system's capability and its practical use, indicating realized harm rather than hypothetical risk. Therefore, this event qualifies as an AI Incident due to the direct involvement of an AI system in causing a violation of legal protections and harm to communities (minors).

OnlyFake Website Pumps Out Hyper Realistic Images of Fake IDs

2024-02-05
PetaPixel
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI (neural networks and generators) to produce fake IDs that have been successfully used to bypass identity verification, indicating direct involvement of AI in causing harm. The harm includes violations of legal and fundamental rights through identity fraud, which is a clear AI Incident under the framework. The AI system's development and use have directly led to this harm, fulfilling the criteria for an AI Incident rather than a hazard or complementary information.

Coin Center Director of Research raises alarm over identity fraud via AI

2024-02-05
CryptoSlate
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (neural networks) used to create fake IDs that have been verified as effective in bypassing identity verification processes, directly enabling identity fraud. This is a clear case where the AI system's use has directly led to harm (identity fraud), fulfilling the criteria for an AI Incident. The harm is realized, not just potential, and involves violations of security and trust that impact individuals and financial institutions. Hence, the classification as AI Incident is appropriate.

OnlyFake, the deepfake site churning out sophisticated fake IDs

2024-02-06
ReadWrite
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of neural networks (an AI system) to produce counterfeit documents that have been successfully used to bypass identity verification on a major crypto exchange. This constitutes direct use of AI leading to realized harm, including fraud and security breaches, which fall under harm to communities and property. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to significant harm.

Crypto Exchanges Face Security Challenge With New AI Service

2024-02-06
Bitcoinist.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI (neural networks) to generate realistic fake IDs that successfully bypass crypto exchange verification systems, leading to fraudulent account creation. This directly results in harm to communities and property through scams and illicit activities. The AI system's role is pivotal in enabling these harms by lowering the cost and increasing the scale of fake ID production, which facilitates fraud. Therefore, this qualifies as an AI Incident due to realized harm caused by the AI system's use.

New Platform Enables Fraudulent KYC for Only $15, Targets Crypto Platforms: Report

2024-02-06
CryptoPotato
Why's our monitor labelling this an incident or hazard?
The platform OnlyFakes uses AI-generated fake IDs to bypass KYC checks, enabling fraudulent access to crypto and financial platforms. This directly leads to violations of legal obligations and potential financial harm to users and platforms. The AI system's involvement in producing these fake documents is central to the incident. Although the claim of AI use is disputed, the platform itself claims AI involvement, and the nature of the fake document generation aligns with AI capabilities. The harm is realized, not just potential, as multiple platforms have been breached using these documents. Hence, this is an AI Incident.

Generative AI can now create fake IDs

2024-02-08
MediaNama
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (neural networks) to create fake identification documents that have been demonstrated to bypass verification on a major platform. This use of AI has directly led to harm by enabling fraudulent activities and undermining security measures, which fits the definition of an AI Incident due to violations of legal protections and harm to communities and property through fraud and laundering risks.

OnlyFake: Underground Site Churns Out AI-Generated Fake IDs to Dupe Crypto Exchange KYC - Blockonomi

2024-02-06
Blockonomi
Why's our monitor labelling this an incident or hazard?
The article explicitly states that an AI system is used to create fake IDs that have been used to pass KYC verification at multiple crypto exchanges, which constitutes a violation of legal and regulatory frameworks protecting against fraud and financial crime. The harm is realized and ongoing, as these fake IDs enable identity fraud and regulatory bypass. Therefore, this qualifies as an AI Incident due to direct involvement of AI in causing harm through fraudulent identity verification.

AI-Driven Service Offers Fake IDs for Just $15, Raising Concerns Over Crypto Scammers and KYC

2024-02-08
cryptodaily.co.uk
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI neural networks and generators to create fake IDs that have successfully passed KYC checks on major crypto exchanges and financial platforms. This AI system's use directly leads to violations of legal frameworks and enables fraudulent activities, which are harms under the AI Incident definition (violations of law and harm to property/communities). The involvement of AI in generating these counterfeit documents and metadata spoofing is central to the incident. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information.

A big-box retail version of synthetic ID has shaken some | Biometric Update

2024-02-06
Biometric Update
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system generating synthetic IDs that have been used to defeat identity verification at a cryptocurrency exchange, indicating direct harm through fraud and violation of security protocols. This constitutes a violation of rights and harm to communities through enabling identity fraud. Therefore, this qualifies as an AI Incident due to the realized harm caused by the AI system's use in fraudulent activities.

How Do AI-Generated IDs Challenge Traditional Authentication Methods? | Cryptopolitan

2024-02-07
Cryptopolitan
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems to produce counterfeit identification documents that are sold and used to circumvent legal identity verification measures. This use of AI directly leads to violations of legal frameworks (AML and KYC policies) and facilitates illicit activities, which constitute harm to communities and breaches of law. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to significant harm and legal violations.

AI-generated counterfeit IDs challenge crypto exchanges

2024-02-06
Cryptopolitan
Why's our monitor labelling this an incident or hazard?
The article explicitly states that AI systems are used to generate counterfeit IDs that have been successfully used to bypass KYC checks on various crypto exchanges and financial platforms. This misuse of AI directly leads to harm by enabling fraudulent activities, identity deception, and evasion of security measures, which are violations of legal and security frameworks protecting users and financial institutions. The harm is realized and ongoing, as evidenced by successful bypasses and the sale of these fake IDs. Hence, this is an AI Incident due to direct harm caused by the AI system's use.

OnlyFake's AI IDs Bypass Crypto Security Checks - Altcoin Buzz

2024-02-07
Altcoin Buzz
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (OnlyFake) that generates fake IDs used to bypass security checks on crypto exchanges, directly enabling fraud and money laundering risks. This constitutes a violation of legal frameworks intended to prevent financial crimes and harms the integrity of digital financial communities. The AI system's use has directly led to these harms, qualifying the event as an AI Incident under the OECD framework.

AI creates fake IDs to bypass verification: Is it the end of KYC?

2024-02-06
Finbold
Why's our monitor labelling this an incident or hazard?
The AI system (OnlyFake) is explicitly used to create fake IDs that bypass identity verification, enabling illegal access to cryptocurrency markets. This misuse of AI directly leads to harm by facilitating fraud and undermining regulatory compliance, which fits the definition of an AI Incident due to violations of law and potential harm to communities. The article reports actual use and successful bypass, not just potential risk, confirming realized harm rather than a hazard or complementary information.

'Generated' fake IDs claimed to pass crypto exchange KYC are selling for $15

2024-02-06
TradingView
Why's our monitor labelling this an incident or hazard?
The AI system (OnlyFake) uses neural networks and generators to create fake driver's licenses and passports that have successfully passed KYC verification on major crypto exchanges and financial platforms. This use of AI directly leads to harm by enabling identity fraud, facilitating financial crimes, and undermining trust in critical financial infrastructure. The harm is realized and ongoing, as evidenced by user reports and media verification. Therefore, this event qualifies as an AI Incident due to the direct link between the AI system's outputs and violations of legal and regulatory protections (harm category c) and harm to the operation of financial infrastructure (harm category b).

CoinStats - AI-Generated Fake IDs Challenge Crypto Exchan...

2024-02-06
coinstats.app
Why's our monitor labelling this an incident or hazard?
OnlyFake's AI-generated fake IDs have been used to bypass KYC checks on multiple financial platforms, directly enabling fraudulent activities and violating legal protections, which fits the definition of an AI Incident involving harm to property and communities. The audio-jacking cyberattack method uses generative AI to manipulate live conversations for fraudulent purposes, posing direct harm to individuals and financial systems, also qualifying as an AI Incident. The Roblox translation model is a positive AI development without associated harm, thus classified as complementary information. The article's main focus on the fraudulent use of AI-generated IDs and AI-enabled cyberattacks justifies classifying the event primarily as AI Incidents with complementary information included.

OnlyFakes: A New Threat of AI-Generated Fake IDs Bypassing KYC

2024-02-07
News9live
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (OnlyFake) that generates counterfeit IDs with detailed personal information and realistic features, which have directly led to the successful bypass of identity verification at a cryptocurrency exchange. This constitutes a violation of security and potentially legal rights, causing harm to property and communities through fraud. The AI system's use has directly led to harm, qualifying this as an AI Incident.