AI Deepfakes Used to Bypass Aadhaar Security in Ahmedabad Loan Fraud

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

In Ahmedabad, four men were arrested for using AI tools, including Google Gemini, to create deepfake videos that bypassed Aadhaar biometric and OTP verification. This allowed them to change a businessman's Aadhaar-linked mobile number, open a bank account, and fraudulently secure a loan, highlighting vulnerabilities in India's digital identity systems.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event explicitly involves AI systems (Gemini AI and Meta AI) used to generate deepfake videos that directly facilitated identity theft and financial fraud. The AI's role was pivotal in bypassing biometric security measures, leading to realized harm including financial loss and violation of personal rights. The use of AI-generated deepfakes to deceive security systems and commit fraud fits the definition of an AI Incident as the AI system's use directly led to harm to persons and communities.[AI generated]
AI principles
Privacy & data governance
Robustness & digital security

Industries
Digital security
Government, security, and defence

Affected stakeholders
Consumers

Harm types
Economic/Property
Public interest

Severity
AI incident

AI system task
Content generation

In other databases

Articles about this incident or hazard

Gemini To Meta: How A Deepfake Gang Used AI To Hijack Identities In Gujarat

2026-04-29
NDTV
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (Gemini AI and Meta AI) used to generate deepfake videos that directly facilitated identity theft and financial fraud. The AI's role was pivotal in bypassing biometric security measures, leading to realized harm including financial loss and violation of personal rights. The use of AI-generated deepfakes to deceive security systems and commit fraud fits the definition of an AI Incident as the AI system's use directly led to harm to persons and communities.

No OTP, no problem: Ahmedabad gang uses AI deepfakes to bypass Aadhaar checks

2026-04-29
India Today
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of an AI system (deepfake generation via Google's Gemini AI) to manipulate biometric authentication systems, enabling unauthorized access and fraudulent financial transactions. The AI system's use directly caused harm to the victim through identity theft, financial loss, and breach of privacy, fulfilling the criteria for an AI Incident under the definitions provided. The AI's involvement lies in its use to commit fraud, producing realized harm.

Google Gemini Misused in AI Deepfake Loan Scam: Ahmedabad Police Expose Aadhaar OTP Bypass

2026-04-30
Republic World
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI tools, including Google Gemini, to create deepfake videos that bypassed biometric identity verification and OTP security measures, leading to unauthorized financial transactions and identity theft. This directly caused harm to the victim's property and personal rights. The AI system's misuse was pivotal in enabling the fraud, fulfilling the criteria for an AI Incident as the AI system's use directly led to harm (financial loss, identity theft, violation of privacy).

Ahmedabad Cyber Fraud Case: Aadhaar Manipulation, AI-Assisted Methods Used In ₹25,000 Loan Scam; 4 Arrested - The Logical Indian

2026-04-30
The Logical Indian
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI tools and deepfake videos to manipulate biometric authentication and digital verification processes, which directly facilitated the fraudulent loan scam. The harm includes financial loss to the victim, violation of personal identity rights, and exploitation of digital infrastructure. The AI system's misuse was a pivotal factor in the incident, fulfilling the criteria for an AI Incident as the AI's development and use directly led to realized harm.

Ahmedabad AI fraud: Gang uses deepfakes to hack Aadhaar, take loan without OTP

2026-04-30
News9live
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (deepfake generation via AI tools like Google Gemini) to directly cause harm by enabling fraud, identity theft, and unauthorized financial transactions. The AI system's use led to violations of rights (identity theft and unauthorized access to personal data), financial harm to the victim, and misuse of biometric authentication systems. Therefore, this qualifies as an AI Incident because the AI system's use directly led to realized harm.

How The Ahmedabad Gang Used AI To Bypass Aadhaar Authentication And Secure Loans

2026-04-30
thedailyjagran.com
Why's our monitor labelling this an incident or hazard?
The use of AI deepfake technology to manipulate biometric verification and commit fraud constitutes direct involvement of an AI system in causing harm. The fraudulent loan transaction and unauthorized changes to the victim's Aadhaar-linked mobile number represent violations of personal rights and financial harm. Therefore, this qualifies as an AI Incident.

No calls, no OTP requests! This AI scam can change your Aadhaar details without you realising

2026-05-03
Techlusive
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI tools to create realistic facial movement videos to trick biometric verification systems, enabling unauthorized changes to Aadhaar details and subsequent fraudulent activities. This AI involvement directly caused harm to individuals by facilitating identity theft and financial fraud, meeting the criteria for an AI Incident. The harm is realized and ongoing, not merely potential, and involves violations of rights and harm to property and communities.

New Aadhaar-linked AI scam! After 'digital arrest', fraudsters' most dangerous ploy now exposed

2026-05-03
hindi
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI tools to generate video clips that deceive biometric verification systems, which directly led to unauthorized access to sensitive accounts and financial loss. This constitutes harm to property and violation of rights due to AI system misuse. Therefore, this qualifies as an AI Incident because the AI system's use directly contributed to realized harm through cyber fraud.

Gujarat: High-tech Aadhaar card fraud exposed; deepfake videos and AI used

2026-05-02
India TV Hindi
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (deepfake video generation) used maliciously to bypass biometric security, leading to realized harm (financial fraud and identity theft). The AI system's use directly contributed to the incident, fulfilling the criteria for an AI Incident. The harm is materialized, not just potential, and involves violation of rights and harm to property (financial assets).

Businessman defrauded using AI video, even his Aadhaar fingerprint was changed; this one step can protect you

2026-04-30
LallanTop
Why's our monitor labelling this an incident or hazard?
The article explicitly states that AI-generated deepfake videos were used to bypass biometric Aadhaar verification, enabling criminals to change registered mobile numbers and open bank accounts fraudulently. This led to a direct financial harm to the victim, fulfilling the criteria for an AI Incident. The AI system's use was malicious and directly caused the harm, not just a potential risk. Therefore, this event qualifies as an AI Incident.

AI becomes fraudsters' tool! High-tech gang arrested for taking bank loans using deepfake identities

2026-04-29
AajTak
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI-generated deepfake videos to manipulate biometric authentication, which directly enabled the fraud. The AI system's involvement is clear and central to the harm caused, including financial loss and identity misuse. This fits the definition of an AI Incident because the AI system's use directly led to harm (financial fraud and violation of personal rights).

Beware! AI turns 'digital bandit': four arrested in Ahmedabad for breaching Aadhaar biometrics using Google Gemini AI

2026-04-30
Prabhasakshi
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of an AI system (Google Gemini AI) to create deepfake videos and bypass biometric authentication, which directly led to financial fraud and identity theft. The harms include violation of personal rights, unauthorized access to sensitive data, and financial loss. The AI system's role is central and pivotal in causing these harms, meeting the criteria for an AI Incident rather than a hazard or complementary information.

e-KYC done with an AI deepfake, then a loan taken after Aadhaar verification, money stolen without any OTP; police bust cyber fraud gang

2026-04-30
Good News Today
Why's our monitor labelling this an incident or hazard?
The incident involves the use of an AI system (the 'Gemini' AI tool) to generate deepfake videos for biometric verification, which directly facilitated unauthorized access to bank accounts and loans without OTP verification. This led to realized harm including financial theft and identity fraud, which are violations of rights and cause harm to individuals and communities. Therefore, this qualifies as an AI Incident because the AI system's use directly led to significant harm through cyber fraud.

Fake video of a businessman created with AI and a bank loan taken... shocking cyber fraud in Gujarat

2026-04-30
TV9 Bharatvarsh
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of an AI system (deepfake video generation) to commit biometric fraud, which directly caused harm by enabling unauthorized access to bank accounts and fraudulent loans. The harm includes financial loss and violation of personal identity rights, fitting the definition of an AI Incident. The AI system's use was malicious and central to the fraud, not merely a potential risk or background context, so it is not an AI Hazard or Complementary Information.

Gujarat: Dangerous game of high-tech swindling; fake face created with AI and used to commit fraud

2026-04-30
AajTak
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI and deepfake technology to create a realistic fake video that bypassed biometric authentication, which directly facilitated fraudulent activities causing harm to the victim's property and personal data. This meets the criteria for an AI Incident as the AI system's use directly led to violations of rights and financial harm.

AI's biggest deception in Ahmedabad! Loan worth lakhs taken in a businessman's name using a deepfake video

2026-04-30
Prabhasakshi
Why's our monitor labelling this an incident or hazard?
The incident involves the use of an AI system (deepfake technology) to create fraudulent videos that directly enabled the criminals to bypass security systems and commit financial fraud. This caused realized harm to the victim's property and violated personal rights. Therefore, it qualifies as an AI Incident because the AI system's use directly led to harm.

Scam without OTP: Bank accounts are being emptied without any OTP or call; have you made this mistake too? Be on guard immediately - CNBC Awaaz

2026-05-01
CNBC Awaaz
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI to create fake biometric data to bypass security in the AEPS payment system, resulting in unauthorized withdrawals from bank accounts. This constitutes direct harm to property (financial loss) caused by the misuse of an AI system. Therefore, it qualifies as an AI Incident under the framework.

Interstate Gang Used Deepfakes To Commit Aadhaar Loan Fraud, Busted

2026-05-07
NDTV
Why's our monitor labelling this an incident or hazard?
The article explicitly states that AI-generated deepfake videos were used to trick biometric 'Live Face' verification systems, enabling the gang to change mobile numbers linked to Aadhaar without victims receiving OTPs. This AI-enabled bypass directly facilitated identity theft and fraudulent loan acquisition, causing realized harm to individuals and financial institutions. The use of AI in this criminal activity meets the criteria for an AI Incident because the AI system's use directly led to harm (identity theft, financial fraud) and violations of rights (privacy, financial security).

How AI scammers used social media photos to hijack Aadhaar identities, steal loans

2026-05-08
India Today
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems used to create deepfake videos that mimic human facial movements to fool identity verification systems. This AI-enabled misuse directly led to financial harm through identity theft and fraudulent loans, fulfilling the criteria for an AI Incident. The harm includes violation of property rights and financial loss to individuals, and the AI system's role is pivotal in enabling the fraud. Therefore, this is classified as an AI Incident.

2 members of cybercrime gang held for taking loans fraudulently | Guwahati News

2026-05-09
The Times of India
Why's our monitor labelling this an incident or hazard?
The article explicitly states that AI tools were used to create deepfake videos to impersonate victims and manipulate biometric systems, enabling fraudulent loan acquisition. This use of AI directly caused harm to individuals by enabling financial fraud and identity theft. The involvement of AI in the fraudulent activity and the resulting realized harm meet the criteria for an AI Incident.

Social media photos to fake loans: Inside AI deepfake scam busted by Gujarat Police​

2026-05-08
Social News XYZ
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems generating deepfake videos to bypass biometric authentication, which directly caused harm by enabling identity theft and fraudulent loans. The use of AI to create realistic eye-blinking videos to fool Aadhaar verification systems is a clear example of AI misuse leading to violations of personal and financial rights and harm to victims. The harm is realized, not just potential, and the AI system's role is pivotal in the incident. Hence, it meets the criteria for an AI Incident.

Assam duo held in AI deepfake Aadhaar fraud linked to interstate loan racket

2026-05-09
The Assam Tribune
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI-generated deepfake videos to bypass biometric authentication, which is an AI system's use leading directly to identity theft and financial fraud. The harm includes violation of personal rights and financial loss to victims, fitting the definition of an AI Incident. The AI system's role is pivotal in enabling the fraud, and the harm is realized, not just potential. Hence, the event is classified as an AI Incident.

How AI deepfake scam used Aadhaar to take loans without victims knowing: Tips to stay safe

2026-05-09
Techlusive
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI tools (Gemini and Meta AI) to create deepfake videos that fooled biometric verification systems, enabling identity theft and unauthorized loan applications. This directly caused harm to individuals through financial fraud and violation of their identity rights. The AI system's role is pivotal in enabling the scam, fulfilling the criteria for an AI Incident under the definitions provided.