AI-Generated Fake Bank Cheque Sparks Fraud Concerns in India

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

A viral social media post showed a hyper-realistic UCO Bank cheque created using ChatGPT Image 2.0, raising widespread alarm about the potential for AI-generated images to facilitate financial fraud. While no actual harm occurred, the incident highlights growing risks of AI misuse in creating convincing forged documents.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event explicitly involves an AI system generating a fake bank cheque image with high fidelity, which could be used to deceive individuals or facilitate fraud. While current banking systems may detect such fakes, the risk remains significant in contexts lacking robust verification, such as private transactions or social engineering scams. No direct harm has been reported yet, so it is not an AI Incident. The focus is on the potential for harm due to the AI system's capabilities and the demonstrated ability to bypass safety protocols, fitting the definition of an AI Hazard.[AI generated]
AI principles
Robustness & digital security
Accountability

Industries
Financial and insurance services

Affected stakeholders
Business
Consumers

Harm types
Economic/Property
Reputational

Severity
AI hazard

AI system task
Content generation


Articles about this incident or hazard

UCO Bank Fake Cheque: ChatGPT Image 2.0-Generated Bank Cheque Goes Viral on Social Media, Sparks Fraud Concerns

2026-04-23
LatestLY
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system generating a fake bank cheque image with high fidelity, which could be used to deceive individuals or facilitate fraud. While current banking systems may detect such fakes, the risk remains significant in contexts lacking robust verification, such as private transactions or social engineering scams. No direct harm has been reported yet, so it is not an AI Incident. The focus is on the potential for harm due to the AI system's capabilities and the demonstrated ability to bypass safety protocols, fitting the definition of an AI Hazard.

Viral Post | AI-Generated ₹69,000 Cheque Sparks Fresh Fears of Digital Fraud

2026-04-23
Asianet News Network Pvt Ltd
Why's our monitor labelling this an incident or hazard?
An AI system (image generation via ChatGPT Image 2.0) was used to create a realistic bank cheque image. While the image itself is AI-generated, the event does not describe any realized harm such as actual financial fraud or legal violations. Instead, it highlights concerns and debates about the potential misuse of AI-generated images for fraud, which constitutes a plausible future risk. Therefore, this event fits the definition of an AI Hazard, as it involves an AI system whose use could plausibly lead to an AI Incident (financial fraud) but has not yet done so.

AI-generated pic of cheque for ₹69,000 sparks fraud concerns: 'We are so cooked'

2026-04-23
Hindustan Times
Why's our monitor labelling this an incident or hazard?
An AI system (AI image generation) is explicitly involved in creating a fake cheque image. The event centers on concerns that such AI-generated images could be misused for fraud, which could plausibly lead to financial harm or deception. Since no actual fraud or harm has been reported, and the focus is on a plausible future risk, this fits the definition of an AI Hazard rather than an AI Incident. The discussion of how difficult such images would be to use in practice, and the absence of actual incidents, further support this classification.
ChatGPT-Made Rs 69,000 Cheque Goes Viral, Sparks Fraud Fears: 'We Are So Cooked'

2026-04-23
News18
Why's our monitor labelling this an incident or hazard?
An AI system (ChatGPT Image 2.0) was used to generate a realistic image of a cheque, a clear instance of AI involvement in content generation. Although no actual fraud or harm has occurred yet, the event raises credible concerns that such AI-generated images could be used to commit fraud or deceive others. The article reports no realized harm but focuses on the plausible future risk of AI-generated fake documents being used maliciously. This event therefore qualifies as an AI Hazard, because it could plausibly lead to an AI Incident involving fraud or financial harm if such images were misused.

'We are finished': ChatGPT's UCO Bank cheque goes viral, sparks fraud fears around new AI image model

2026-04-23
mint
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of an AI system (ChatGPT Images 2.0) to generate a fake bank cheque that looks highly realistic, including details like account numbers and signatures. While no actual fraud or harm has been confirmed, the detailed depiction and user concerns about the potential for such AI-generated images to be used maliciously indicate a credible risk of future harm. The AI system's development and use in generating such images could plausibly lead to incidents of financial fraud, which would constitute violations of law and harm to property or individuals. Since the harm is not yet realized but is a credible risk, the event fits the definition of an AI Hazard rather than an AI Incident or Complementary Information.

UCO Bank Fake Cheque: ChatGPT Images 2.0-Generated Bank Cheque Goes Viral on Social Media, Sparks Fraud Concerns

2026-04-23
LatestLY
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system to generate a fake bank cheque image that could plausibly lead to financial fraud or social engineering scams. While no direct harm has been reported, the viral nature of the image and the discussion about bypassing platform restrictions indicate a credible risk of misuse. The AI system's involvement in producing realistic fraudulent content that could deceive people in private transactions or online scams fits the definition of an AI Hazard, as it could plausibly lead to harm (fraud, financial loss, violation of rights). There is no indication that harm has already occurred, so it is not an AI Incident. The article is not merely complementary information or unrelated news, as it focuses on the potential risks posed by the AI-generated fake cheque.

'We Are So Cooked': AI-Generated UCO Bank Cheque Goes Viral

2026-04-23
NDTV Profit
Why's our monitor labelling this an incident or hazard?
The AI system (image generation AI) was used to create a fake bank cheque image that looks very realistic, which could plausibly lead to fraud or other harms related to financial crime or legal violations. Although no direct harm or incident has been reported yet, the event highlights a credible potential for harm due to the AI's capability to generate convincing fake financial documents. This fits the definition of an AI Hazard, as the development and use of the AI system could plausibly lead to an AI Incident involving harm to property or violations of law.

Fake Rs 69,000 cheque made with AI goes viral, netizens say 'Photoshop did this 10 years ago'

2026-04-24
News9live
Why's our monitor labelling this an incident or hazard?
The AI system was used to create a forged cheque image, which could plausibly lead to financial fraud or other harms if used maliciously. However, the article does not describe any actual fraud or harm resulting from this AI-generated image. The concerns expressed are about potential misuse and future risks, not a realized incident. Hence, this qualifies as an AI Hazard, reflecting a credible risk of harm from AI-generated forged documents, but not an AI Incident since no harm has occurred.