AI-Generated Fake Injury Used in Attempted Nail Salon Fraud in South Korea


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

A Turkish national in South Korea used generative AI (ChatGPT) to manipulate photos and medical documents, falsely claiming injury from a nail salon procedure to extort money. The fraud attempt failed, but the incident highlights AI's role in enabling sophisticated deception and attempted financial harm.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly states that a generative AI system (ChatGPT) was used to create manipulated images and falsified medical documents to simulate injury, which was then used to attempt to extort money from a nail salon. This misuse of AI directly led to an attempted fraud (harm to property/business) and disruption of the nail salon's operations. Although the fraud was unsuccessful, the AI's role in enabling the deception and attempted harm is clear and direct, qualifying this as an AI Incident.[AI generated]
AI principles
Transparency & explainability
Accountability

Industries
Consumer services

Affected stakeholders
Business

Harm types
Economic/Property
Reputational

Severity
AI incident

AI system task
Content generation


Articles about this incident or hazard


"Side effects from the procedure"... Foreign national referred to prosecutors after manipulating photos with AI to extort money

2026-03-12
Yonhap News TV
Why's our monitor labelling this an incident or hazard?
The article states that ChatGPT was used to create manipulated images and falsified medical documents simulating an injury, which were then used in an attempt to extort money from a nail salon. Although the fraud failed, the AI's direct role in enabling the attempted harm qualifies this as an AI Incident.

"I bled after getting my nails done": Foreign national who demanded 400,000 won... the truth comes out

2026-03-12
MBN (Maeil Broadcasting Network)
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of a generative AI system (ChatGPT) to manipulate images and documents in order to simulate an injury and fraudulently demand money. This constitutes misuse of AI leading to attempted harm (fraud and financial loss). Although the harm was attempted rather than realized, the direct involvement of AI in the fraudulent scheme qualifies the event as an AI Incident.

"Side effects from the procedure"... Foreign national referred to prosecutors for manipulating photos with AI

2026-03-12
Yonhap News TV
Why's our monitor labelling this an incident or hazard?
An AI system (generative AI, ChatGPT) was explicitly used to manipulate images into false evidence of injury, which was then used in an attempt at financial extortion. This misuse directly led to attempted fraud and disruption of the salon's business, qualifying the event as an AI Incident under the definitions provided.

Foreign national demanded 400,000 won claiming "I bled after getting my nails done"... turned out to be a ChatGPT fabrication

2026-03-12
Asia Economy (아시아경제)
Why's our monitor labelling this an incident or hazard?
ChatGPT, a generative AI system, was explicitly used to manipulate images and documents to fabricate evidence for a fraudulent claim. Although the fraud was unsuccessful, the AI-generated content formed the basis of the attempted extortion, threatening the business's property interests and legal rights. The AI system's pivotal role in the attempted deception qualifies this event as an AI Incident.