
The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.
A Turkish national in South Korea used generative AI (ChatGPT) to fabricate manipulated photos and medical documents, falsely claiming an injury from a nail salon procedure in an attempt to extort money. The extortion attempt failed, but the incident highlights AI's role in enabling sophisticated deception and attempted financial harm.[AI generated]
Why is our monitor labelling this an incident or hazard?
The article explicitly states that a generative AI system (ChatGPT) was used to create manipulated images and falsified medical documents simulating an injury, which were then used in an attempt to extort money from a nail salon. This misuse of AI directly resulted in attempted fraud (harm to property and business) and disrupted the salon's operations. Although the fraud was unsuccessful, the AI system's direct and clear role in enabling the deception and attempted harm qualifies this event as an AI Incident.[AI generated]