
The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.
In South Korea, scammers are using AI deepfake technology to impersonate financial experts and lure investors into illegal investment chat rooms. Victims are deceived into transferring funds through fake stock trading apps, resulting in financial losses. Authorities have issued consumer warnings and are increasing monitoring to combat these AI-driven frauds.[AI generated]
Why is our monitor labelling this an incident or hazard?
The article explicitly describes the use of AI deepfake technology by illegal actors to impersonate financial experts and deceive investors, leading to scams and financial losses. This is a direct use of an AI system causing harm to people, namely financial harm and a violation of their rights. The harm is realized, not merely potential: the scams are ongoing and authorities have issued consumer warnings. It therefore meets the criteria for an AI incident rather than a hazard or complementary information.[AI generated]