IU Takes Legal Action Against Deepfake Harassment

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Korean singer IU, represented by EDAM, has filed legal complaints against over 180 individuals for severe online harassment, including threats, defamation, and the creation and distribution of AI-generated deepfake content. These actions are considered criminal, with some cases already in court, highlighting the misuse of AI technology in cyberbullying.[AI generated]

Why's our monitor labelling this an incident or hazard?

Deepfake creation and distribution relies on AI systems. The harms—threats, defamation, privacy violations, sexual harassment via AI-generated imagery—have already occurred. This is a concrete case of AI-enabled harm resulting in legal action, fitting the definition of an AI Incident.[AI generated]
AI principles
Accountability; Privacy & data governance; Respect of human rights; Robustness & digital security; Safety; Transparency & explainability; Human wellbeing

Industries
Media, social platforms, and marketing; Arts, entertainment, and recreation; Digital security

Affected stakeholders
Women

Harm types
Reputational; Psychological; Human or fundamental rights

Severity
AI incident

AI system task
Content generation


Articles about this incident or hazard

Korean soloist IU sues ex-classmate and 180 others over online harassment, deepfakes, and slander

2024-11-12
Hindustan Times
IU Sues 180 Individuals For Online Harassment, Including Former Classmate

2024-11-12
TimesNow
Why's our monitor labelling this an incident or hazard?
Individuals used AI-generated deepfakes and other AI-facilitated harassment to threaten and defame the artist, causing realized harm to her rights and reputation. This is a case of direct malicious use of an AI system resulting in personal and reputational harm, qualifying as an AI incident.
IU files online harassment lawsuit against former classmate

2024-11-12
The Independent
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the creation and spread of deepfake content, which is generated by AI systems. This AI-generated content has been used maliciously to harass IU, causing harm such as defamation, privacy violations, and threats. Since the harm is realized and the AI system's involvement is direct in producing harmful content, this qualifies as an AI Incident under the framework.
K-pop Star IU Sues 180 People, Including Ex-Classmate, Over Online Abuse And Deepfake Content

2024-11-12
english
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the creation and distribution of illegal deepfake content, which is generated using AI systems. This content has been used to harass and defame IU, causing harm to her reputation and privacy, which falls under violations of human rights and harm to the individual. Since the harm is realized and legal actions are underway, this qualifies as an AI Incident. The involvement of AI is clear through the use of deepfake technology, and the harm is direct and significant.