YG Entertainment Takes Legal Action Against Deepfake Content

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

YG Entertainment is taking legal action against the creation and distribution of explicit deepfake content involving their artists, including BLACKPINK and BABYMONSTER. The agency is actively monitoring and removing such AI-generated content, pursuing criminal proceedings to protect the rights and dignity of their artists amidst a deepfake crisis in South Korea.[AI generated]

Why's our monitor labelling this an incident or hazard?

AI-generated deepfake porn is actively circulating, causing real harm by breaching the artists’ human and privacy rights. Management companies are pursuing legal action, confirming that these AI-enabled videos have already resulted in rights violations and reputational damage. This constitutes an AI Incident under violations of human rights and privacy.[AI generated]
AI principles
Respect of human rights; Privacy & data governance; Safety; Accountability; Robustness & digital security

Industries
Media, social platforms, and marketing; Digital security

Affected stakeholders
Women

Harm types
Reputational; Psychological; Human or fundamental rights; Economic/Property

Severity
AI incident

AI system task
Content generation


Articles about this incident or hazard

YG Entertainment pursuing legal action vs. deepfake content involving its artists

2024-09-02
GMA Network
Why's our monitor labelling this an incident or hazard?
While the article references harmful deepfake content (an AI misuse causing violation of artists’ rights), its primary focus is on the company’s legal and remediation measures—monitoring, removal, blocking, and criminal proceedings—and governmental investigations. This makes it an update on responses to previously existing AI harms rather than reporting a new incident or outlining potential future hazards.
Deepfake mayhem: Squid Game 2 actor Park Gyu Young is the latest target of disturbing South Korean porn scandal

2024-09-04
Hindustan Times
Why's our monitor labelling this an incident or hazard?
AI-generated deepfake porn is actively circulating, causing real harm by breaching the artists’ human and privacy rights. Management companies are pursuing legal action, confirming that these AI-enabled videos have already resulted in rights violations and reputational damage. This constitutes an AI Incident under violations of human rights and privacy.
Squid Game 2 Actor Park Gyu-young Falls Victim To South Korea's Growing Deepfake Porn Scandal - News18

2024-09-05
News18
Why's our monitor labelling this an incident or hazard?
This scandal involves the malicious use of AI systems to superimpose celebrities’ faces onto pornographic content without consent, causing reputational and privacy harms. The harms are realized and directly stem from AI-generated deepfakes, meeting the definition of an AI Incident.
YG Entertainment Threatens Legal Action Against Inappropriate Deepfake Videos Of Blackpink And BABYMONSTER

2024-09-02
TimesNow
Why's our monitor labelling this an incident or hazard?
The event involves actual AI-generated deepfakes (deep learning–based content generation) that have been distributed, causing reputational and privacy harms. This misuse of AI has directly led to rights violations. The label’s legal response does not transform the nature of the event—it remains an AI Incident.
"We are seriously concerned": YG Entertainment initiates legal proceedings against the creation and dissemination of explicit deepfakes of their artists

2024-09-02
Sportskeeda
Why's our monitor labelling this an incident or hazard?
Deepfake pornography is generated using AI-based synthetic video technology and is directly causing reputational, dignity, and human-rights harms to the artists. The misuse of AI to produce and disseminate explicit content constitutes a realized harm. YG Entertainment’s legal proceedings are a response to an ongoing wrongful use of AI, making this an AI incident.
YG Entertainment to crack down on deepfake content featuring its stars

2024-09-03
Philstar.com
Why's our monitor labelling this an incident or hazard?
While deepfake content constitutes an AI-powered harm (sexual and reputational harm via AI-generated synthetic media), the piece’s primary focus is on YG’s governance response—legal measures, content blocking, and monitoring—rather than detailing a new incident or warning of future risk. This aligns with ‘Complementary Information.’
Bini talent agency warns of legal action against deepfake content creators

2024-09-03
The Manila Times
Why's our monitor labelling this an incident or hazard?
The event explicitly involves deepfake content, which is AI-generated synthetic media, being used maliciously against artists. This constitutes a violation of rights and a harm to the individuals and their communities. The AI system's use (deepfake generation) has directly led to harm, qualifying this as an AI Incident. The legal actions and monitoring are responses to the incident rather than its main focus, so this is not Complementary Information.