JYP Entertainment Takes Legal Action Against TWICE Deepfake Videos


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

JYP Entertainment, representing K-pop group TWICE, is pursuing strong legal action against creators of AI-generated deepfake videos depicting its artists. The company is collecting evidence to address this violation of rights and law, emphasizing its commitment to protect its artists from unauthorized and harmful representations.[AI generated]

Why's our monitor labelling this an incident or hazard?

The deepfake videos constitute realized harm—privacy and reputational violation—and directly involve an AI system (deepfake generation). This misuse of an AI system has already occurred and is causing harm to the artists, qualifying it as an AI Incident.[AI generated]
AI principles
Accountability; Privacy & data governance; Respect of human rights; Transparency & explainability; Human wellbeing; Safety

Industries
Media, social platforms, and marketing; Arts, entertainment, and recreation

Affected stakeholders
Women; Business

Harm types
Reputational; Psychological; Human or fundamental rights; Economic/Property

Severity
AI incident

AI system task
Content generation


Articles about this incident or hazard


JYP Entertainment to take legal action against deepfake videos targeting TWICE | Times of India

2024-08-31
The Times of India
Why's our monitor labelling this an incident or hazard?
The deepfake videos constitute realized harm—privacy and reputational violation—and directly involve an AI system (deepfake generation). This misuse of an AI system has already occurred and is causing harm to the artists, qualifying it as an AI Incident.

JYP Entertainment takes a stand against deepfake exploitation of TWICE; warns legal action - read full statement

2024-08-31
MoneyControl
Why's our monitor labelling this an incident or hazard?
The article reports an actual incident in which deepfake technology (an AI system) was used to produce and disseminate pornographic content without consent. This misuse of AI has directly harmed the artists by infringing on their privacy and personal rights, meeting the criteria for an AI Incident.

JYP, TWICE's agency, cracks down on deepfakes amid concerns

2024-09-01
Inquirer
Why's our monitor labelling this an incident or hazard?
The article focuses on JYP’s pledge to pursue legal action and evidence gathering against deepfake content and situates this within broader industry responses (e.g., Woollim, ADOR). It does not describe a new AI-driven harm event but rather outlines a policy and legal response to previously occurring deepfake incidents, fitting the definition of Complementary Information.

BINI's agency vows to take action vs inappropriate deepfakes

2024-09-02
Inquirer
Why's our monitor labelling this an incident or hazard?
Deepfake creation involves AI-generated content that has directly harmed the group through non-consensual sexual imagery and harassment, violating their rights and personal safety. This is a realized harm caused by misuse of AI, so it qualifies as an AI Incident rather than a potential hazard or mere contextual update.

K-pop agency vows 'strongest legal action' against deepfake videos - VnExpress International

2024-08-31
VnExpress International
Why's our monitor labelling this an incident or hazard?
Deepfake technology (an AI system) has already been used to create and distribute pornographic videos without consent, causing real harm to victims. This is a direct AI-driven incident involving rights violations and personal harm.

JYP warns legal action vs deepfake videos of TWICE

2024-08-31
Rappler
Why's our monitor labelling this an incident or hazard?
The article describes the creation and distribution of AI-generated deepfake videos of TWICE members, which is an explicit case of an AI system (deepfake generator) directly causing harm by violating the artists’ rights. This meets the criteria for an AI Incident under rights violations.

ABS-CBN's Star Magic to take legal action vs BINI deepfakes

2024-09-02
Rappler
Why's our monitor labelling this an incident or hazard?
The article describes actual deepfake (AI-generated) content that has circulated on social media and Telegram groups, causing violation of the members' rights, harassment, and exploitation. The AI system's outputs have directly led to harm, and legal steps are being taken against those responsible. This fits the definition of an AI Incident.

K-pop agencies declare war on deepfake porn using artists' faces

2024-09-01
중앙일보 (JoongAng Ilbo)
Why's our monitor labelling this an incident or hazard?
Deepfake pornography is being actively generated and spread using AI-based face-swapping and video-fabrication tools, directly harming K-pop artists by violating their privacy and rights. This is a clear case of misuse of AI leading to actual harm (sexual rights/privacy violations), so it qualifies as an AI Incident.

TWICE's Label JYP Entertainment Announces Legal Action After Deepfake Videos Of K-pop Idols Go Viral

2024-08-30
TimesNow
Why's our monitor labelling this an incident or hazard?
The viral AI-generated deepfakes constitute a direct harm—violating the artists’ privacy and rights (a breach of human and intellectual property rights). The content has already been spread, so this is an AI Incident rather than a future hazard or mere complementary update.

K-pop agency vows 'strongest legal action' against deepfake videos

2024-08-31
The Straits Times
Why's our monitor labelling this an incident or hazard?
The article describes non-consensual, AI-generated sexual videos (deepfake porn) being shared in Telegram chat rooms, a direct violation of victims’ rights and serious harm (including to minors). The AI system’s use here has directly led to wrongdoing and victimization, constituting an AI Incident.

TWICE's label threatens "the strongest legal action" against deepfake videos of the group

2024-09-02
NME
Why's our monitor labelling this an incident or hazard?
The piece centers on JYP’s intent to pursue legal action and on South Korean police measures following widespread AI-generated deepfake sexual content. It does not detail a new, discrete AI-caused harm event or a near miss; rather, it documents responses—legal, investigative, and policy—to harms already occurring. This aligns with Complementary Information, as it provides context and updates on actions addressing prior AI-related violations.

K-pop agency vows 'strongest legal action' against deepfake videos

2024-08-31
GULF NEWS
Why's our monitor labelling this an incident or hazard?
Deepfake pornography is produced and distributed using AI, directly violating individuals’ rights and causing psychological and reputational harm. The deepfake content is a realized misuse of AI, meeting the criteria for an AI Incident (violation of human rights).

JYP Entertainment to pursue legal action against deepfakes and AI-generated TWICE content - Bollywood Hungama

2024-08-31
Bollywood Hungama
Why's our monitor labelling this an incident or hazard?
The article describes the creation and distribution of AI‐generated deepfake content that directly infringes on the artists’ legal and personal rights (a human rights violation). This misuse of AI technology has already occurred and caused harm, fitting the definition of an AI Incident.

JYP Entertainment vows 'strongest legal action' vs deepfake videos

2024-09-01
Philstar.com
Why's our monitor labelling this an incident or hazard?
The article describes the harm caused by AI-generated deepfake pornographic videos depicting K-pop artists and minors, which constitutes direct violation of personal rights and mental well-being. This is a realized harm enabled by an AI system’s ability to generate realistic non-consensual sexual content.

Agency warns about deepfake videos of K-pop group TWICE

2024-08-31
Manila Bulletin
Why's our monitor labelling this an incident or hazard?
Deepfake videos have been produced and disseminated, leading to real violations of the artists’ rights and prompting law enforcement investigations and planned legal action. This is a clear case of realized harm (privacy, reputational, and sexual exploitation) directly caused by an AI system (deepfake generation).

TWICE's agency cracks down on deepfake videos amid rising industry concerns | Yonhap News Agency

2024-08-31
Yonhap News Agency
Why's our monitor labelling this an incident or hazard?
Deepfake videos are generated by AI systems that synthesize realistic but fake visual content. The article reports that these deepfake videos sexually exploit artists, violating their rights and causing distress. The harm is realized and ongoing, and the AI system's use is central to the incident. Hence, this is an AI Incident involving violations of human rights and harm to individuals.

TWICE's agency cracks down on deepfake videos amid rising industry concerns

2024-08-31
The Korea Times
Why's our monitor labelling this an incident or hazard?
Deepfake videos are generated by AI systems that synthesize realistic but fake visual content. The creation and spread of sexually exploitative deepfake videos directly harm the individuals depicted, violating their rights and causing distress. The article reports that such videos have been made and circulated, indicating realized harm. Therefore, this qualifies as an AI Incident due to violations of rights and harm to persons caused by AI-generated content.

K-pop agency vows 'strongest legal action' against deepfake videos

2024-08-31
The Manila Times
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated deepfake videos being distributed without consent, causing harm to the depicted individuals and communities. The harm includes violations of privacy and rights, and the content is pornographic and non-consensual, which is a clear violation of human rights and causes significant harm. The AI system's use in generating these videos is central to the incident. The ongoing distribution and public outrage confirm that harm is occurring, not just potential. Hence, this is an AI Incident under the framework definitions.

After JYP, FCENM Also Takes Action Against Deepfake Videos Targeting Their Girl Group ILY:1 - OtakuKart

2024-09-01
OtakuKart
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated deepfake videos that have directly led to harm, including defamation, emotional distress, and violation of privacy and rights of the artists. The use of AI to create synthetic videos that damage reputations and cause emotional trauma fits the definition of an AI Incident, as the AI system's use has directly led to harm to individuals (harm to persons and violation of rights). The label's legal response and public condemnation further confirm the recognition of harm caused by AI misuse. Thus, this is an AI Incident.

K-pop agency vows 'strongest legal action' against deepfake videos

2024-08-31
The Standard
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated deepfake pornography, which is a direct misuse of AI systems to create non-consensual, harmful content. This has led to realized harm to the victims, including minors, and involves violations of rights and personal dignity. The involvement of AI in generating the deepfake videos and the resulting harm to individuals and communities qualifies this as an AI Incident rather than a hazard or complementary information. The agency's legal response and public outrage further confirm the materialized harm.

K-pop agency JYP Entertainment vows 'strongest legal action' against deepfake videos

2024-09-01
CNA Lifestyle
Why's our monitor labelling this an incident or hazard?
The article explicitly describes AI-generated deepfake pornography involving K-pop artists, which is a direct violation of rights and causes harm to individuals, including minors. The AI system's use in creating these videos has directly led to harm, meeting the definition of an AI Incident. The legal actions and investigations are responses to this harm, but the core event is the realized harm from AI misuse. Therefore, this is classified as an AI Incident.

Unmasking the Truth: Viewer Media Foundation Takes Aim at Deepfake Crime Prevention - News Directory 3

2024-09-03
News Directory 3
Why's our monitor labelling this an incident or hazard?
The article focuses on preventive education and support related to deepfake crimes, which are AI-related harms, but it does not describe any realized harm or incident caused by AI systems. There is no direct or indirect harm reported, only efforts to mitigate potential harms. Therefore, this is Complementary Information as it provides context and societal response to AI-related risks without describing a new AI Incident or AI Hazard.

K-pop group TWICE also falls victim to face-swapped indecent videos; South Korea's National Police Agency to investigate Telegram | NOWnews

2024-09-02
NOWnews 今日新聞
Why's our monitor labelling this an incident or hazard?
Deepfake generation is an AI-enabled misuse of synthetic media that directly harms individuals’ privacy and likeness rights. Because the harmful content is being produced and distributed, this is an AI Incident.

Nth Room case 2.0! TWICE targeted by AI face-swapping; JYP warns it will show no mercy | Liberty Times Entertainment

2024-08-31
Liberty Times (自由時報電子報)
Why's our monitor labelling this an incident or hazard?
No actual deepfake harm or incident is described as having occurred, nor is there a technical vulnerability or malfunction of an AI system. Instead, this is a governance response—legal and policy measures to prevent future unauthorized AI deepfakes—making it complementary information rather than an incident or hazard.

TWICE falls victim to deepfakes; agency takes legal action | The Epoch Times

2024-08-31
The Epoch Times
Why's our monitor labelling this an incident or hazard?
A deepfake AI system was used to produce and distribute unauthorized portrait content of real individuals, directly causing reputational and legal harm. Because the AI-generated content was published and led to realized harm (violation of likeness and intellectual property/portrait rights), this event qualifies as an AI Incident.

Deepfakes and Sexual Crimes: South Korea on Alert Against a New Threat

2024-08-31
TEMPO.CO
Why's our monitor labelling this an incident or hazard?
Deepfake is a form of AI (GAN-based face and audio synthesis) being actively used to produce and distribute illicit sexual content, harming minors and adults through extortion and reputation damage. This represents realized harm (violation of personal and privacy rights), so it qualifies as an AI Incident.

TWICE Falls Victim to Indecent Deepfake Videos; JYP Entertainment Vows to Pursue Perpetrators Without Mercy

2024-09-02
Liputan 6
Why's our monitor labelling this an incident or hazard?
Deepfake generation uses AI to create unauthorized, harmful sexual content without the subjects’ consent, directly violating their personal and human rights. The incident is an example of AI-driven malicious content creation that has already caused harm, fitting the definition of an AI Incident.

Idols Also Affected: K-pop Agencies Ready to Fight the Terror of Deepfake Images

2024-09-02
CNNindonesia
Why's our monitor labelling this an incident or hazard?
Malicious deepfake technology (a generative AI system) has been used to create and distribute pornographic content without consent, directly harming individuals’ rights and privacy. This constitutes an AI Incident because the AI’s use has already resulted in actual harm and violations.

South Korea Steps Up Arrests of Deepfake Pornography Perpetrators; Thousands Already Implicated

2024-09-03
VIVA.co.id
Why's our monitor labelling this an incident or hazard?
The article describes realized harms directly linked to misuse of generative AI (deepfake pornography). The AI system’s outputs have led to violations of individual rights, psychological injury, and reputational damage. This constitutes an AI incident.

Children in South Korea Live in Fear as Online Sexual Predators Roam Free

2024-08-31
CNBCindonesia
Why's our monitor labelling this an incident or hazard?
Deepfake generation is an AI system’s misuse. The article documents actual non-consensual sexual harms (psychological, rights violations) to minors and others caused by deepfake AI technology. This is a direct AI-driven harm, so it qualifies as an AI Incident under the OECD framework.

JYP Entertainment to Report the Creators and Distributors of TWICE Deepfake Videos

2024-08-31
TEMPO.CO
Why's our monitor labelling this an incident or hazard?
The deepfake videos are explicitly described as being generated by AI (‘deepfake’ from ‘deep learning’), and these videos are already circulating, causing violations of the artists’ rights and potential sexual harm. This is a realized harm—non-consensual AI-based content—so it qualifies as an AI Incident rather than a potential hazard or complementary update.

4 Ways to Prevent Deepfake Scams

2024-09-02
JawaPos.com
Why's our monitor labelling this an incident or hazard?
The article clearly involves AI systems (deepfake technology) and acknowledges the harms caused by their misuse (financial, reputational, emotional harm). However, it does not describe a specific AI Incident (a particular event where harm has directly or indirectly occurred) nor does it report a new AI Hazard (a specific event or circumstance where harm could plausibly occur). Instead, it offers general guidance and preventive measures against deepfake scams, which fits the definition of Complementary Information as it supports understanding and mitigation of AI-related harms without reporting a new incident or hazard.

Shocking! 50% of Deepfake Sex Crime Victims Are Korean Female Celebrities - Editor News

2024-08-31
Editor News
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems used to create deepfake videos, which are AI-generated synthetic media. The harm is realized as the deepfake pornography exploits individuals, violating their rights and causing reputational and psychological harm. The report quantifies the scale and impact, confirming that harm has occurred. Hence, it meets the criteria for an AI Incident due to violations of human rights and harm to communities caused by AI-generated content.

TWICE's Agency to Take Legal Action Against Deepfake Videos | Republika Online

2024-08-31
Republika Online
Why's our monitor labelling this an incident or hazard?
Deepfake videos are generated by AI systems that create synthetic media. The article describes the exploitation of artists through such videos, which is a direct violation of their rights and causes harm. The involvement of AI in creating these videos and the resulting harm to the artists meets the criteria for an AI Incident. The legal actions being taken further confirm the recognition of harm caused by the AI system's misuse.

Deepfake Pornography Runs Rampant, South Korea on High Alert | Republika Online

2024-09-01
Republika Online
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly through the use of deepfake technology, which is an AI system generating synthetic sexually explicit images and videos. The article describes realized harm to individuals (sexual exploitation, violation of rights, and harm to communities) caused by the use of these AI-generated deepfakes. The involvement of AI in creating harmful content that is actively distributed and exploited constitutes an AI Incident under the framework, as the harm is direct and ongoing. The article also discusses responses and mitigation efforts, but the primary focus is on the harm caused by the AI system's use.

Deepfake Porn, the Digital Threat Hitting Korean Artists: What Is It and What Are Its Impacts? | Republika Online

2024-09-02
Republika Online
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (deepfake technology) to create manipulated sexual content without consent, directly causing harm to individuals (celebrities) in terms of reputation, mental health, and social standing. These harms fall under violations of human rights and harm to communities. The article reports actual cases (297 reported in 2024), indicating realized harm rather than potential harm. Hence, it meets the criteria for an AI Incident.

Deepfake Porn Rampant; South Korean Government Seen as Insufficiently Serious About Finding a Solution | Republika Online

2024-09-02
Republika Online
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (deepfake technology) to create harmful sexual content that has directly led to significant harm to individuals, including psychological trauma and suicide. This fits the definition of an AI Incident because the AI system's use has directly led to violations of human rights and harm to communities. The article details realized harm rather than potential harm, and the AI system's role is pivotal in the incident.

Deepfake Porn Victims in South Korea: From Artists to Students | Republika Online

2024-09-02
Republika Online
Why's our monitor labelling this an incident or hazard?
The event clearly involves AI systems (generative AI used to create deepfake videos) whose use has directly led to significant harm to individuals and communities through sexual exploitation and privacy violations. The harm is realized and ongoing, meeting the criteria for an AI Incident. The article does not merely warn of potential harm but documents actual exploitation and victimization. Hence, this is classified as an AI Incident.

YG Entertainment Announces Legal Action Over Inappropriate Deepfake Content Targeting Its Artists - Selebritalk

2024-09-02
Selebritalk
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated deepfake videos that are inappropriate and harmful to the artists' dignity and reputation, which constitutes a violation of rights under the framework. The harm is realized as the content is being circulated, and the company is responding to this ongoing harm. The AI system's use (deepfake generation) is directly linked to the harm. Hence, this is an AI Incident rather than a hazard or complementary information.

Verging on Pornography: YG Entertainment Takes Legal Action Over AI Deepfake Videos of Its Artists

2024-09-02
VOI
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI deepfake technology to create pornographic videos involving real individuals (artists), which constitutes a violation of their rights and causes reputational harm. This is a direct harm caused by the use of an AI system (deepfake AI). The agency's legal actions and content removal efforts confirm the harm is realized and ongoing. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use has directly led to harm to individuals' rights and reputations.

South Korean Police Investigate Telegram's Alleged Involvement in the Spread of Sexual Deepfake Content

2024-09-02
VOI
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the distribution of deepfake sexual content on Telegram, which is AI-generated synthetic media. This content causes harm to individuals (sexual exploitation) and communities (harm to societal norms and rights). The police investigation is a response to realized harm caused by the use of AI systems generating deepfake content. The involvement of AI in creating and distributing harmful content meets the criteria for an AI Incident, as the harm is direct and ongoing. The investigation and regulatory responses further confirm the seriousness of the incident.

YG Entertainment Takes Legal Action Against Distributors of Deepfakes Involving Its Artists - Editor News

2024-09-03
Editor News
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly through the creation and distribution of deepfake videos, which are AI-generated synthetic content. The harm caused includes violations of personal dignity and honor, which can be considered a violation of human rights and harm to individuals. Since the deepfake content is already being produced and distributed, causing harm, this qualifies as an AI Incident due to realized harm stemming from AI-generated content misuse.

Female K-pop Idols Fall Victim to Deepfake Porn; Agencies Step In to Defend Them

2024-09-04
detikjogja
Why's our monitor labelling this an incident or hazard?
The article explicitly describes the use of AI-based deepfake technology to create and distribute pornographic videos of K-Pop idols without their consent. This constitutes a violation of personal rights and causes harm to the mental health and reputation of the victims, fulfilling the criteria for an AI Incident under violations of human rights and harm to individuals. The involvement of AI in generating the harmful content and the realized harm to the victims justifies classification as an AI Incident.