Seoul National University Deepfake Scandal: AI Technology Used in Non-consensual Pornography

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

A deepfake scandal at Seoul National University involved the use of AI technology to create and distribute non-consensual explicit content, affecting at least 61 women, including students and minors. Five suspects, including two university graduates, were arrested for producing and sharing deepfake pornography on Telegram, causing significant harm and privacy violations.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event explicitly mentions the use of 'deepfake' technology, which is an AI system capable of face-swapping and generating synthetic videos. The malicious use of this AI system to create and distribute non-consensual pornographic content constitutes a violation of human rights and causes harm to the victims. The harm is realized and ongoing, with multiple victims confirmed. Therefore, this qualifies as an AI Incident due to the direct harm caused by the AI system's use in this criminal context.[AI generated]
AI principles
Privacy & data governance, Respect of human rights, Safety, Human wellbeing, Accountability

Industries
Media, social platforms, and marketing

Affected stakeholders
Women, Children

Harm types
Human or fundamental rights, Psychological, Reputational

Severity
AI incident

AI system task
Content generation


Articles about this incident or hazard

"Second Nth Room" Case Erupts at South Korea's Seoul National University, With as Many as 61 Women Victimized

2024-05-23
The Paper
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions the use of 'deepfake' technology, which is an AI system capable of face-swapping and generating synthetic videos. The malicious use of this AI system to create and distribute non-consensual pornographic content constitutes a violation of human rights and causes harm to the victims. The harm is realized and ongoing, with multiple victims confirmed. Therefore, this qualifies as an AI Incident due to the direct harm caused by the AI system's use in this criminal context.
"Nth Room" Case Resurfaces in South Korea: AI Face-Swapping Digital Sex Crimes Sound the Alarm

2024-05-25
The Paper
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of AI deepfake technology to produce and disseminate illegal sexual content without consent, directly causing harm to victims' mental health, privacy, and human rights. The use of AI for face-swapping to create fake pornographic material is a clear AI system involvement. The harms described include violations of human rights and significant psychological and social harm to victims, meeting the criteria for an AI Incident. The article also mentions law enforcement responses and victim support measures, but the primary focus is on the realized harm caused by the AI-enabled digital sexual crimes.
"Nth Room" Case Resurfaces in South Korea: AI Face-Swapping Digital Sex Crimes Sound the Alarm

2024-05-25
news.cctv.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI deepfake technology to create illegal sexual content without consent, which was distributed to victims and others, causing direct harm to the victims' mental health and social well-being. This fits the definition of an AI Incident because the AI system's use directly led to violations of human rights and significant harm to individuals and communities. The involvement of AI in the creation of deepfake pornography and its malicious dissemination is central to the harm described. The event is not merely a potential risk or a complementary update but a concrete case of AI-enabled digital sexual crime with documented victims and legal actions.
Another "Nth Room" Case in South Korea: At Least 61 Women Victimized as Photos Are Synthesized Into Pornographic Videos

2024-05-23
app.myzaker.com
Why's our monitor labelling this an incident or hazard?
The article explicitly describes the creation and distribution of deepfake pornography using AI-based face synthesis technology, which is a clear AI system involvement. The harm is direct and significant, involving violations of privacy, dignity, and human rights of multiple victims. The malicious use of AI to produce and spread such content constitutes an AI Incident under the framework, as it has directly led to harm to individuals and communities. The event is not merely a potential risk or a complementary update but a realized harm caused by AI misuse.
"Nth Room" Case Resurfaces in South Korea: AI Face-Swapping Digital Sex Crimes Sound the Alarm

2024-05-25
Yangtse Evening Post (yangtse.com)
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI deepfake technology to create illegal sexual content without consent, which was distributed to victims and others, causing direct psychological and social harm. This fits the definition of an AI Incident because the AI system's use directly led to violations of human rights and harm to individuals and communities. The involvement of AI is clear and central to the harm described. The event is not merely a potential risk or a complementary update but a concrete case of AI-enabled harm.
Reconstructing South Korea's Second "Nth Room" Case: How Did the Crime Happen Again?

2024-05-25
Yangtse Evening Post (yangtse.com)
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI deepfake technology (an AI system) to create illegal synthetic sexual content without consent, which has been distributed and caused direct harm to victims, including psychological trauma and violation of rights. The involvement of AI in the creation of deepfake pornography and its malicious dissemination meets the definition of an AI Incident because the AI system's use has directly led to significant harm to individuals and communities. The article also discusses the legal and societal responses but the primary focus is on the incident itself and its harms, not just complementary information or potential hazards.
"Second Nth Room" Case Erupts at South Korea's Seoul National University! As Many as 61 Women Victimized

2024-05-23
Hangzhou.com.cn
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions the use of 'deepfake' technology, which is an AI system capable of generating synthetic media by face-swapping. The malicious use of this AI system directly caused harm to at least 61 victims, including sexual exploitation and violation of rights. This fits the definition of an AI Incident because the AI system's use directly led to violations of human rights and harm to individuals and communities. The event is not merely a potential risk or a complementary update but a realized harm caused by AI misuse.
"Seoul National University Nth Room" Case Exposed: At Least 61 Victims; Main Suspects and Confirmed Victims Are Alumni

2024-05-21
163.com
Why's our monitor labelling this an incident or hazard?
The event explicitly describes the use of AI-based image synthesis (deepfake technology) to create fake pornographic videos without consent, which were then distributed to a closed group, causing direct harm to at least 61 victims. The AI system's development and use directly led to violations of privacy and sexual violence, which are breaches of fundamental human rights. The harm is realized and ongoing, not merely potential, thus qualifying as an AI Incident under the framework.
"Second Nth Room" Case Erupts at South Korea's Seoul National University! As Many as 61 Women Victimized

2024-05-23
Huanqiu.com
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions the use of "deepfake" technology, which is an AI system capable of synthesizing realistic face-swapped images and videos. The malicious use of this AI system to create and distribute non-consensual sexual content constitutes a direct violation of human rights and inflicts harm on the victims. The involvement of AI in the creation of these materials is central to the incident, and the harms are realized and ongoing. Therefore, this qualifies as an AI Incident under the OECD framework, specifically under violations of human rights and harm to communities.
Second Nth Room Case Erupts! As Many as 61 Women Victimized

2024-05-23
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions the use of "deepfake" technology, which is an AI system capable of generating synthetic images and videos by face-swapping. The malicious use of this AI system to create and distribute non-consensual pornographic content has directly caused harm to at least 61 victims, including minors, violating their rights and causing psychological and social harm. This fits the definition of an AI Incident because the AI system's use directly led to violations of human rights and harm to individuals and communities. The event is not merely a potential risk but a realized harm scenario involving AI misuse.
"Second Nth Room" Case Erupts at South Korea's Seoul National University! As Many as 61 Women Victimized

2024-05-23
Sohu News
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions the use of "deepfake" technology, which is an AI system capable of synthesizing realistic images and videos by face-swapping. The malicious use of this AI system to produce and distribute non-consensual sexual content has directly caused harm to at least 61 victims, including students and minors, violating their rights and causing psychological and reputational damage. The involvement of AI in the creation of these materials and the resulting harm meets the criteria for an AI Incident under the OECD framework, as it involves violations of human rights and harm to individuals and communities directly linked to the AI system's use.
"Second Nth Room" Case Erupts at South Korea's Seoul National University! As Many as 61 Women Victimized

2024-05-23
Sohu News
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions the use of "deepfake" technology, which is an AI system for face-swapping and synthetic media generation. The malicious use of this AI system to create and distribute non-consensual sexual content has directly harmed at least 61 victims, including minors and university students. This constitutes a clear violation of human rights and privacy, fulfilling the criteria for an AI Incident. The involvement of AI in the creation of harmful content and the resulting harm to individuals' rights and well-being confirms this classification.
Large-Scale "Nth Room" Sex Crime Case Exposed at South Korean University, With as Many as 61 Victims; the Culprits Are Again "Elite Students"

2024-05-22
ifeng.com (Phoenix New Media)
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of illegal synthetic video creation, which involves AI-based deepfake technology, to produce and distribute non-consensual pornographic content. This directly violates human rights and causes significant harm to the victims. The involvement of AI in the creation of these videos and their distribution through encrypted communication platforms is central to the incident. The harm is realized and substantial, including violations of privacy, psychological trauma, and exploitation. Therefore, this event qualifies as an AI Incident under the OECD framework.
Korean Media: Second "Nth Room" Case Emerges in South Korea, With as Many as 61 Women Victimized

2024-05-23
Qingdao News
Why's our monitor labelling this an incident or hazard?
The article explicitly describes the use of AI technology (deepfake face-swapping) to produce illegal synthetic explicit content, which was then distributed to harm victims. This constitutes a direct AI Incident because the AI system's use has directly led to violations of human rights and harm to individuals (victims of non-consensual deepfake pornography). The scale and nature of the harm (including minors) and the criminal activity confirm this as an AI Incident rather than a hazard or complementary information.
Korean Media: Second "Nth Room" Case Emerges in South Korea, With as Many as 61 Women Victimized

2024-05-23
China News
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of deepfake technology to create illegal synthetic explicit content targeting women, which is a clear application of AI systems (deep learning-based face swapping). The production and dissemination of such content have directly caused harm to at least 61 victims, including violations of privacy, dignity, and potentially other human rights. The involvement of AI in the creation of these harmful materials and the resulting real-world harm to individuals fits the definition of an AI Incident, as the AI system's use has directly led to violations of human rights and harm to individuals.
"Second Nth Room" Case Erupts at South Korea's Seoul National University! As Many as 61 Women Victimized

2024-05-23
China News
Why's our monitor labelling this an incident or hazard?
The incident explicitly involves the use of AI-based deepfake technology to produce and disseminate harmful content without consent, which constitutes a violation of rights and causes harm to the victims. The involvement of AI in the creation of these synthetic images and videos is central to the harm caused. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use directly led to significant harm to individuals and communities.
Second "Nth Room" Case Emerges in South Korea; Two Main Perpetrators Are Seoul National University Graduates

2024-05-23
ycwb.com (Yangcheng Evening News)
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of AI systems for deepfake face-swapping to create illegal and harmful content. The AI system's use directly caused harm to numerous victims through privacy violations, distribution of non-consensual explicit material, and psychological harm. This fits the definition of an AI Incident because the AI system's use has directly led to violations of human rights and harm to individuals and communities. The event is not merely a potential risk or a complementary update but a concrete case of harm caused by AI misuse.
South Korea's Second "Nth Room" Case Exposed: 61 Women Victimized; Two Main Perpetrators Graduated From Seoul National University

2024-05-23
ycwb.com (Yangcheng Evening News)
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of "deepfake" technology, an AI system that synthesizes realistic face-swapped images and videos, to produce and distribute illicit sexual content without consent. This AI-enabled misuse has directly caused harm to at least 61 women, including students, violating their rights and causing psychological and social harm. The involvement of AI in the creation of harmful content and its distribution via Telegram channels meets the criteria for an AI Incident, as the AI system's use directly led to violations of human rights and harm to communities.
Second "Nth Room" Case Emerges in South Korea, With as Many as 61 Women Victimized

2024-05-24
enorth.com.cn
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of deepfake technology to create illegal synthetic explicit content, which is an AI system application. The creation and dissemination of these deepfake images and videos have directly caused harm to at least 61 women, including violations of their privacy and rights, and the distribution of illegal content involving minors. This fits the definition of an AI Incident because the AI system's use has directly led to violations of human rights and harm to individuals. The event is not merely a potential risk but an ongoing incident with confirmed harm and legal consequences.
"Second Nth Room" Case Breaks Out in South Korea, With 61 Women Victimized

2024-05-23
sznews.com
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions the use of "deepfake" technology, which is an AI system capable of face-swapping and generating synthetic media. The malicious use of this AI system to create and distribute non-consensual pornographic content constitutes a violation of human rights and causes harm to individuals and communities. The harm is realized and ongoing, with multiple victims affected. Therefore, this qualifies as an AI Incident under the OECD framework, as the AI system's use has directly led to harm (violation of rights and personal harm).
Nth Room 2.0? Seoul National University Graduate "AI Face-Swapped" Junior Schoolmates Into Obscene Videos for Members' Gratification; 61 Women Victimized

2024-05-23
Yahoo News (Taiwan)
Why's our monitor labelling this an incident or hazard?
The article explicitly describes the use of AI deepfake technology to create and distribute non-consensual sexual images and videos, causing direct harm to the victims through sexual exploitation and privacy violations. The involvement of AI in the creation of these harmful materials and their distribution leading to real harm fits the definition of an AI Incident. The harm includes violations of fundamental rights and harm to individuals and communities, meeting the criteria for an AI Incident rather than a hazard or complementary information.
Sex Crimes Rampant in South Korea: Seoul Deploys AI to Help Curb the Spread of Indecent Videos of Minors

2024-05-24
Yahoo News (Taiwan)
Why's our monitor labelling this an incident or hazard?
The AI system is explicitly mentioned as being developed and deployed by the Seoul government to detect and remove illegal sexual content involving children and adolescents. The system's use directly addresses harm to minors (a form of injury and harm to groups of people) by preventing the dissemination of exploitative material. Since the AI system's use is central to managing and reducing this harm, this qualifies as an AI Incident under the framework, as the AI system's use is directly linked to harm prevention in a context where harm is ongoing and significant. The article also discusses the broader context of sexual crimes and the AI system's role in combating them, confirming the AI system's involvement in harm-related outcomes.
Seoul National University Graduate Edited Photos of Junior Schoolmates, Using "Face-Swapping" to Produce Over a Thousand Pornographic Videos

2024-05-25
UDN
Why's our monitor labelling this an incident or hazard?
The article describes a case where AI-based face-swapping technology was used to create and distribute thousands of fake pornographic videos without consent, involving real individuals. This directly violates human rights and causes significant harm to the victims. The involvement of AI in generating these manipulated videos is explicit and central to the harm caused. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use directly led to violations of rights and harm to individuals.
Second "Nth Room"! "Deepfake" Pornographic Image Case Erupts at South Korea's Seoul National University; 61 Women Already Victimized

2024-05-23
Yahoo News (Taiwan)
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions the use of "deepfake" technology, which is an AI system capable of generating synthetic media by face swapping. The malicious use of this AI system to create and distribute non-consensual pornographic content has directly harmed at least 61 women, including minors and university students. The harms include violations of privacy, potential psychological trauma, and breaches of fundamental rights. The involvement of AI in the creation of these harmful materials and their distribution meets the criteria for an AI Incident, as the AI system's use directly led to significant harm to individuals and communities.
"Nth Room" Case Resurfaces in South Korea: AI Face-Swapping Digital Sex Crimes Sound the Alarm

2024-05-25
big5.cctv.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI deepfake technology to create and distribute illegal sexual content without consent, causing significant harm to victims. The AI system's use directly led to violations of human rights and inflicted psychological and social harm, fulfilling the criteria for an AI Incident. The involvement of AI is clear and central to the harm, and the harm is realized, not merely potential. Therefore, this event is classified as an AI Incident.
South Korea's Nth Room 2.0: Seoul National University Graduate Doctored Obscene Photos of Junior Schoolmates for Group Members' Sexual Gratification

2024-05-21
std.stheadline.com
Why's our monitor labelling this an incident or hazard?
The article explicitly describes the use of AI-based image synthesis ("photo manipulation" or "image compositing") to produce fake explicit content without consent, which was then distributed to a group of users for sexual gratification. This constitutes a violation of human rights and sexual violence, fulfilling the criteria for an AI Incident. The harm is direct and significant, involving at least 61 victims, including minors, and the AI system's role is pivotal in creating the fabricated content. Therefore, this event is classified as an AI Incident.
South Korea Hit by Second Nth Room! 61 Women Victimized by "AI Face-Swapping"; Main Perpetrator Is a Seoul National University Graduate

2024-05-23
SETN
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of "deepfake" technology (AI-based face-swapping) to produce and distribute sexually explicit content without consent, involving at least 61 victims. This constitutes a direct harm to the victims' rights and dignity, fulfilling the criteria for an AI Incident under violations of human rights and harm to communities. The AI system's use in creating these materials is central to the harm caused, not merely incidental or potential.
Nth Room 2.0? Seoul National University Graduate Made "AI Face-Swapped" Obscene Videos for Members' Gratification; 61 Women Victimized

2024-05-23
SETN
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of an AI system (deepfake technology) to create and distribute harmful content, leading to violations of human rights and significant harm to the victims. The AI system's use directly caused the harm described, including privacy violations, sexual exploitation, and psychological harm to the victims. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use directly led to harm to persons and communities.
Korean Media: Second "Nth Room" Case Emerges in South Korea, With as Many as 61 Women Victimized

2024-05-23
Wen Wei Po (Hong Kong)
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of AI technology (deepfake face-swapping) to create and distribute illegal and harmful synthetic sexual content. This has directly caused harm to at least 61 women, including minors, through violations of privacy, dignity, and potentially other human rights. The involvement of AI in producing the synthetic content and its malicious use to harm victims fits the definition of an AI Incident, as the AI system's use directly led to significant harm to individuals and communities.