South Korea Investigates Telegram Over Deepfake Sexual Crimes

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

South Korean police launched investigations into Telegram for facilitating deepfake sexual crimes, uncovering more than 120 cases across hundreds of channels distributing AI-generated pornographic videos. Roughly 60% of the victims are minors. Telegram cooperated by deleting content and issuing an apology. Civil society groups held protests demanding stricter legal and technical measures against deepfake sexual violence.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly mentions deepfake technology, an AI system used to create fake pornographic videos. The increase in Telegram users is linked to the spread of these deepfake videos, which have caused harm to individuals, especially minors, who appear both as victims and as perpetrators. This constitutes harm to communities and violations of rights, fulfilling the criteria for an AI Incident. The AI system's use has directly led to these harms through the creation and dissemination of illegal content.[AI generated]
AI principles
Accountability; Privacy & data governance; Respect of human rights; Robustness & digital security; Safety; Transparency & explainability; Human wellbeing

Industries
Media, social platforms, and marketing; Digital security; Government, security, and defence

Affected stakeholders
Children; Women

Harm types
Psychological; Human or fundamental rights; Reputational

Severity
AI incident

AI system task
Content generation

In other databases

Articles about this incident or hazard

Messaging App "Telegram" Sees Record Month-on-Month User Growth in South Korea in August | Yonhap

2024-09-05
Yonhap (Yonhap News Agency, South Korea)
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions deepfake technology, an AI system used to create fake pornographic videos. The increase in Telegram users is linked to the spread of these deepfake videos, which have caused harm to individuals, especially minors, who appear both as victims and as perpetrators. This constitutes harm to communities and violations of rights, fulfilling the criteria for an AI Incident. The AI system's use has directly led to these harms through the creation and dissemination of illegal content.

[Editorial] The Public Is Trapped in the Abyss of Deepfake Suffering, Yet Policy Does Nothing

2024-09-06
Chinese-language site of JoongAng Ilbo, South Korea's largest media organization
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (deepfake technology) to produce and distribute harmful content, directly causing violations of human rights and harm to individuals (sexual exploitation victims). The harm is materialized and significant, meeting the criteria for an AI Incident. The article details the scale of the problem, the direct impact on victims, and the failure of current policies to prevent or remediate the harm, confirming the AI system's role in causing the incident.

Solidot | Telegram Works with South Korea to Delete Some Deepfake Pornographic Videos

2024-09-05
Lighthouse @ Newquay
Why's our monitor labelling this an incident or hazard?
Deepfake videos are generated using AI systems that create realistic but fake content. The spread of these deepfake pornographic videos on Telegram has caused direct harm to victims' rights and communities, including minors, which fits the definition of harm to human rights and communities. Telegram's cooperation and apology indicate the harm is realized, not just potential. Therefore, this event is classified as an AI Incident.

South Korea Investigates Social Platform on Suspicion of "Abetting Deepfake Sexual Crimes"

2024-09-05
Qianlong.com
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems in the form of deepfake technology used to create harmful sexually exploitative content. The use and distribution of such AI-generated content have directly caused harm to victims, including minors, constituting violations of rights and significant personal harm. The investigation into the platform's role in facilitating these crimes further confirms the AI system's involvement in causing harm. Therefore, this event qualifies as an AI Incident due to realized harm stemming from the use of AI systems.

"Self-Rescue" by Deepfake Victims... South Korean Women Hold Rallies to Confront the Issue Together

2024-09-06
china.hani.co.kr
Why's our monitor labelling this an incident or hazard?
The article clearly involves AI systems (deepfake technology) used maliciously to create sexual exploitation content, which constitutes violations of rights and harm to communities, fitting the definition of AI harm. However, the article mainly reports on collective actions, protests, and calls for government measures in response to these harms rather than describing a new incident or hazard event. Therefore, it is best classified as Complementary Information, as it provides context and societal/governance responses to an ongoing AI-related harm issue rather than reporting a new AI Incident or AI Hazard.

Deepfakes Also Existed at the Time of the Nth Room Case; the State Looked On While Telegram Abetted Crime

2024-09-06
china.hani.co.kr
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (deepfake technology for image and voice synthesis) to produce and disseminate sexual exploitation content, which has directly caused harm to victims, including violations of fundamental rights and psychological injury. The article documents actual incidents of harm, arrests related to these crimes, and the systemic nature of the problem. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to significant harm to individuals and communities.

Instant Messenger Telegram Revises Terms of Service, Adds Reporting Function for Private and Group Chat Messages - cnBeta.COM Mobile

2024-09-06
cnBeta.COM
Why's our monitor labelling this an incident or hazard?
The article discusses Telegram's policy change to add a reporting and content review feature for private and group chats, which likely involves AI-based automated content moderation. This change aims to reduce harm from illegal or harmful content, such as deepfake pornography, by enabling quicker removal. There is no indication that the AI system caused harm or malfunctioned, nor that harm has occurred due to this change. Instead, it is a response to prior issues and a governance measure to mitigate potential harms. Hence, it fits the definition of Complementary Information, as it updates on governance and operational responses related to AI systems without describing a new incident or hazard.

Instant Messenger Telegram Revises Terms of Service, Adds Reporting Function for Private and Group Chat Messages - Social - IM Instant Messaging - cnBeta.COM

2024-09-06
cnBeta.COM
Why's our monitor labelling this an incident or hazard?
The event involves Telegram modifying its platform to include message reporting and content review, which likely involves automated or AI-assisted moderation tools. While this relates to AI use in content moderation, the article focuses on policy changes and legal compliance rather than a specific incident of AI-caused harm or a hazard. The addition of reporting and review functions is a governance and operational update, enhancing the platform's response to harmful content. Therefore, this is best classified as Complementary Information, as it provides context on societal and governance responses to AI-related content moderation challenges without describing a new AI Incident or Hazard.