Japanese Police Arrest Three for Deepfake Porn Targeting Female Celebrities

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Three men in Japan were arrested for using AI deepfake technology to create and distribute fake pornographic videos featuring the faces of well-known female celebrities. The videos, widely circulated online for profit, caused reputational harm to at least six actresses, prompting police action and ongoing efforts to curb such AI-generated abuse.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event explicitly involves an AI system (deepfake technology) used to produce synthetic videos that harm individuals' reputations, constituting a violation of rights under applicable law. The harm has already occurred as evidenced by police arrests and ongoing illegal distribution of such content. Therefore, this qualifies as an AI Incident due to the direct link between AI-generated content and harm to individuals' rights and reputations.[AI generated]

AI principles
Accountability, Privacy & data governance, Respect of human rights, Safety

Industries
Media, social platforms, and marketing

Affected stakeholders
Women

Harm types
Reputational

Severity
AI incident

AI system task
Content generation
Articles about this incident or hazard

Fake porn of female celebrities synthesized with deepfake technology spread online; Japanese police arrest 3 men | International | 中央社 CNA

2020-11-19
Central News Agency
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (deepfake technology) used to produce synthetic videos that harm individuals' reputations, constituting a violation of rights under applicable law. The harm has already occurred as evidenced by police arrests and ongoing illegal distribution of such content. Therefore, this qualifies as an AI Incident due to the direct link between AI-generated content and harm to individuals' rights and reputations.

"Deepfake" face-swapped fake porn of female celebrities: 3 Japanese men arrested on defamation allegations

2020-11-19
鏡週刊 Mirror Media
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of deepfake technology, an AI system, to create fake videos that damage the reputation of real actresses. This constitutes a violation of rights (defamation) and harm to individuals, fulfilling the criteria for an AI Incident. The involvement of AI in generating harmful content that has materialized harm (reputational damage) makes this an AI Incident rather than a hazard or complementary information.

Even Yui Aragaki targeted | Female celebrities AI face-swapped into porn actresses; police arrest three

2020-11-20
Apple Daily 蘋果日報
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of an AI system (deepfake technology) to create manipulated videos that directly harm the reputations and rights of the female celebrities involved. This constitutes a violation of personal rights and defamation, which falls under violations of human rights or breach of obligations intended to protect fundamental rights. Since the harm has already occurred and legal action has been taken, this qualifies as an AI Incident rather than a hazard or complementary information.

"Deepfake" face-swapped fake porn of female celebrities: 3 Japanese men arrested on defamation allegations

2020-11-19
Yahoo News (Taiwan)
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of an AI system (deepfake technology) to create manipulated videos that caused reputational harm to individuals. The harm is realized and direct, as the fake videos were distributed online and led to legal action. This fits the definition of an AI Incident because the AI system's use directly caused violations of rights and harm to individuals' reputations.

Yui Aragaki face-swapped into a porn actress... adult videos go viral and surface with a single search! 3 offenders arrested by police for profiting | ETtoday星光雲 | ETtoday新聞雲

2020-11-20
ETtoday星光雲
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of AI deepfake technology to create and distribute fake videos that harm the reputation and rights of individuals. This constitutes a violation of human rights (defamation and misuse of personal likeness) caused by the use of AI systems. The harm is realized and ongoing, as the videos are widely spread online and have led to police action. Therefore, this qualifies as an AI Incident under the framework.

Fake porn of female celebrities synthesized with deepfake technology spread online; Japanese police arrest 3 men | International Focus | International | 經濟日報

2020-11-19
udn money 聯合理財網
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of AI deepfake technology to create fake videos that damage the reputation of real individuals, which is a violation of rights and harm to communities. The AI system's use directly led to harm through defamation and social impact. The arrests and ongoing police action confirm that harm has materialized, not just a potential risk. Therefore, this is classified as an AI Incident.

Yui Aragaki "face-swapped" into a porn actress! 2,000 adult videos leaked and spreading rapidly | Entertainment | 三立新聞網 SETN.COM

2020-11-21
三立新聞
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of an AI system (deepfake technology) to create fake videos that have been distributed widely, causing harm to the reputation and rights of the individuals depicted. This fits the definition of an AI Incident because the AI system's use has directly led to violations of human rights (defamation and damage to personal reputation). The harm is realized and ongoing, not just potential. Therefore, the classification is AI Incident.