AI-Generated Deepfake Images of Minors Spark Police Action in Japan

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Japanese police reported 79 cases from January to September 2025 involving AI-generated sexual deepfake images of minors, mostly targeting middle and high school students. About half the perpetrators were classmates. Authorities responded with legal actions, public warnings, and inter-agency cooperation to address the growing harm caused by misuse of generative AI.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly mentions the use of generative AI to create sexual deepfake images of minors, which is a clear example of an AI system's use leading to harm. The harms include violations of human rights and dignity, defamation, and sexual exploitation of children, which are serious legal and ethical violations. The involvement of AI in producing these images and the resulting legal actions confirm that this is an AI Incident as per the definitions provided.[AI generated]
AI principles
Accountability, Privacy & data governance, Respect of human rights, Robustness & digital security, Safety

Industries
Media, social platforms, and marketing

Affected stakeholders
Children

Harm types
Psychological, Reputational, Human or fundamental rights

Severity
AI incident

AI system task
Content generation


Articles about this incident or hazard

AI-generated sexual images of minors under 18: half of perpetrators attended the same school, in first disclosure by the National Police Agency

2025-12-17
日本経済新聞
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of generative AI to create sexual deepfake images of minors, which is a clear example of an AI system's use leading to harm. The harms include violations of human rights and dignity, defamation, and sexual exploitation of children, which are serious legal and ethical violations. The involvement of AI in producing these images and the resulting legal actions confirm that this is an AI Incident as per the definitions provided.

Photos altered into sexual images: 79 child victims; school tablets misused, some students even sold images (Asahi Shimbun)

2025-12-17
朝日新聞デジタル
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI-generated deepfake technology to create sexual images of minors, which directly harms the victims by violating their rights and causing psychological and social damage. The harm is realized and documented with multiple cases and police involvement. The AI system's use in generating these images is central to the incident, fulfilling the criteria for an AI Incident under the OECD framework.

Children harmed by sexual deepfakes: half of perpetrators were students at the same school

2025-12-17
毎日新聞
Why's our monitor labelling this an incident or hazard?
The article explicitly states that generative AI was used to create fake sexual images of minors, which constitutes a violation of human rights and privacy, causing harm to children. The involvement of AI in producing these harmful images and the resulting legal actions confirm that harm has materialized. The presence of AI in the creation of these images and the resulting criminal consequences meet the criteria for an AI Incident, as the AI system's use directly led to harm.

Sexual deepfakes of students: half of perpetrators were "at the same school," National Police Agency tally shows

2025-12-17
毎日新聞
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (generative AI for deepfake creation) used maliciously to produce harmful sexual images of minors, causing direct harm to the victims. The involvement of AI in creating these fake images and the resulting harm to children (a vulnerable group) meets the criteria for an AI Incident under violations of human rights and harm to individuals. The police data confirms that harm has occurred, not just a potential risk.

Fake sexual images of children created with generative AI: half of perpetrators from the same school; images "obtained via school-issued devices," school-event photos also misused

2025-12-18
読売新聞オンライン
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of generative AI and image manipulation apps to create sexual deepfake images of minors, which is a direct violation of rights and causes harm to the victims. The involvement of AI in generating these harmful images and their distribution by peers and others is clear. The harms include violations of human rights, reputational damage, and psychological harm to children, fulfilling the criteria for an AI Incident. The police investigations and legal actions further confirm the realized harm linked to AI misuse.

Over half of under-18 victims targeted within their own schools as fake sexual images spread with generative AI: 79 consultations in first disclosure by the National Police Agency (Jiji.com)

2025-12-17
時事ドットコム
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of generative AI systems to create sexual deepfake images, which have caused direct harm to minors, including psychological injury and violations of rights. The involvement of AI in generating these harmful images and the resulting legal actions confirm that this is an AI Incident under the framework, as the AI system's use has directly led to harm to individuals and communities.

AI sexual images: classmates behind half of cases; 79 consultations involving under-18s

2025-12-17
神戸新聞
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated sexual deepfake images involving minors, which constitute a violation of rights and cause harm to the victims. The involvement of AI in creating these harmful images and the resulting legal and protective actions confirm that this is an AI Incident under the framework, as the AI system's use has directly led to harm.

AI sexual images: classmates behind half of cases; 79 consultations involving under-18s

2025-12-17
琉球新報
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of generative AI to create sexual deepfake images of minors, which constitutes a violation of human rights and causes harm to the victims. The involvement of AI in producing these images and the resulting legal and social consequences confirm that harm has occurred. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to significant harm to individuals, including minors, through non-consensual sexual image manipulation and distribution.

AI sexual images: classmates behind half of cases (Saitama Shimbun)

2025-12-17
埼玉新聞
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated or AI-manipulated sexual deepfake images of minors, which have caused harm to the victims (minors) and led to police consultations and legal actions. This constitutes a violation of rights and harm to individuals, fulfilling the criteria for an AI Incident. The harm is realized and directly linked to the use of AI systems in generating harmful content.

AI sexual images: classmates behind half of cases; 79 consultations involving under-18s (Fukushima Minyu Shimbun)

2025-12-17
福島民友新聞社
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (generative AI for creating deepfake images) that have directly led to harm, including violations of personal rights and psychological harm to minors. The involvement of AI in producing illicit sexual images of minors and the resulting police consultations and legal actions confirm that this is an AI Incident under the framework, as the harm is realized and directly linked to AI use.

AI sexual images: classmates behind half of cases (Ibaraki Shimbun)

2025-12-17
茨城新聞社
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems to create or manipulate sexual deepfake images of minors, which constitutes a violation of rights and causes harm to the victims. The article details actual reported cases, legal actions, and police interventions, indicating realized harm rather than potential harm. Therefore, this qualifies as an AI Incident due to direct harm caused by the AI system's use in generating harmful content involving minors.

Warning issued over deepfakes that sexually alter images of children: 79 cases recognized through September this year, National Police Agency (published December 18, 2025)

2025-12-17
日テレNEWS NNN
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of generative AI systems to create sexually explicit deepfake images of children, which is a direct misuse of AI technology causing harm to individuals (children) and communities. The police have recognized multiple cases where this harm has materialized, including illegal distribution and sharing of such images. This meets the criteria for an AI Incident as the AI system's use has directly led to violations of rights and harm to persons. The involvement of authorities and coordinated responses further confirm the seriousness and realized harm of these incidents.

Girls' social media photos turned into nude images with AI: over half of deepfake pornography victims are junior high students; graduation-album misuse is hard to guard against, so what can be done?

2025-12-19
東洋経済オンライン
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI systems (generative AI and deep learning) to create deepfake pornographic images targeting minors, which is a direct violation of human rights and child protection laws. The harm is realized as victims are affected and legal action has been taken against perpetrators. The AI system's use in generating harmful content directly leads to the incident. Hence, it meets the criteria for an AI Incident involving violations of rights and harm to individuals.

Over half of victims are junior high students: the reality of harm from "fake AI sexual images"

2025-12-19
ニフティニュース
Why's our monitor labelling this an incident or hazard?
The event involves the use of generative AI to create deepfake pornographic images of minors, which directly leads to harm by violating child protection laws and human rights. The possession and sharing of such AI-generated illegal content is a direct harm to the victims and society, fulfilling the criteria for an AI Incident. The AI system's use in generating these images is central to the harm described.

Japan police reveal half of offenders in child 'sexual deepfake' cases are fellow students

2025-12-18
毎日新聞
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of generative AI to create sexual deepfake images of minors, which are then distributed among peers, causing harm to the victims. This constitutes a violation of human rights and legal protections, fulfilling the criteria for an AI Incident. The involvement of AI in generating manipulated sexual images that lead to criminal charges and police actions confirms direct harm caused by AI misuse. The presence of realized harm (distribution of sexual deepfake images of minors) and legal consequences further supports classification as an AI Incident rather than a hazard or complementary information.

Over half of deepfakes of underage victims made by classmates, Japanese police say

2025-12-18
The Japan Times
Why's our monitor labelling this an incident or hazard?
The involvement of generative AI in creating explicit deepfakes of minors constitutes the use of an AI system that has directly led to harm to individuals (minors) through violations of their rights and harm to communities. The harm is realized as these deepfakes have been reported to police, indicating actual incidents rather than potential risks. Therefore, this qualifies as an AI Incident under the framework.

Schoolmates behind sexual deepfakes of minors in 50% of cases: police

2025-12-18
Japan Today
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of generative AI to create sexually explicit deepfake images of minors, which is a direct violation of human rights and legal protections. The harms include violations of rights, psychological harm to minors, and criminal offenses. The AI system's use in generating these images is central to the harm caused. The police actions and investigations confirm that harm has occurred. Hence, this event meets the criteria for an AI Incident due to direct harm caused by the AI system's use.

Over Half of Deepfakes with Under-18 Victims Made by Classmates

2025-12-17
Adnkronos
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of generative AI to create sexual deepfake images targeting minors, which is a direct violation of rights and causes harm to the victims. The involvement of AI in generating these images and the resulting legal actions confirm that harm has occurred. The harm includes violations of human rights and harm to individuals and communities, fitting the definition of an AI Incident. The event is not merely a potential risk or a response update but a report of actual harm caused by AI misuse.

Over Half of Deepfakes with Under-18 Victims Made by Classmates

2025-12-18
jen.jiji.com
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (generative AI) used to create sexual deepfake images targeting minors, which is a violation of human rights and causes harm to the victims. The harm has already occurred as police have been consulted on these cases. Therefore, this qualifies as an AI Incident due to the direct harm caused by the AI-generated content.

Over half of deepfakes with under-18 victims made by classmates

2025-12-18
Tuoi tre news
Why's our monitor labelling this an incident or hazard?
The use of generative AI to create sexual deepfakes of minors constitutes a violation of human rights and causes harm to individuals and communities. The AI system's use directly led to harm (psychological, reputational) to the victims. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use has directly led to harm to persons (minors) and breaches of rights.