IT Contractor Creates Deepfake Videos from Stolen School Staff Photos in Busan

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

A male IT contractor in Busan, South Korea, illegally accessed the PCs of 194 female school staff members, stealing more than 220,000 personal files and using AI deepfake technology to create manipulated sexual videos. The incident, uncovered after a USB drive was found, highlights privacy violations and the misuse of AI to create harmful content.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event explicitly involves the use of AI technology (deepfake generation) to create harmful synthetic sexual videos without consent, which is a violation of human rights and privacy. The AI system's use directly led to harm through the creation and possession of illicit content. The incident is not merely a potential risk but a realized harm, as the deepfake videos were produced and stored. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information.[AI generated]
AI principles
Privacy & data governance; Respect of human rights

Industries
Education and training; Digital security

Affected stakeholders
Women; Workers

Harm types
Psychological; Reputational; Human or fundamental rights

Severity
AI incident

AI system task:
Content generation


Articles about this incident or hazard

"PC 점검 왔습니다"...여교사 사진 빼돌려 '딥페이크' 만든 30대 덜미 - 매일경제

2026-05-07
mk.co.kr
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of AI technology (deepfake generation) to create harmful synthetic sexual videos without consent, which is a violation of human rights and privacy. The AI system's use directly led to harm through the creation and possession of illicit content. The incident is not merely a potential risk but a realized harm, as the deepfake videos were produced and stored. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information.
Said he was there for a PC inspection... Man in his 30s made 'deepfakes' from leaked photos of female teachers | 연합뉴스

2026-05-07
연합뉴스
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of deepfake technology, an AI system that generates manipulated videos, to create sexual content without consent, which is a violation of human rights and privacy. The harm has materialized through unauthorized data access, privacy breaches, and creation of harmful AI-generated content. Although the deepfake videos were not distributed, the creation and possession of such content is a significant harm. Hence, this is an AI Incident as the AI system's use directly led to violations of fundamental rights.
Man in his 30s stole photos of 194 school staff members... and even made deepfakes

2026-05-07
Wow TV
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of AI technology (deepfake generation) to create manipulated sexual videos without consent, which is a violation of human rights and privacy. The AI system's use directly led to harm by producing harmful content. The illegal access and storage of personal data also contribute to the harm. Although the deepfake videos were not distributed online, the creation and possession of such content itself is a significant harm. Hence, this qualifies as an AI Incident under the framework definitions.
"PC 고쳐드릴게요"...수년간 여교사 사진 빼돌려 딥페이크 만든 30대

2026-05-07
연합뉴스TV
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of AI systems (deepfake technology) to generate manipulated videos from stolen personal data. The AI system's use directly led to harm in terms of privacy violations and illegal content creation. Even though the videos were not disseminated, the act of producing deepfakes from stolen data is a clear violation of human rights and applicable laws. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information.
PC inspection worker made 'deepfakes' from leaked photos of female teachers

2026-05-07
국제신문
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of AI technology (deepfake) to create harmful sexual videos without consent, which is a violation of human rights and privacy. The AI system's use (deepfake generation) directly led to harm by producing illegal and harmful content. The presence of AI is clear from the mention of deepfake videos, and the harm is realized through privacy breaches and illegal content creation. Hence, this is classified as an AI Incident.
Instead of inspecting PCs... Man in his 30s stole female teachers' photos to make pornographic material - 시사저널

2026-05-07
시사저널
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of AI technology (deepfake video creation) to produce illicit sexual content without consent, which is a violation of human rights and privacy. The AI system's use directly led to harm by creating manipulated sexual videos of victims, fulfilling the criteria for an AI Incident. The illegal access and copying of data facilitated the AI misuse, and the harm is realized even though the content was not distributed online. Hence, this is an AI Incident rather than a hazard or complementary information.
Deepfakes from stolen photos of female teachers... 220,000 files leaked

2026-05-07
데일리안
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of deepfake AI technology to create manipulated videos from stolen personal images and videos, which is a direct use of an AI system causing harm. The harms include violations of privacy, sexual exploitation, and breaches of laws protecting individuals, especially vulnerable groups such as women and minors. The AI system's role is pivotal in generating the harmful deepfake content. The incident has already occurred with concrete harm, not just a potential risk, meeting the criteria for an AI Incident.
School IT contractor employee stole videos to produce 'deepfakes'

2026-05-07
연합뉴스TV
Why's our monitor labelling this an incident or hazard?
The article describes the unauthorized extraction of personal data and the creation of deepfake videos using AI technology. Deepfake generation is an AI system application that manipulates video content to create realistic but fake imagery. The harm is direct, involving violations of privacy and personal rights of nearly 200 victims, with the AI system playing a pivotal role in producing harmful content. This meets the criteria for an AI Incident as the AI system's use directly led to significant harm to individuals.
Man in his 30s 'swiped' 220,000 staff photos while inspecting school PCs... and also made 20 deepfake pornographic videos

2026-05-07
경향신문
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of AI technology in the form of deepfake video creation, which is an AI system generating synthetic content. The harm includes privacy violations, unauthorized data theft, and creation of harmful deepfake pornography, which directly harms individuals' rights and dignity. The AI system's use in producing deepfake videos is a direct cause of the harm. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use has directly led to violations of human rights and harm to individuals.
Repair technician 'pretended to fix PCs' while leaking staff photos and videos... and made pornographic deepfake videos

2026-05-07
경향신문
Why's our monitor labelling this an incident or hazard?
The incident explicitly involves the use of AI technology (deepfake video generation) to create harmful and non-consensual explicit content, which is a direct violation of human rights and privacy. The AI system's use here directly led to harm through the creation of manipulated sexual videos; even though the videos were not publicly leaked, the production itself is a serious harm. The unauthorized access and data theft facilitated the AI misuse. Hence, this is an AI Incident as per the definitions provided.
The two faces of an outsourced worker who came to fix school PCs... why he stole 220,000 files from female staff

2026-05-07
아시아경제
Why's our monitor labelling this an incident or hazard?
The article describes a clear case where an AI system (deepfake generation) was used maliciously to create sexual fake videos from stolen personal data. The AI system's use directly led to violations of privacy and sexual exploitation, which are harms to individuals and communities. Although the initial data theft was manual, the AI-generated deepfake content is a direct AI-related harm. Hence, this qualifies as an AI Incident under the framework.
Man in his 30s arrested for making deepfakes from photos taken from staff PCs

2026-05-07
오마이뉴스
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of AI technology (deepfake generation) to create manipulated videos from stolen photos and videos. The harm is realized in the form of privacy violations, creation of illicit content, and potential psychological harm to the victims. The AI system's use is central to the harm, fulfilling the criteria for an AI Incident. The event is not merely a potential risk but a realized harm, and thus it is not an AI Hazard or Complementary Information. It is not unrelated because AI (deepfake) is clearly involved.
PC technician made pornographic material from 220,000 photos stolen from 194 female teachers and other staff (roundup)

2026-05-07
국제신문
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions the use of deepfake technology, an AI system, to create sexual fake videos using stolen personal images and videos. This use of AI directly led to harm by violating the victims' rights and privacy, fulfilling the criteria for an AI Incident. The harm is realized, not just potential, as the deepfake videos were produced and the victims' personal data was compromised. The AI system's role is pivotal in generating the harmful content, and the incident involves malicious use of AI, thus classifying it as an AI Incident.
Man in his 30s caught making deepfakes from stolen teacher photos

2026-05-07
국민일보
Why's our monitor labelling this an incident or hazard?
The event describes a person who used AI-based deepfake technology to create fake sexual videos from stolen personal images and videos, which is a direct use of an AI system leading to harm. The harms include violations of privacy, sexual crimes, and breaches of legal protections, fulfilling the criteria for an AI Incident. The AI system's role is pivotal in generating the harmful content, and the harm has already occurred.