YouTuber Arrested for Spreading AI-Generated Fake Police Bodycam Videos and Pornography in South Korea

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

A South Korean YouTuber was arrested for creating and distributing AI-generated fake police bodycam videos, which amassed over 30 million views and misled the public, undermining trust in law enforcement. The individual also produced and sold AI-generated pornography and participated in investment fraud schemes, prompting a police crackdown on AI-driven misinformation.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event explicitly involves the use of AI systems to generate fake videos that impersonate police bodycam footage, which were disseminated widely and caused harm by misleading the public and damaging community trust. The AI-generated content was used maliciously and without proper labeling, constituting a violation of laws against false information dissemination. The harm is realized and direct, as the AI system's outputs were central to the incident. Hence, this is classified as an AI Incident.[AI generated]
AI principles
Transparency & explainability
Robustness & digital security

Industries
Media, social platforms, and marketing

Affected stakeholders
General public
Government

Harm types
Reputational
Public interest
Economic/Property

Severity
AI incident

Business function:
Other

AI system task:
Content generation


Articles about this incident or hazard

Thought It Was Police Bodycam Footage, Turns Out It Was 'Fake'... YouTuber Who Made the Videos with AI Arrested - 매일경제

2026-02-02
mk.co.kr

YouTuber Who Made Fake Police Bodycam Videos with AI... Arrested After Also Selling Pornography | 연합뉴스

2026-02-02
연합뉴스
Why's our monitor labelling this an incident or hazard?
The event clearly involves the use of AI systems to generate fake video content that misleads the public and harms societal trust in public institutions, which constitutes harm to communities. Additionally, the creation and sale of AI-generated pornographic material without authorization further contributes to harm. The AI system's development and use directly led to these harms, fulfilling the criteria for an AI Incident. The involvement of AI is explicit, and the harms are realized, not just potential.

'As If It Were a Police Bodycam': YouTuber Arrested for Creating and Distributing False Videos with AI

2026-02-02
경향신문
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems used to generate false video content that was distributed widely, causing harm by misleading the public and damaging trust in public institutions. The harm is realized and direct, as the fake videos were viewed millions of times and had a significant negative impact on public trust, which qualifies as harm to communities. The AI system's role is pivotal in creating the false content, fulfilling the criteria for an AI Incident.

"경찰 보디캠 아니었어?"...AI 허위 영상물 제작·유포한 30대 구속

2026-02-02
아시아경제
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of generative AI systems to produce false video content that has been widely distributed, causing harm by undermining public trust in police and public institutions, which is harm to communities. The AI system's use directly led to the dissemination of misinformation and harmful content. The involvement of AI in creating these videos and the resulting social harm meets the criteria for an AI Incident. The additional illegal activities related to AI-generated pornographic content further support the classification as an AI Incident rather than a hazard or complementary information.

Posing as 'Police Bodycam' Footage... YouTuber in His 30s Arrested and Referred to Prosecutors for Producing False AI Videos

2026-02-02
YTN
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of generative AI systems to create false and misleading videos that were widely disseminated, causing harm by undermining public trust in police authority and facilitating criminal financial schemes. The AI-generated content was used maliciously, leading to legal consequences. This fits the definition of an AI Incident because the AI system's use directly led to harm to communities (public trust) and breaches of law.

"중국인 난동" 가짜 영상 만든 30대, 음란물도 팔다 구속

2026-02-02
문화일보
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of AI systems to generate fake videos and content that have been widely distributed, causing harm by undermining public trust in police authority, a harm to communities. The AI system's use directly led to this harm. The illegal sale of AI-generated pornographic material further supports the classification as an AI Incident due to violations of law and potential harm. Hence, the event meets the criteria for an AI Incident rather than a hazard or complementary information.

YouTuber Who Produced 'Fake Police Bodycam Videos' with AI Arrested After Also Selling Pornography

2026-02-02
경향신문
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems used to generate fake videos and pornographic content. The AI-generated videos caused harm by misleading the public and undermining trust in public institutions, which is harm to communities. Additionally, the individual engaged in illegal activities facilitated by AI-generated content. The AI system's use directly led to these harms, fulfilling the criteria for an AI Incident rather than a hazard or complementary information.

YouTuber Who Posted False Police Bodycam Videos Arrested... Also Produced AI Pornography | 중앙일보

2026-02-02
중앙일보
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of AI to create fake videos that were widely disseminated and believed by many viewers, causing confusion and harm to public trust in police authority, which is a harm to communities. The AI-generated content was deliberately misleading and had a significant negative societal impact. The production of AI-generated pornographic material for profit further indicates misuse of AI with potential legal and ethical harms. These factors meet the criteria for an AI Incident as the AI system's use directly led to realized harm.

'False Police Bodycam Videos' Made with AI and Distributed... YouTuber Arrested [Video] | 중앙일보

2026-02-02
중앙일보
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions the use of AI to create fake police bodycam footage, which was then distributed widely, causing harm by undermining public trust in law enforcement. This is a clear violation of societal trust and can be considered harm to communities. The AI system's development and use directly led to this harm. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information.

Tens of Millions of Views from 'Fake Police Bodycam' Videos... YouTuber Arrested for False Video Content - 시사저널

2026-02-02
시사저널
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of AI systems to generate fake videos that misrepresent police activity, causing harm by spreading misinformation and eroding public trust in authorities. The harm is realized and ongoing, as evidenced by the large viewership and police action. The AI system's development and use directly contributed to these harms, fulfilling the criteria for an AI Incident rather than a hazard or complementary information.

YouTuber Arrested for Distributing AI Videos Impersonating Police Bodycam Footage... Plus Investment Fraud and AI Pornography - 중부일보

2026-02-02
중부일보
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of AI systems to create fake videos that impersonate police bodycam footage, which were disseminated widely and caused harm by misleading the public and damaging trust in public authority, fulfilling the criteria for harm to communities. The AI-generated content was deceptive and lacked proper disclosure, increasing the risk and actual occurrence of harm. The associated investment fraud and AI-generated pornographic content sales further demonstrate misuse of AI leading to realized harms. Hence, this qualifies as an AI Incident due to direct harm caused by AI system use.

"여장남자 女탈의실 출동" "중국인 테이저건 체포"...3000만명이 봤는데

2026-02-02
inews24
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of AI systems (e.g., ChatGPT and other AI tools) to create fake videos and AI-generated pornographic content. The AI-generated fake police bodycam videos were widely disseminated, causing harm by undermining public trust in law enforcement, which is a violation of societal rights and harms communities. The AI system's use directly led to these harms. The event also involves illegal activities related to unauthorized content creation and distribution. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information.

"테이저건으로 중국인 쏴서..." 3000만뷰 그 채널 결국

2026-02-02
데일리안
Why's our monitor labelling this an incident or hazard?
The event clearly involves the use of AI systems to generate fake video content that was used maliciously to deceive the public and damage societal trust, which constitutes harm to communities. The AI-generated misinformation was actively distributed and viewed widely, fulfilling the criteria for an AI Incident due to realized harm. Additionally, the AI was used in criminal schemes, further supporting the classification as an AI Incident. The involvement of AI in creating and disseminating harmful misinformation and illicit content directly led to violations of societal trust and legal breaches, meeting the definition of an AI Incident.