AI-Generated Criminal Memes Cause Secondary Harm to Victims in South Korea


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

AI systems are being used to create realistic images and videos of notorious criminals, which are widely shared as entertainment online. This trivializes serious crimes and inflicts secondary trauma on victims and their families. The incident has sparked controversy and calls for regulation in South Korea.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves AI systems generating synthetic videos and images that directly cause harm by trivializing serious crimes and inflicting secondary harm on victims and communities. The AI-generated content is actively spreading and being widely consumed, indicating realized rather than merely potential harm. This fits the definition of an AI Incident because the AI system's use has directly led to harm to communities and violations of rights (harm categories (c) and (d)). The legal and societal challenges further underscore the significance of the harm caused.[AI generated]
AI principles
Human wellbeing; Respect of human rights

Industries
Media, social platforms, and marketing

Affected stakeholders
Other

Harm types
Psychological

Severity
AI incident

Business function
Other

AI system task
Content generation


Articles about this incident or hazard


"The ingredients are Chinese, so it's nothing special"... AI 'criminal updates' memes spread, too serious to laugh off - 매일경제

2026-05-02
mk.co.kr
Why's our monitor labelling this an incident or hazard?
The event involves AI systems generating synthetic videos and images that directly cause harm by trivializing serious crimes and inflicting secondary harm on victims and communities. The AI-generated content is actively spreading and being widely consumed, indicating realized rather than merely potential harm. This fits the definition of an AI Incident because the AI system's use has directly led to harm to communities and violations of rights (harm categories (c) and (d)). The legal and societal challenges further underscore the significance of the harm caused.

"A murderer smiles and dances"... spread of AI-generated criminal content sparks 'secondary victimization' controversy - 매일경제

2026-05-03
mk.co.kr
Why's our monitor labelling this an incident or hazard?
The event involves AI systems that generate realistic content of criminals, which is then widely distributed and consumed, causing secondary harm to victims and communities. The harm is realized and ongoing, including psychological injury and violation of rights (e.g., dignity, privacy). Therefore, this qualifies as an AI Incident because the AI system's use directly leads to harm. The article does not merely warn of potential harm but reports actual harm occurring due to AI-generated content.

A photo shoot by the 'Cheongju Women's Prison Five'?... 'AI criminal memes' spark secondary-victimization controversy | 연합뉴스

2026-05-01
연합뉴스
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly used to generate fake videos and images of criminals, which are then widely shared and consumed as memes. This use of AI directly leads to harm by causing secondary victimization of crime victims and harm to communities through the trivialization of serious crimes and their consumption as entertainment. The article also highlights the lack of a clear legal framework to address this harm, but the harm itself is occurring. Hence, it meets the criteria for an AI Incident, as the AI system's use has directly led to violations of rights and harm to communities.

Prison updates on Cho Ju-bin, Lee Eun-hae, Kim So-young, and others? Fake AI videos spread... fears of secondary harm to victims

2026-05-02
파이낸셜뉴스
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly used to create manipulated videos and images of criminals, which are then spread online. The harm is realized: victims and their families suffer secondary harm, including emotional distress and social stigma, from repeated exposure to this AI-generated defamatory content. The AI system's role is pivotal in producing the fake media that leads to this harm. Hence, it meets the criteria for an AI Incident involving violations of rights and harm to communities.

'The Cheongju Women's Prison Five?' Criminal AI videos rampant on YouTube... experts warn of 'secondary victimization'

2026-05-02
문화일보
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems to generate realistic videos of criminals, which are then widely distributed and consumed, causing direct harm to victims through secondary victimization and potential violations of rights. The AI system's use directly leads to harm to communities and individuals (psychological harm to victims), fitting the definition of an AI Incident. The article does not merely warn of potential harm but documents ongoing harm caused by AI-generated content. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information.

Lee Eun-hae and Kim So-young posing for a photo shoot in prison?... controversy over 'AI criminal content' that crosses the line

2026-05-02
매일방송
Why's our monitor labelling this an incident or hazard?
The event involves AI systems generating fake images and videos of real criminals, which are then widely distributed and consumed as entertainment. This use of AI directly causes harm by inflicting secondary trauma on victims and their families, and by trivializing serious crimes, which can be considered harm to communities and a violation of rights. The AI system's role is pivotal in creating realistic fake content that would not exist otherwise. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use has directly led to significant harm.

"Lee Eun-hae and Jung Yoo-jung gathered for a photo?" Cheongju women's prison AI memes... fears of secondary victimization

2026-05-02
서울신문
Why's our monitor labelling this an incident or hazard?
The event involves AI systems generating fake videos and images using criminals' facial and voice data, which are then widely distributed online. This has directly led to harm by causing secondary victimization and social harm through the trivialization of serious crimes and their consumption as entertainment. The AI system's role is pivotal in creating realistic fake content that would not be possible without AI. The harm is realized and ongoing, not just potential. Hence, it meets the criteria for an AI Incident involving harm to communities and individuals.

From Cho Ju-bin mukbang videos to a 'Prison Five' photo shoot... controversy over AI memes that cast violent criminals as 'protagonists'

2026-05-02
데일리안
Why's our monitor labelling this an incident or hazard?
The AI system is used to generate realistic videos and images that portray criminals in a mocking, entertaining light, which indirectly harms victims and communities by trivializing serious crimes and causing secondary trauma. This fits the definition of an AI Incident because the AI system's use has directly or indirectly led to harm to communities and violations of rights. The article does not describe potential future harm but actual, ongoing harm from the spread of this AI-generated content. Therefore, the event is classified as an AI Incident.

A 'Cheongju Women's Prison Five' photo shoot?... 'AI criminal meme' controversy

2026-05-02
연합뉴스TV
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems to generate realistic images and videos of criminals, which are then distributed widely as memes. This use of AI directly leads to harm by causing secondary victimization and social harm through the trivialization of serious crimes and their consumption as entertainment. The harm to victims and communities is realized and significant. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to harm to communities and violations of rights. The article also discusses regulatory challenges, but its primary focus is the realized harm caused by the AI-generated content.

A 'Cheongju Women's Prison Five' photo shoot?... 'AI criminal meme' controversy

2026-05-02
아이뉴스24
Why's our monitor labelling this an incident or hazard?
The event involves AI systems generating realistic videos and images of criminals, which are then widely shared and consumed as entertainment, causing harm to victims and communities by trivializing serious crimes and inflicting secondary trauma. The AI's role in creating and enabling the spread of this harmful content is direct and pivotal. The harm is realized, not just potential, as victims are experiencing significant secondary harm. Hence, this is an AI Incident under the definitions provided.