AI-Generated Fake Social Media Accounts Spread Political Misinformation in South Korea

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

AI-generated images of young women were used to create fake social media accounts in South Korea, spreading political messages and deceiving users. The accounts, operated by men, used deepfake technology to manipulate public perception, leading to misinformation, violations of individual rights, and an erosion of social trust.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly states that the videos were created using AI technology to fabricate a person's appearance and speech, which were then widely shared on social media, deceiving many people. This manipulation of public perception through AI-generated fake content is a direct cause of harm to communities by spreading misinformation and undermining democratic discourse. Therefore, this event qualifies as an AI Incident due to realized harm caused by AI misuse.[AI generated]
AI principles
Democracy & human autonomy
Transparency & explainability

Industries
Media, social platforms, and marketing

Affected stakeholders
General public

Harm types
Public interest
Human or fundamental rights

Severity
AI incident

AI system task
Content generation


Articles about this incident or hazard

"Yoon Again"... Who is the woman with idol-level looks at the Taegukgi rally? - 매일경제

2026-05-08
mk.co.kr
Why's our monitor labelling this an incident or hazard?
The article explicitly states that the videos were created using AI technology to fabricate a person's appearance and speech, which were then widely shared on social media, deceiving many people. This manipulation of public perception through AI-generated fake content is a direct cause of harm to communities by spreading misinformation and undermining democratic discourse. Therefore, this event qualifies as an AI Incident due to realized harm caused by AI misuse.
The beautiful 20-something woman in a short skirt shouting "Yoon Again"... turns out to be AI

2026-05-08
문화일보
Why's our monitor labelling this an incident or hazard?
An AI system was used to generate realistic images of a young woman, which were then used to create deceptive social media accounts spreading political messages. This manipulation has caused harm by misleading users and facilitating the spread of extremist political content, impacting communities and public discourse. Therefore, this qualifies as an AI Incident due to realized harm caused by the AI system's use.
The beautiful young woman shouting "Yoon Again!"... a startling discovery

2026-05-08
Wow TV
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI deepfake technology used to create fake social media content that spreads political messages, deceiving viewers and causing harm to communities by distorting perceptions and political discourse. This fits the definition of an AI Incident because the AI system's use has directly led to harm to communities through misinformation and manipulation, which is a significant and clearly articulated harm.
The beautiful woman shouting "Yoon Again, down with communism!"... a major twist

2026-05-09
Wow TV
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI deepfake technology to create a fake persona on social media that spread political messages, deceiving users into believing it was a real person. This deception constitutes harm to communities and a violation of rights, fulfilling the criteria for an AI Incident. The AI system's use directly led to this harm through misinformation and manipulation, not merely a potential risk. Therefore, this event qualifies as an AI Incident.
The identity of the women who suddenly appeared on my social media shouting 'Yoon Again': where the far right meets high tech

2026-05-07
HuffPost Korea
Why's our monitor labelling this an incident or hazard?
The event involves AI systems generating fake images and creating deceptive social media accounts that actively spread political messages, harming communities through misinformation and manipulation. The AI system's use in this context directly led to the dissemination of misleading political content, fulfilling the criteria for an AI Incident under harm to communities. The article reports that these AI-generated accounts have been actively used, indicating realized harm rather than merely potential risk.
"Yoon Again, down with communism today too": the truth behind the 20-something conservative woman's social media... it was AI

2026-05-07
파이낸셜뉴스
Why's our monitor labelling this an incident or hazard?
The event involves AI systems generating realistic fake images used to create deceptive social media accounts spreading political messages, which is a direct use of AI leading to harm. The harms include misinformation dissemination affecting communities and violation of individual rights through unauthorized use of images. The AI system's role is pivotal in enabling these harms, meeting the criteria for an AI Incident rather than a hazard or complementary information.
"President Yoon, I love you"... the beautiful 20-something woman who shouted 'Yoon Again', as it turns out

2026-05-08
아시아경제
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (deepfake AI) to generate manipulated images and content that have been actively used to mislead social media users, constituting a violation of rights and harm to communities through misinformation and propaganda. The harm is realized, as the AI-generated content has already influenced public perception and social trust. Therefore, this qualifies as an AI Incident due to the direct role of AI in causing harm to communities and undermining social trust.
The beautiful woman shouting "Yoon Again" and "down with communism" turned out to be a man

2026-05-08
아이뉴스24
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly through the use of AI deepfake technology to create fake images that were used to deceive social media users with political messages. The harm is realized as many people were misled, constituting harm to communities and a violation of rights related to information and identity. The AI system's use directly led to this harm, fulfilling the criteria for an AI Incident rather than a hazard or complementary information.