AI-Generated Fake Wolf Photo Disrupts Emergency Response in Daejeon

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

A man in Daejeon, South Korea, used AI to create and distribute a fake photo of an escaped zoo wolf, misleading authorities and the public. The image caused emergency services to alter search operations and issue disaster alerts, and it delayed the wolf's capture, highlighting the real-world harm of AI-generated misinformation.[AI generated]

Why's our monitor labelling this an incident or hazard?

An AI system was explicitly used to create a manipulated image that was disseminated, leading to significant disruption of emergency management and public safety operations. The harm includes interference with critical infrastructure management (emergency response and disaster alert systems) and potential risk to public safety. Therefore, this qualifies as an AI Incident because the AI system's use directly caused harm and disruption.[AI generated]
AI principles
Safety; Transparency & explainability

Industries
Government, security, and defence

Affected stakeholders
General public; Government

Harm types
Public interest; Economic/Property; Psychological

Severity
AI incident

AI system task
Content generation


Articles about this incident or hazard

Distributor of the 'fake Neukgu photo' that fooled everyone arrested: "I did it for fun"

2026-04-24
국제신문
Why's our monitor labelling this an incident or hazard?
An AI system was explicitly used to create a manipulated image that was disseminated, leading to significant disruption of emergency management and public safety operations. The harm includes interference with critical infrastructure management (emergency response and disaster alert systems) and potential risk to public safety. Therefore, this qualifies as an AI Incident because the AI system's use directly caused harm and disruption.

"재미로 그랬다" 오월드 늑대 '늑구' 가짜 사진 유포한 남성

2026-04-24
국제뉴스
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system to generate manipulated images that were falsely presented as real, leading to significant disruption of critical infrastructure management (emergency response and search operations). The AI system's use directly caused harm by misleading authorities and the public, fulfilling the criteria for an AI Incident under the disruption of critical infrastructure category.

The photo that cost the 'golden time' in the Neukgu search... distributor in his 40s says "I did it for fun"

2026-04-24
mk.co.kr
Why's our monitor labelling this an incident or hazard?
An AI system was explicitly used to create a fake image that was spread online, causing confusion and misdirecting law enforcement and emergency responders during a critical search operation. This misuse of AI directly led to disruption of the management and operation of critical infrastructure (emergency response and public safety), fulfilling the criteria for an AI Incident. The harm was realized, not just potential, as the search efforts were hindered and resources were misallocated, delaying the capture of the escaped wolf and impacting citizen safety.

"재미로 그랬다" AI 가짜 '늑구 사진' 유포자 잡았다

2026-04-24
경향신문
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of a generative AI system to create manipulated images that were disseminated, causing real-world harm by disrupting emergency response operations and public safety efforts. This constitutes a violation of public order and obstructs official duties, which falls under harm to communities and disruption of critical infrastructure management. Since the harm has already occurred and is directly linked to the AI system's use, this qualifies as an AI Incident.

Distributor of fake AI Neukgu photo arrested... "Riot police and SWAT units sent out for nothing"; charged with obstructing the O-World search

2026-04-24
문화일보
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of a generative AI system to produce a fake image that was disseminated online, leading to real-world harm by obstructing police and emergency operations. This misuse of AI directly contributed to harm to community safety and disruption of critical public safety infrastructure, fulfilling the criteria for an AI Incident. The harm is realized, not just potential, as the fake image caused operational delays and resource misallocation during a critical public safety event.

[Breaking] Distributor of AI-manipulated Neukgu photo arrested: "Search time wasted"

2026-04-24
문화일보
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions the use of an AI program to create and distribute false images that interfered with official police operations, causing harm by delaying the capture of the escaped wolf and misdirecting law enforcement resources. This disruption of public safety operations and obstruction of official duties constitutes harm to communities and public order, fitting the definition of an AI Incident. The AI system's use was central to the harm caused, fulfilling the criteria for direct or indirect harm due to AI system use.

"재미로 그랬다"...수색에 혼선 부른 '가짜 늑구 사진' 유포자 검거 - 전국 | 기사 - 더팩트

2026-04-24
더팩트
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of a generative AI system to produce manipulated content that was disseminated and believed to be real, causing disruption to critical public safety operations. The harm includes obstruction of official duties (police and fire departments), misdirection of search efforts, and delayed capture of the escaped wolf, which could have endangered public safety. This meets the criteria for an AI Incident as the AI system's use directly led to harm through interference with public safety and emergency response.

"늑대가 사거리 돌아다녀"⋯AI로 가짜 늑구 사진 생성해 유포한 40대 체포

2026-04-24
아이뉴스24
Why's our monitor labelling this an incident or hazard?
An AI system was explicitly used to create fake images that were disseminated to interfere with official search and rescue operations, which is a disruption of critical infrastructure management. The harm is realized as the false images misled authorities and delayed effective response, constituting an AI Incident under the definition of harm to critical infrastructure management caused directly or indirectly by AI use.

"재미삼아" 가짜 늑구 사진 유포한 40대 男 결국...

2026-04-24
데일리안
Why's our monitor labelling this an incident or hazard?
An AI system was explicitly used to generate false images that directly interfered with public safety operations, leading to harm by obstructing official duties and delaying critical response times. This constitutes an AI Incident because the AI-generated content caused real-world harm by misleading authorities and the public, thus fulfilling the criteria of harm to communities and disruption of critical infrastructure management.

Creator and distributor of the AI 'fake Neukgu photo' that fooled everyone arrested... "I did it for fun"

2026-04-24
연합뉴스TV
Why's our monitor labelling this an incident or hazard?
An AI system was explicitly used to generate a fake photo that directly interfered with emergency response efforts, leading to misallocation of resources and public safety risks. This constitutes harm to community safety and disruption of critical infrastructure management (emergency services). Therefore, this event qualifies as an AI Incident due to the direct harm caused by the AI-generated manipulated content.

Man in his 40s who created the 'fake Neukgu' photo arrested... "I did it for fun"

2026-04-24
채널A
Why's our monitor labelling this an incident or hazard?
An AI system (ChatGPT) was explicitly used to generate false images that were then spread online, causing real-world disruption to critical infrastructure management (police and emergency response operations). This meets the criteria for an AI Incident because the AI system's use directly led to harm in the form of disruption of critical infrastructure management and public safety efforts. The harm is realized and significant, not merely potential or speculative.

"늑대가 대전 도로에 있네요"...'늑구' AI 조작사진 유포한 40대 입건

2026-04-24
뉴스핌
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of a generative AI system to create manipulated images that were deliberately disseminated to mislead authorities during a critical public safety operation. This misuse of AI directly caused harm by delaying the capture of the escaped wolf, wasting emergency resources, and potentially endangering the community. The harm is realized and significant, fitting the definition of an AI Incident due to the AI system's role in causing disruption and harm to community safety and public order.

Photo of Neukgu roaming the streets was 'an AI-made fake'... police arrest distributor

2026-04-24
서울신문
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions the use of a generative AI system to produce a fabricated photo that was disseminated online, misleading both the public and emergency responders. This misuse of AI directly caused disruption to critical infrastructure management (emergency response teams) and endangered public safety by delaying the capture of the escaped wolf. Therefore, it meets the criteria for an AI Incident due to the realized harm caused by the AI system's malicious use.

Man in his 40s arrested for spreading 'AI-manipulated' Neukgu photo... "just for fun"

2026-04-24
서울경제
Why's our monitor labelling this an incident or hazard?
An AI system was explicitly used to generate fake images that were disseminated, which directly interfered with police and fire department operations, causing harm through disruption of emergency management and public safety. The harm is realized and directly linked to the AI-generated content. Therefore, this qualifies as an AI Incident.

"늑구가 도로에 있네요?"...AI로 만든 가짜사진 유포한 40대 검거 - 동행미디어 시대

2026-04-24
동행미디어 시대
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of an AI system to generate false content that was spread and caused real-world disruption to emergency services and public safety operations. The AI-generated fake photo misled authorities and the public, leading to misdirected deployment of police and emergency personnel and issuance of false safety warnings. This disruption to critical infrastructure management and operation meets the criteria for an AI Incident, as the AI system's use directly led to harm (disruption).

The 'fake Neukgu photo' that fooled both Daejeon City and the fire department... distributor in his 40s says it was "for fun"

2026-04-24
시사저널
Why's our monitor labelling this an incident or hazard?
The event explicitly states that an AI program was used to create a manipulated photo that was then spread, leading to false disaster alerts and misallocation of emergency resources. This misuse of AI directly disrupted the management and operation of critical infrastructure (emergency response and public safety systems). The harm is realized and significant, as it delayed the capture of the escaped wolf and misled public authorities and citizens. Therefore, this qualifies as an AI Incident due to the direct harm caused by the AI system's malicious use.

Distributor of the 'AI Neukgu photo' that wasted search time arrested... "I did it for fun"

2026-04-24
국민일보
Why's our monitor labelling this an incident or hazard?
An AI system was explicitly used to create manipulated images that were disseminated, causing confusion and misallocation of emergency resources during a wildlife escape incident. The harm is realized and direct, as the AI-generated false images led to delays and inefficiencies in the official search operation, impacting public safety. Therefore, this qualifies as an AI Incident due to the direct harm caused by the AI system's misuse.

Man in his 40s arrested for obstructing the Neukgu search with a 'fake AI photo'... "I did it for fun"

2026-04-24
Why's our monitor labelling this an incident or hazard?
An AI system was explicitly used to generate a fake image that was disseminated, leading to disruption of critical infrastructure management (emergency search operations) and public safety. The harm is realized as the authorities were misled, search efforts were diverted, and public safety communications were affected. This fits the definition of an AI Incident because the AI system's use directly led to harm in the form of disruption and potential risk to the community.

"재미로 그랬는데"...AI로 '가짜 늑구' 사진 유포한 40대 검거

2026-04-24
아시아경제
Why's our monitor labelling this an incident or hazard?
An AI system was explicitly used to create manipulated images that were then disseminated, misleading authorities and causing them to alter their search operations unnecessarily. This misuse of AI led to a disruption of critical infrastructure management (emergency response and public safety), which fits the definition of an AI Incident under harm category (b). The harm is realized, not just potential, as the authorities wasted time and resources based on false AI-generated content, and public safety was compromised. Therefore, this event is classified as an AI Incident.

Neukgu: South Korea police arrest man over AI image of runaway wolf

2026-04-24
BBC
Why's our monitor labelling this an incident or hazard?
An AI system was explicitly used to generate a fake image that misled authorities during an active search for a runaway wolf. The AI-generated content caused authorities to relocate their search operation unnecessarily and issue emergency alerts, which disrupted government work. This disruption of government operations qualifies as harm under the definition of an AI Incident. The event involves the use and misuse of an AI system leading directly to harm, meeting the criteria for an AI Incident rather than a hazard or complementary information.

Neukgu: South Korea police arrest man over AI image of runaway wolf

2026-04-24
BBC
Why's our monitor labelling this an incident or hazard?
An AI system was used to generate a false image that was disseminated and used by authorities, leading to a disruption of government work and public communication. The harm is realized as the false AI-generated image caused deception and interference with official search efforts, which fits the definition of an AI Incident due to the direct harm caused by the AI system's use (misuse) in disrupting government operations and misleading the public.

"For fun···" Person who spread AI-manipulated photo that wasted time in the search for Neuggu arrested

2026-04-24
경향신문
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of a generative AI system to create manipulated content that was disseminated, causing real-world harm. The harm includes disruption of emergency management operations and public safety risks due to the false information. The AI system's use was central to the incident, as the manipulated photo was the cause of the confusion and resource misallocation. Therefore, this qualifies as an AI Incident under the definitions provided, as the AI system's use directly led to significant harm.

South Korea arrests man for spreading AI-generated image of escaped wolf

2026-04-24
The Straits Times
Why's our monitor labelling this an incident or hazard?
An AI system was used to create a fabricated image of an escaped wolf, which was widely shared and believed, causing a nine-day delay in the wolf's capture. This led to significant disruption of critical infrastructure management (emergency services and public safety operations) and harm to the community (school closure and public alarm). The AI-generated content was central to the harm caused, fulfilling the criteria for an AI Incident.

South Korea arrests man for spreading AI-generated image of escaped wolf

2026-04-24
CNA
Why's our monitor labelling this an incident or hazard?
An AI system was used to create a fabricated image that was widely shared, misleading authorities and the public. This misuse of AI directly caused disruption to the management and operation of critical infrastructure (emergency services) and harm to the community by delaying the wolf's capture and causing school closure. Therefore, this qualifies as an AI Incident due to the realized harm caused by the AI-generated content.

Man faces 5 years in prison for using AI to fake sighting of runaway wolf

2026-04-24
Ars Technica
Why's our monitor labelling this an incident or hazard?
An AI system was explicitly used to create a fake image that misled police and emergency responders, causing them to divert resources and issue public warnings unnecessarily. This misuse of AI directly disrupted the management and operation of a critical public safety effort, which fits the definition of harm under AI Incident category (b) - disruption of critical infrastructure management and operation. The harm is realized, not just potential, and the AI system's role is pivotal in causing the incident. Hence, the event is classified as an AI Incident.

Man Arrested in South Korea Over AI-Generated Wolf Photo

2026-04-24
Newser
Why's our monitor labelling this an incident or hazard?
An AI system was used to create a doctored image that directly interfered with police search efforts, leading to a misallocation of resources and disruption of critical infrastructure management (public safety operations). The AI-generated content caused tangible operational harm, meeting the criteria for an AI Incident.

South Korea Arrests Man for a Fake AI Wolf Photo That Raised Alarms

2026-04-24
Decrypt
Why's our monitor labelling this an incident or hazard?
An AI system was used to create a fabricated image that deceived officials and the public, leading to a misallocation of emergency resources and delaying the capture of a dangerous escaped animal. The harm is direct and significant, involving disruption of critical infrastructure (emergency response) and potential risk to public safety. The AI system's role is pivotal as the image's AI-generated nature was central to the deception and subsequent harm. Hence, this is classified as an AI Incident.

S. Korea arrests man for spreading AI-generated image of escaped wolf

2026-04-24
The Peninsula
Why's our monitor labelling this an incident or hazard?
An AI system was used to create a fabricated image that misled authorities and the public, resulting in a nine-day delay in capturing the escaped wolf. This caused significant operational disruption and public safety concerns, fulfilling the criteria for harm to critical infrastructure management and communities. Therefore, this qualifies as an AI Incident due to the direct and significant harm caused by the AI-generated content.

South Korean Man Might Get Prison Time for Posting AI Wolf Picture

2026-04-25
Gizmodo
Why's our monitor labelling this an incident or hazard?
An AI system was used to create a manipulated image of the escaped wolf, which misled authorities and delayed the capture by up to nine days. This delay caused significant disruption to emergency services and public safety efforts, fulfilling the criteria for harm to communities and disruption of critical infrastructure management. The AI-generated image's role was pivotal in this harm, making it an AI Incident rather than a hazard or complementary information. The event is not unrelated as the AI system's use directly contributed to the harm described.

Man nabbed for spreading AI image of escaped wolf

2026-04-26
The Star
Why's our monitor labelling this an incident or hazard?
An AI system was explicitly used to create a fabricated image that misled authorities and the public, causing a nine-day delay in capturing an escaped wolf. This delay led to tangible harm including school closure and extensive deployment of emergency personnel, constituting disruption of critical infrastructure and harm to the community. Therefore, this qualifies as an AI Incident due to the direct and significant harm caused by the AI-generated content.

South Korean Man Arrested For Spreading AI Generated Image Of 'Escaped' Wolf & Igniting Fear Among Citizens

2026-04-25
Free Press Journal
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system used to generate a fake image that was spread widely, causing real-world harm by misleading authorities and delaying emergency response. The harm is realized and significant, including disruption of critical public safety operations and emergency services. The AI-generated content was central to the incident, and the malicious use of AI directly caused the harm. Therefore, this qualifies as an AI Incident.

South Korea arrests man for spreading AI-generated image of escaped wolf

2026-04-26
VnExpress International
Why's our monitor labelling this an incident or hazard?
An AI system was used to create a fabricated image that was spread online, misleading authorities and the public. This misuse of generative AI directly caused harm by delaying the capture of the escaped wolf and diverting emergency personnel from their primary duties, constituting disruption of critical infrastructure management. The event meets the criteria for an AI Incident because the AI system's use directly led to significant harm and disruption.

South Korea: Man arrested over AI image of wolf being sought by zoo

2026-04-24
ΣΚΑΪ
Why's our monitor labelling this an incident or hazard?
An AI system was explicitly involved in creating a false image that was widely shared, which directly delayed the capture of a dangerous escaped animal. This delay caused serious disruption to public safety operations and community safety, fulfilling the criteria for harm to communities and disruption of critical infrastructure management. Therefore, this qualifies as an AI Incident because the AI system's use directly led to significant harm and disruption.

South Korea: The fake wolf image that caused "mayhem" for the police

2026-04-24
NEWS 24/7
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system generating a fake image that was widely shared and caused a delay in a critical public safety operation. This delay disrupted the management and operation of emergency services and endangered public safety, which fits the definition of an AI Incident under harm category (d) harm to communities or (b) disruption of critical infrastructure operations. The AI system's use directly led to these harms, so this is classified as an AI Incident.

South Korea: Man arrested for sharing an AI photo of a wolf that had escaped from a zoo

2026-04-24
The TOC
Why's our monitor labelling this an incident or hazard?
An AI system was used to create a false image that was widely shared, leading to a delay in capturing a dangerous escaped animal. This delay caused disruption to critical infrastructure (emergency services) and public safety, which qualifies as harm under the framework. The AI system's use directly contributed to this harm, making this an AI Incident rather than a hazard or complementary information.

South Korea: Man arrested for sharing an AI photo of a wolf that had escaped from a zoo

2026-04-24
enikos.gr
Why's our monitor labelling this an incident or hazard?
The AI system was used to create and spread a false image, which directly caused delays in the wolf's capture and disrupted the operations of emergency services, constituting harm to public safety and critical infrastructure management. The event meets the criteria for an AI Incident because the AI-generated content directly led to significant harm and disruption.

South Korea: Man arrested for sharing a fake AI photo online

2026-04-24
CNN.gr
Why's our monitor labelling this an incident or hazard?
An AI system was explicitly used to create a fake image (AI-generated content). The dissemination of this image directly led to harm by delaying the capture of a dangerous animal, causing disruption to emergency services and public safety operations, and closure of a school. This fits the definition of an AI Incident because the AI system's use directly led to harm to communities and disruption of critical infrastructure management. The event is not merely a potential hazard or complementary information but a realized incident with clear harm caused by the AI-generated content.

S. Korea: Man arrested over fake AI photo of zoo wolf

2026-04-24
Business Daily
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system used to create a deceptive image that misled authorities and the public, resulting in a delay in capturing the escaped wolf. This delay caused significant disruption to critical infrastructure management (public safety and emergency response) and harm to the community through the temporary school closure and resource diversion. Therefore, the AI system's use directly led to harm as defined under AI Incident criteria, specifically disruption of critical infrastructure and harm to communities. The involvement is not merely potential but realized, and the harm is clearly articulated.

South Korea: Man arrested for sharing a fake "AI"-produced wolf photo

2026-04-24
ΑΘΗΝΑ 9,84
Why's our monitor labelling this an incident or hazard?
An AI system was explicitly used to create a fake image that was shared online, which directly caused a delay in the capture of a dangerous animal. This delay disrupted the management and operation of critical infrastructure (emergency services) and harmed the community by causing safety concerns and school closure. The AI-generated content was pivotal in causing these harms, meeting the criteria for an AI Incident.

South Korea: Man arrested for sharing an AI photo of a wolf that had escaped from a zoo

2026-04-24
news.makedonias.gr
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI-generated image being shared and the subsequent arrest of the individual responsible. While AI was used to create the fake photo, there is no indication that the image caused harm such as public panic, injury, or rights violations. The event is about the legal and societal response to the misuse of AI-generated content. This fits the definition of Complementary Information, which includes governance responses and updates related to AI incidents but does not itself describe a new incident or hazard causing harm.

South Korea: Man arrested for sharing a fake AI-produced photo of a wolf

2026-04-25
athens24.gr
Why's our monitor labelling this an incident or hazard?
An AI system was explicitly involved in generating the fake photo, which was then widely disseminated, causing misinformation. This misinformation constitutes harm to communities, fulfilling the criteria for an AI Incident. The event describes realized harm (misinformation spread) directly linked to the AI-generated content, not just a potential risk or a complementary update.