AI-Generated Deepfake Video Fuels Misinformation After Tainan Policewoman's Death

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Following a fatal accident involving a policewoman in Tainan, AI-generated deepfake videos misrepresented the actions of the suspect, a female student, portraying her as indifferent. These manipulated videos, allegedly originating from China, spread widely online, inciting public outrage, causing reputational harm, and raising concerns about AI-driven misinformation and social disruption.[AI generated]

Why's our monitor labelling this an incident or hazard?

An AI system was explicitly involved in creating a fabricated video that misled the public about a sensitive incident, causing reputational harm and social disruption. The harm is realized: the video attracted millions of views and led to public outrage and online harassment. This fits the definition of an AI Incident because the AI system's use directly led to harm to communities and violations of rights through misinformation and emotional manipulation. Therefore, the event qualifies as an AI Incident rather than a hazard or complementary information.[AI generated]
AI principles
Respect of human rights, Transparency & explainability

Industries
Media, social platforms, and marketing

Affected stakeholders
Women, General public

Harm types
Reputational, Public interest

Severity
AI incident

AI system task
Content generation


Articles about this incident or hazard

Was the Tainan policewoman's crash video altered by AI? Tainan police: original file secured | UDN (聯合新聞網)

2026-05-07
UDN
Why's our monitor labelling this an incident or hazard?
AI involvement is reasonably inferred because the video is suspected to have been altered using AI (likely a deepfake or video-manipulation model). The AI's role lies in producing a manipulated video that could mislead viewers. However, the article does not indicate that this manipulated video has directly caused harm such as injury, rights violations, or significant community harm. The police have the original footage and are continuing their investigation, and no legal action has been taken against those spreading the altered video. Thus, the AI's use could plausibly lead to harm (misinformation, reputational damage), but no direct or indirect harm from it is confirmed. This fits the definition of an AI Hazard rather than an AI Incident or Complementary Information.

Video of female student who hit policewoman "repeatedly checking the car damage" draws 3 million views; lawyer: it was made with AI - Life

2026-05-07
China Times (中時新聞網)
Why's our monitor labelling this an incident or hazard?
An AI system was explicitly involved in creating a fabricated video that misleads the public about a sensitive incident, causing reputational harm and social disruption. The harm is realized as the video attracted millions of views and led to public outrage and online harassment. This fits the definition of an AI Incident because the AI system's use directly led to harm to communities and violations of rights through misinformation and emotional manipulation. Therefore, the event qualifies as an AI Incident rather than a hazard or complementary information.

Video of female student who hit policewoman "repeatedly checking the car damage" draws 3 million views; lawyer: it was made with AI - Current Affairs

2026-05-07
China Times (中時新聞網)
Why's our monitor labelling this an incident or hazard?
An AI system was explicitly involved in generating a fake video that misrepresented a real tragic event, leading to widespread misinformation and public emotional harm. The AI-generated content was used maliciously to distort facts and manipulate public opinion, which is a violation of rights and harms the community. The harm is realized and significant, not merely potential. Hence, this is classified as an AI Incident rather than a hazard or complementary information.

Policewoman's crash video altered by AI! Prosecutors and police: investigation unaffected - Society

2026-05-07
China Times (中時新聞網)
Why's our monitor labelling this an incident or hazard?
An AI system was used to create a manipulated video (deepfake) that misrepresented facts about a serious accident, leading to public misinformation and reputational harm. This manipulation directly harms the individuals involved and the community's trust in information, fitting the definition of harm to communities and violation of rights. The AI's role is pivotal in causing this harm. Although the investigation continues unaffected, the AI-generated misinformation has already caused harm, qualifying this as an AI Incident rather than a hazard or complementary information.

Exploiting the policewoman's crash tragedy: AI-altered video of the driver draws "hate" traffic - Tainan - Liberty Times Net

2026-05-06
Liberty Times Net
Why's our monitor labelling this an incident or hazard?
An AI system was explicitly involved in generating a manipulated video (deepfake) that falsely portrays the suspect, leading to reputational harm and social unrest. This meets the criteria for an AI Incident because the AI's use directly led to harm to the community (spread of misinformation and social hostility) and potential violation of the suspect's rights. The event involves the use and misuse of AI-generated content causing realized harm, not just a potential risk, so it is classified as an AI Incident.

Tainan policewoman dies in crash; "female student checking the car damage" clip reportedly an AI fake; netizens say the source is China - Society - Liberty Times Net

2026-05-07
Liberty Times Net
Why's our monitor labelling this an incident or hazard?
The event involves an AI system used to generate a fake video, which is a form of AI misuse. However, the article does not describe a direct or realized harm such as injury, rights violation, or significant community harm caused by the AI-generated video itself. Instead, it highlights the spread of misinformation and the potential social consequences, with a focus on the source and intent behind the AI-generated content. This fits the definition of Complementary Information, as it informs about societal and governance responses to AI misuse and the broader implications of AI-generated misinformation, rather than documenting a specific AI Incident or AI Hazard.

Chinese fake accounts exposed for posting AI video that led to online abuse of the female student; lawyer warns: a trial run in cognitive warfare - Politics - Liberty Times Net

2026-05-07
Liberty Times Net
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI-generated fake video that was spread by fake accounts, leading to online harassment of the female student. The AI system's role in fabricating and distributing the video is central to the harm caused, which includes emotional and reputational damage and social manipulation. This fits the definition of an AI Incident because the AI system's use has directly led to harm to a person and harm to the community through misinformation and online abuse. The involvement is through the use of AI to create deceptive content that manipulates public perception and incites harassment.

Video in Tainan policewoman case suspected of AI alteration; police have secured the original file | Society | Central News Agency (CNA)

2026-05-07
Central News Agency
Why's our monitor labelling this an incident or hazard?
The event involves an AI system used to alter video content, which is a form of AI-generated misinformation. The police have secured the original footage, and the manipulated video is circulating online, suggesting a risk of harm through misinformation or reputational damage. Since no direct harm (such as physical injury or legal rights violation) caused by the AI-manipulated video is reported, but the potential for harm through misinformation and social disruption exists, this qualifies as an AI Hazard rather than an AI Incident. The article focuses on the potential misleading impact of AI-manipulated content rather than a realized harm event.

Cognitive warfare? Female student who killed the policewoman condemned for "repeatedly checking the car damage"! Lawyer: it's an AI fake video | Society | Newtalk

2026-05-07
Newtalk (新頭殼)
Why's our monitor labelling this an incident or hazard?
An AI system was explicitly involved in generating a fake video (deepfake) that misrepresented the actions of a person involved in a fatal accident. The fake video led to public outrage and misinformation, which is a harm to communities and individuals' reputations. The AI-generated content directly caused this harm by manipulating public perception and spreading false information. Hence, this event meets the criteria for an AI Incident due to realized harm caused by AI misuse.

Against the tide! Is the "cold-blooded female student checking her car" clip after the fatal crash an AI fake? Lawyer: China did it again! | Society | SETN.COM

2026-05-07
SETN (三立新聞)
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI technology to create a fabricated video that misrepresents a person's behavior in a sensitive incident, leading to social harm by manipulating public opinion and emotions. The AI system's use directly led to the dissemination of false information, which is a violation of rights related to truthful information and harms community trust. The article explicitly states the video is AI-generated and is used as a tool for cognitive warfare, confirming AI involvement and realized harm. Hence, it meets the criteria for an AI Incident.

Female student kills policewoman in crash! "Cold-bloodedly checking the car" video doctored by AI; Tainan police respond | Society | SETN.COM

2026-05-07
SETN (三立新聞)
Why's our monitor labelling this an incident or hazard?
The AI system is involved in the creation of a deepfake video, which is a misuse of AI technology leading to misinformation and social harm. The harm is indirect, through the spread of false information that inflames public emotions and misleads the community. However, the article's main focus is on revealing this manipulation and the official response, not on the incident of harm itself or a new hazard. The police and prosecutors are responding to the situation, indicating ongoing societal and governance responses. Thus, the event fits the definition of Complementary Information rather than an AI Incident or AI Hazard.

After the death of the bride-to-be officer... AI fakes female student "cold-bloodedly checking her car"; lawyer flashes back 13 years: laments that nothing has changed | Society | SETN.COM

2026-05-08
SETN (三立新聞)
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI to create fake videos and false narratives that have been disseminated to the public, directly causing reputational harm and emotional distress to the individual involved. The AI system's role is pivotal in fabricating and spreading these falsehoods. This meets the definition of an AI Incident because the AI's use has directly led to harm to communities and violation of rights through misinformation and defamation. The article does not merely discuss potential or future harm, nor is it a general AI news update; it reports on actual harm caused by AI misuse.

Policewoman dies in crash | "Female student checking the car damage" clip revealed as AI-altered! The source is reportedly China | NextApple News (壹蘋新聞網)

2026-05-07
NextApple News (壹蘋新聞網)
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system used to create a manipulated video (deepfake) that distorts reality and misleads the public. The AI's role is central in fabricating the video content. Although the article does not report actual harm occurring yet, it warns about the potential for such AI-generated misinformation to cause social and cognitive harm, described as "cognitive warfare." This fits the definition of an AI Hazard, where the AI system's use could plausibly lead to harm (misinformation, social disruption). There is no indication of direct injury, legal violation, or property harm at this stage, so it is not an AI Incident. The focus is on the potential future harm from the AI-manipulated content, not on a response or update to a past incident, so it is not Complementary Information. Therefore, the classification is AI Hazard.

Policewoman run over and killed | AI-forged video of female student checking the car damage; Tainan police respond | NextApple News (壹蘋新聞網)

2026-05-07
NextApple News (壹蘋新聞網)
Why's our monitor labelling this an incident or hazard?
The presence of an AI system is reasonably inferred because the video is described as AI-manipulated or AI-generated (a deepfake). The event concerns the use of AI to create falsified content that could mislead viewers or harm reputations, which fits the definition of an AI Hazard: it could plausibly lead to harm such as misinformation, reputational damage, or interference with legal processes. Since no actual harm is confirmed or reported, and the authorities are responding to the potential threat, this is best classified as an AI Hazard rather than an AI Incident. It is not Complementary Information because the main focus is the AI-manipulated video and its potential impact, not responses or updates to a prior incident. It is not Unrelated because AI involvement is explicit and relevant.

Footage reportedly doctored by AI! Tainan female student rear-ends policewoman; post-crash video suspected of alteration; police respond

2026-05-07
mnews.tw
Why's our monitor labelling this an incident or hazard?
The presence of AI is reasonably inferred from the mention of AI-altered video footage (deepfake) that manipulates the perception of the involved parties. The harm is realized as the manipulated video spreads misinformation, potentially damaging reputations and influencing public opinion and legal processes, which is harm to communities and a violation of rights. The police and prosecutors are involved in investigating the matter, indicating the seriousness of the incident. Since the AI system's use has directly led to harm through misinformation and social disruption, this is classified as an AI Incident rather than a hazard or complementary information.

Policewoman run over and killed | AI forges female student repeatedly checking the car damage; lawyer Bamao: a flashback to the 2013 Mama Mouth case | NextApple News (壹蘋新聞網)

2026-05-07
NextApple News (壹蘋新聞網)
Why's our monitor labelling this an incident or hazard?
The event involves AI-generated manipulated videos (deepfakes) that have been used to spread misinformation and defame an individual. While the AI system's misuse has caused reputational harm and social misinformation, the article primarily reports on the existence and impact of these AI-generated false videos and rumors rather than describing a new AI Incident causing direct or indirect harm such as physical injury or legal rights violations. The main focus is on the societal response and clarification of misinformation, which fits the definition of Complementary Information rather than a new AI Incident or AI Hazard.