Foreign deepfake video targeting Taiwanese candidate Luo Zhizheng sparks election controversy


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Former DPP legislator Luo Zhizheng faced virally circulating explicit videos ahead of the election, which he claimed were AI-generated deepfakes produced by foreign actors to sway voters. Investigators traced the hosting IPs to locations abroad, which prevented identification of the distributors; prosecutors closed the case, finding no evidence of legal wrongdoing by the reporting outlet.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves the use of an AI system (a deepfake video) to create and distribute manipulated content falsely attributed to a person. The harm is indirect but significant, involving potential violations of privacy, defamation, and interference with electoral processes, which constitute violations of rights and harm to communities. Because the harm is plausible but the investigation was hindered by foreign IPs, and no legal charges or confirmed harm have been established, this fits the definition of an AI Hazard rather than an AI Incident. The article does not report confirmed harm caused by the AI system but highlights the credible risk and the challenges in addressing it.[AI generated]
AI principles
Accountability
Privacy & data governance
Respect of human rights
Transparency & explainability
Democracy & human autonomy
Robustness & digital security

Industries
Media, social platforms, and marketing
Government, security, and defence
Digital security

Affected stakeholders
Other
General public

Harm types
Reputational
Public interest
Human or fundamental rights
Psychological

Severity
AI hazard

Business function:
Other

AI system task:
Content generation


Articles about this incident or hazard


Explicit videos of Luo Zhizheng go viral before election amid suspected "deepfake election interference"; New Taipei prosecutors close case: overseas IPs untraceable

2024-08-22
Yahoo News (Taiwan)
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of "deepfake" technology, an AI system capable of generating realistic fake videos. The malicious use of this AI system to create and disseminate false explicit videos of a political candidate directly harmed his reputation and interfered with the election, constituting harm to communities and a violation of rights. The involvement of AI in creating these videos and their impact on the election meets the criteria for an AI Incident. Although the investigation could not publicly confirm the videos' deepfake nature due to privacy laws, the candidate and authorities treated them as deepfakes, and the reputational harm occurred regardless.

IPs in suspected Luo Zhizheng video distribution case located overseas; prosecutors close case, unable to trace source

2024-08-22
Yahoo News (Taiwan)
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (a deepfake video) to create and distribute manipulated content falsely attributed to a person. The harm is indirect but significant, involving potential violations of privacy, defamation, and interference with electoral processes, which constitute violations of rights and harm to communities. Because the harm is plausible but the investigation was hindered by foreign IPs, and no legal charges or confirmed harm have been established, this fits the definition of an AI Hazard rather than an AI Incident. The article does not report confirmed harm caused by the AI system but highlights the credible risk and the challenges in addressing it.

Luo Zhizheng explicit video case closed without charges; distributor untraceable behind overseas IPs. Are the videos genuine? Prosecutors respond

2024-08-22
Yahoo News (Taiwan)
Why's our monitor labelling this an incident or hazard?
The event describes the use of AI deepfake technology to create a potentially harmful video, which could plausibly lead to harm such as reputational damage or misinformation. However, the investigation neither confirmed the video's authenticity nor identified the perpetrators, and no legal charges were filed. Since no direct or indirect harm has been confirmed or legally established, and the case was closed without prosecution, this event represents a plausible risk scenario rather than a realized harm. It therefore qualifies as an AI Hazard, owing to the plausible future harm from deepfake misuse, but not as an AI Incident.

IPs located overseas! Luo Zhizheng video controversy closed after prosecutors fail to identify distributor - Society - Liberty Times Net

2024-08-22
Liberty Times Net
Why's our monitor labelling this an incident or hazard?
The event involves AI technology (deepfake video generation) suspected of being used for political manipulation and misinformation. However, the investigation did not identify the distributors, and no direct harm or legal violation was established. The article focuses on the potential misuse of AI-generated deepfake content for election interference, which is a plausible risk but not a confirmed cause of harm. This therefore qualifies as an AI Hazard rather than an AI Incident or Complementary Information.

IPs located overseas! Luo Zhizheng video controversy closed after prosecutors fail to identify distributor - New Taipei City - Liberty Times Net

2024-08-22
Liberty Times Net
Why's our monitor labelling this an incident or hazard?
The event involves AI technology in the form of deepfake videos, which are AI-generated synthetic media. However, the article reports that the investigation found no identifiable perpetrator and no confirmed dissemination causing harm. The videos are alleged to have been used for election interference, which is a potential harm, but since no confirmed harm or ongoing incident has been established and the case is closed, this constitutes a plausible risk rather than a realized harm. The event is therefore best classified as an AI Hazard, reflecting the plausible future harm from AI-generated deepfake videos used in political manipulation, without a confirmed incident or harm at this time.

IPs in suspected Luo Zhizheng video distribution case located overseas; prosecutors close case, unable to trace source | Society | Central News Agency (CNA)

2024-08-22
Central News Agency
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions a deepfake video, which involves AI technology for synthetic media generation. The video is alleged to have been used for political manipulation and privacy violation, both recognized harms. However, the article states that the investigation was closed because of difficulties in tracing the source and a lack of direct evidence of wrongdoing by the media outlet. There is no clear indication that the AI-generated content has yet directly caused harm, only that it could plausibly do so if such deepfake videos are used maliciously. The event therefore fits the definition of an AI Hazard rather than an AI Incident or Complementary Information.