Deepfake Video Targets Taiwanese Presidential Candidate, Prompts Investigation


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

AI-generated deepfake technology was used to create and spread a manipulated video falsely depicting Taiwanese presidential candidate Lai Ching-te endorsing political rivals. The video, widely circulated on social media, aimed to mislead voters and influence election outcomes, prompting authorities to launch a criminal investigation and warn the public about deepfake risks.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article describes a deepfake video that uses AI-based techniques to alter the speech of a presidential candidate, which is then spread on social media. This manipulation can mislead the public and influence election outcomes, constituting harm to communities and a violation of democratic rights. The involvement of AI in creating the deepfake and the resulting harm to the electoral process qualifies this as an AI Incident.[AI generated]
AI principles
Accountability, Transparency & explainability, Respect of human rights, Robustness & digital security, Safety, Democracy & human autonomy

Industries
Media, social platforms, and marketing; Government, security, and defence; Digital security

Affected stakeholders
General public, Other

Harm types
Reputational, Public interest

Severity
AI incident

Business function:
Other

AI system task:
Content generation, Organisation/recommenders

In other databases

Articles about this incident or hazard


Video of Lai Ching-te Praising KMT-TPP Cooperation; Investigation Bureau: Altered by Deepfake | The Epoch Times - Taiwan

2023-11-24
The Epoch Times - Taiwan (大紀元時報)
Why's our monitor labelling this an incident or hazard?
The article describes a deepfake video that uses AI-based techniques to alter the speech of a presidential candidate, which is then spread on social media. This manipulation can mislead the public and influence election outcomes, constituting harm to communities and a violation of democratic rights. The involvement of AI in creating the deepfake and the resulting harm to the electoral process qualifies this as an AI Incident.

No Federal Law Yet Regulates AI 'One-Click Undressing'; Victims Have No Avenue for Redress

2023-11-22
Yahoo News (Taiwan)
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly used to create deepfake pornographic content without consent, which directly harms individuals' psychological health and violates their rights. The article details realized harms such as mental health impacts and legal challenges faced by victims, fulfilling the criteria for an AI Incident. The AI system's use is central to the harm, and the lack of adequate legal protection exacerbates the impact. Therefore, this is classified as an AI Incident due to direct harm caused by AI-generated content.

Video of Lai Ching-te Responding to KMT-TPP Cooperation; Investigation Bureau: Deepfake Forgery

2023-11-24
Yahoo News (Taiwan)
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems through the use of deepfake technology to create manipulated videos. The deepfakes have been disseminated with the intent to influence voter judgment, which can directly harm the fairness of elections and misinform the public, thus meeting the criteria for harm to communities and violation of rights. The investigation and legal actions further confirm the seriousness of the incident. Therefore, this qualifies as an AI Incident due to realized harm caused by AI-generated misinformation affecting democratic processes.

Doctored Lai Ching-te Response Appears on Douyin; Investigation Bureau to Probe Foreign Forces

2023-11-24
Yahoo News (Taiwan)
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of AI deepfake technology to create a manipulated video of a political candidate, which has been spread to influence voter perception. This constitutes a direct AI Incident because the AI system's use has led to harm to communities by threatening election integrity and public trust. The malicious use of AI-generated content to distort political discourse and potentially violate election laws fits the definition of an AI Incident under violations of rights and harm to communities.

Lai Ching-te Discussing KMT-TPP Cooperation on Social Platforms? Investigation Bureau Urges Public Not to Spread Deepfake Videos

2023-11-24
Yahoo News (Taiwan)
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of AI technology (deepfake) to create manipulated video content that misrepresents a political figure, which is an AI system's use leading to misinformation and potential harm to the democratic process (harm to communities and violation of electoral fairness). Since the manipulated video is already circulating and could influence voter behavior, this constitutes realized harm. Therefore, this qualifies as an AI Incident due to the direct involvement of AI-generated deepfake content causing harm to societal trust and election integrity.

Circulating Clip Shows Lai Ching-te Calling KMT-TPP Cooperation 'the Mainstream of Taiwanese Public Opinion'; Investigation Bureau: Maliciously Altered Deepfake

2023-11-24
Yahoo News (Taiwan)
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system, namely deepfake technology, used to maliciously manipulate video and audio content. The use of this AI-generated deepfake has directly led to the dissemination of false information that could harm the democratic process and election fairness, which constitutes harm to communities and a violation of legal frameworks protecting electoral integrity. Therefore, this qualifies as an AI Incident due to realized harm from the AI system's malicious use.

Lai Ching-te Responds to KMT-TPP Cooperation; Investigation Bureau Exposes Clip as a Deepfake Video

2023-11-24
Yahoo News (Taiwan)
Why's our monitor labelling this an incident or hazard?
The event explicitly involves a deepfake video, which is an AI system-generated manipulated content. The malicious use of this AI-generated deepfake has already occurred and is causing harm by spreading false information that could influence election outcomes, which constitutes harm to communities and potentially violates democratic rights. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to harm through misinformation affecting election fairness.

First Deepfake 'Face Swap' Ahead of the 2024 Election: Fake Lai Ching-te Pushes an Investment Scam! How Will Meta Guard Against AI Disruption?

2023-11-24
Yahoo News (Taiwan)
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of AI systems (deepfake technology) to create manipulated videos of political figures, which have been disseminated online. This use of AI has directly led to harm by spreading false information that could mislead voters and disrupt the democratic process, thus harming communities. The involvement of AI in the creation and spread of these fake videos meets the criteria for an AI Incident, as the harm is realized and the AI system's role is pivotal. The article also discusses responses by major platforms to mitigate such harms, but the primary focus is on the incident of AI-generated misinformation itself.

Lai Ching-te Praises Ko and Hou as Well Suited? Prosecutors Trace Clip to a 'Deepfake Video'

2023-11-24
Yahoo News (Taiwan)
Why's our monitor labelling this an incident or hazard?
The event explicitly involves a deepfake video, which is a product of AI-based generative technology. The video misrepresents a political figure's statements, potentially influencing voter behavior and election outcomes, which is a clear harm to communities and democratic rights. The investigation and forensic analysis confirm the AI-generated nature of the video and its malicious intent. Therefore, the AI system's use has directly led to harm, meeting the criteria for an AI Incident.

Deepfake Audio-Video of Lai Ching-te Discussing 'KMT-TPP Cooperation' Circulated; Investigation Bureau Refers Case to Changhua Prosecutors to Trace the Source - Politics - Liberty Times Net

2023-11-24
Liberty Times Net
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system—deepfake technology—that was used to create a falsified video. The use of this AI system has directly led to harm by spreading misinformation that could influence voters and affect the fairness of democratic elections, which constitutes harm to communities and a violation of democratic rights. The investigation and legal actions indicate recognition of this harm. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use has directly led to significant harm.

Deepfake Video of Lai Discussing 'KMT-TPP Cooperation' Appears on Video Platforms; Changhua Prosecutors: Tracing the Source with Full Force - Politics - Liberty Times Net

2023-11-24
Liberty Times Net
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (deepfake technology) to create manipulated video content that has already been disseminated on social media platforms. The AI-generated deepfake has directly led to harm by attempting to mislead voters and interfere with the democratic election process, which constitutes harm to communities and a violation of legal protections for fair elections. Therefore, this qualifies as an AI Incident because the AI system's use has directly caused harm through misinformation and election interference.

Video of Lai Responding to KMT-TPP Cooperation; Investigation Bureau: Altered by Deepfake - Politics - Liberty Times Net

2023-11-24
Liberty Times Net
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (deepfake technology) used maliciously to create a fabricated video that misleads voters, directly impacting the democratic process and potentially violating election laws. The harm is realized as the manipulated video is circulating on social media platforms, influencing public opinion and voter behavior. Therefore, this qualifies as an AI Incident due to the direct harm caused by the AI system's use in spreading disinformation with societal and legal implications.

'KMT-TPP Cooperation Is the Mainstream of Taiwanese Public Opinion': Lai Ching-te Video Altered by Deepfake; Investigation Bureau Opens Case | UDN

2023-11-24
UDN
Why's our monitor labelling this an incident or hazard?
The article explicitly describes the use of deepfake AI technology to create a fabricated video of a political figure, which is being spread to influence voter judgment and election outcomes. This manipulation of information through AI-generated content directly harms the community by spreading misinformation and potentially undermining democratic processes. The investigation and legal actions further confirm the seriousness of the harm. Therefore, this qualifies as an AI Incident due to realized harm caused by the malicious use of an AI system (deepfake technology).

Doctored Lai Ching-te Response Appears on Douyin; Investigation Bureau to Probe Foreign Forces | UDN

2023-11-24
UDN
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (deepfake technology) used maliciously to create a fabricated video that misrepresents a political candidate's statements. This has directly led to harm by misleading the public and potentially affecting democratic processes, which falls under violations of rights and harm to communities. Therefore, this qualifies as an AI Incident.

Lai Ching-te Backs KMT-TPP Cooperation? Video Altered by Deepfake | UDN

2023-11-24
UDN
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system in the form of deepfake technology used to create manipulated video and audio content. The malicious use of this AI system has directly led to harm by spreading false information that could influence public opinion and election results, thus harming communities and potentially violating democratic rights. Therefore, this qualifies as an AI Incident due to realized harm caused by AI-generated misinformation with significant societal impact.

Circulating Video Shows Lai Ching-te Praising KMT-TPP Cooperation; Taiwan's Investigation Bureau: Deepfake Alteration - The Epoch Times

2023-11-24
The Epoch Times
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of AI deepfake technology to create a manipulated video that misrepresents a political figure's speech. This manipulation is being actively disseminated, causing misinformation that could influence voter behavior and election fairness, which constitutes harm to communities and a violation of rights. The involvement of AI in the creation of the deepfake and the resulting harm meets the criteria for an AI Incident, as the harm is realized and ongoing.

Prosecutors and Investigators Trace Deepfake Videos on Social Video Platforms, Urge Public to Stay Vigilant - Rti

2023-11-24
Rti (Radio Taiwan International)
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems in the form of deepfake technology used to create manipulated videos that misrepresent political candidates. The deepfake videos have been spread on social media platforms, misleading the public and potentially affecting election fairness, which is a harm to communities and a violation of legal protections for fair elections. The fact that authorities have opened investigations and are pursuing legal action confirms that the AI system's use has directly or indirectly led to harm. Therefore, this qualifies as an AI Incident under the framework.

Circulating Clip Shows Lai Ching-te Responding That KMT-TPP Cooperation Is 'the Mainstream of Taiwanese Public Opinion'; Investigation Bureau: Maliciously Altered Deepfake | Society | SETN.COM

2023-11-24
SET News (三立新聞)
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of deepfake AI technology to create a fabricated video of a political figure making false statements. This AI-generated content is being spread with the intent to influence voter judgment and election outcomes, which is a direct harm to the fairness of elections and the community's trust. The investigation and legal actions further confirm the recognition of harm caused by the AI system's misuse. Therefore, this event qualifies as an AI Incident due to the realized harm from malicious AI use affecting societal and legal rights.