Philippine Officials Spread AI-Generated Deepfake Video, Eroding Public Trust


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Philippine officials, including Senator Ronald dela Rosa and Davao City Mayor Sebastian Duterte, shared an AI-generated deepfake video featuring fabricated student interviews about Vice President Sara Duterte's impeachment. The incident drew public concern and official condemnation, highlighting how the spread of AI-generated disinformation by leaders undermines public trust.[AI generated]

Why's our monitor labelling this an incident or hazard?

An AI system was used to generate a fake video that was shared by government officials, leading to the spread of disinformation and erosion of public trust. The harm to communities through misinformation is realized and directly linked to the AI-generated content. The officials' sharing of the AI-generated video without acknowledging its falsehood contributes to the harm. This fits the definition of an AI Incident because the AI system's use directly led to harm (disinformation and erosion of trust).[AI generated]
AI principles
Accountability
Transparency & explainability
Robustness & digital security
Democracy & human autonomy
Safety
Respect of human rights
Human wellbeing

Industries
Government, security, and defence
Media, social platforms, and marketing
Digital security

Affected stakeholders
General public
Government

Harm types
Public interest
Reputational

Severity
AI incident

AI system task
Content generation

In other databases

Articles about this incident or hazard


Claire Castro: Gov't officials' sharing of AI videos erodes public trust

2025-06-16
Inquirer.net
Why's our monitor labelling this an incident or hazard?
An AI system was used to generate a fake video that was shared by government officials, leading to the spread of disinformation and erosion of public trust. The harm to communities through misinformation is realized and directly linked to the AI-generated content. The officials' sharing of the AI-generated video without acknowledging its falsehood contributes to the harm. This fits the definition of an AI Incident because the AI system's use directly led to harm (disinformation and erosion of trust).

Palace calls out Dela Rosa, Baste Duterte for sharing 'fake news'

2025-06-16
Inquirer.net
Why's our monitor labelling this an incident or hazard?
An AI-generated video was shared by public officials, which is a form of AI-generated content leading to misinformation. The sharing of this content by officials can erode public trust and cause harm to communities through disinformation. However, the article focuses on the criticism and calls for responsibility rather than reporting actual realized harm or consequences. Therefore, this event represents a plausible risk of harm due to AI-generated disinformation but does not document a concrete incident of harm yet. Hence, it qualifies as an AI Hazard rather than an AI Incident or Complementary Information.

Claire Castro: Sharing fake AI content damages public trust in officials

2025-06-16
CDN Digital
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions AI-generated content being shared by officials, which is disinformation and fake news. This has caused harm by eroding public trust, a form of harm to communities. The AI system's role in generating the fake video is pivotal to the incident. Therefore, this qualifies as an AI Incident due to realized harm from the use of AI-generated disinformation.

Palace hits AI-generated 'fake news'

2025-06-16
Philstar.com
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated content (an AI-generated video) that was shared by officials, leading to the spread of false information and public disinformation. This constitutes a violation of trust and can be seen as harm to communities and a breach of obligations under applicable law regarding truthful communication. The AI system's use in generating fake news that is disseminated by officials directly led to this harm. Therefore, this qualifies as an AI Incident. The article also includes discussion of governance responses and potential penalties, but the primary focus is on the realized harm caused by the AI-generated disinformation.

Palace calls out dela Rosa on AI fake news

2025-06-16
The Manila Times
Why's our monitor labelling this an incident or hazard?
An AI system was involved as the video was explicitly described as AI-generated. The use of this AI-generated content by officials to spread false narratives directly led to harm in the form of misinformation and erosion of public trust, which is harm to communities. Therefore, this qualifies as an AI Incident because the AI system's use directly contributed to the harm. The event is not merely a potential hazard or complementary information, but a realized incident of AI-enabled disinformation causing harm.

Palace warns officials against sharing AI-generated deepfakes - Manila Standard

2025-06-16
Manila Standard
Why's our monitor labelling this an incident or hazard?
The event involves an AI system generating deepfake video content that is false and misleading. The sharing of this content by government officials contributes to the spread of misinformation, which harms communities by undermining public trust and potentially influencing political processes. This meets the criteria for an AI Incident because the AI-generated content has directly led to harm (misinformation and erosion of trust), and the event involves the use and misuse of an AI system's outputs. The discussion of possible legal consequences further supports the recognition of harm caused by the AI-generated content.

Palace hits Sen. Bato's AI-generated online post - Manila Standard

2025-06-16
Manila Standard
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated content (deepfake videos) that has been shared by government officials, leading to the spread of false information and disinformation. This constitutes harm to communities by undermining trust and spreading misinformation. The AI system's use in generating fabricated videos directly contributes to this harm. Therefore, this qualifies as an AI Incident due to the realized harm caused by the AI-generated disinformation.

Palace to Officials: Stop spreading AI-generated fake news

2025-06-16
Daily Tribune
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions AI-generated fake news being shared by elected officials, which has directly led to misinformation spreading among the public. This misinformation harms the community by eroding trust in government and political processes, fulfilling the harm criteria under (d) harm to communities. The AI system's role in generating the misleading video is pivotal to the incident. The event describes actual harm occurring, not just potential harm, so it is classified as an AI Incident rather than an AI Hazard or Complementary Information.

House slams Bato for spreading fake news

2025-06-18
Philstar.com
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system to generate disinformation that was actively disseminated by a public official, leading to harm by spreading false narratives and influencing public views. This fits the definition of an AI Incident because the AI-generated content directly caused harm to communities through misinformation and undermining political processes. The legislative calls and educational responses are complementary information but do not overshadow the primary incident of AI-generated disinformation causing harm.

Progressive, IT groups criticize a senator for spreading AI-generated 'fake news'

2025-06-18
Bulatlat
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system generating deepfake video content that is false and used to mislead the public on a political matter. The AI-generated content has been shared by public officials, spreading disinformation and causing harm to the community by undermining truthful political processes. The harm is realized and ongoing, not merely potential. Hence, it meets the criteria for an AI Incident due to the direct role of AI-generated content in causing harm through misinformation and political manipulation.

[Tech Thoughts] Realistic, but fake? It's getting harder to tell the difference!

2025-06-20
Rappler
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems generating fake videos that are being shared and amplified, causing misinformation and misleading the public. This disinformation harms communities by undermining trust and spreading falsehoods, which fits the definition of harm to communities under AI Incident. The AI system's use directly leads to this harm, fulfilling the criteria for an AI Incident rather than a hazard or complementary information.

Dela Rosa ridiculed over AI video on Sara Duterte impeachment

2025-06-16
Inquirer.net
Why's our monitor labelling this an incident or hazard?
While the video is AI-generated and the senator's sharing of it caused misinformation and public confusion, the event does not describe any realized harm such as injury, rights violations, or disruption of critical infrastructure. The incident mainly concerns misinformation and public misunderstanding, which does not meet the threshold for an AI Incident. It also does not present a plausible future harm scenario beyond the current misinformation. Therefore, this is best classified as Complementary Information about societal reactions and challenges in distinguishing AI-generated content.

Sara Duterte: Nothing wrong with sharing AI video opposing her impeachment

2025-06-16
Rappler
Why's our monitor labelling this an incident or hazard?
The article describes the use and sharing of an AI-generated video in a political context, which involves an AI system. However, there is no evidence of direct or indirect harm caused by the AI system's use, such as misinformation causing societal harm, violation of rights, or other significant harms. The event focuses on the political and social reactions to the AI-generated video rather than any realized or plausible harm. Thus, it fits best as Complementary Information, providing context on how AI-generated content is being used and perceived in political discourse without constituting an incident or hazard.

Dela Rosa draws flak for sharing AI video on Duterte impeachment

2025-06-16
CDN Digital
Why's our monitor labelling this an incident or hazard?
An AI system was used to generate a fake video that was shared by a public official, leading to public misunderstanding and misinformation. The AI-generated content directly contributed to harm by misleading the public, which fits the definition of harm to communities. The senator's failure to verify the content before sharing exacerbated the impact. Hence, this is an AI Incident rather than a hazard or complementary information.

Dela Rosa draws flak over AI video of students opposing Sara Duterte's impeachment

2025-06-16
Philstar.com
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system generating a deepfake video that has been widely disseminated and believed to be real by many viewers, causing misinformation and undermining public trust. The involvement of a government official in sharing the video without clear acknowledgment of its AI-generated nature amplifies the harm. The harms include misinformation, erosion of trust in public institutions, and potential political manipulation, which fall under harm to communities and violations of rights. Since the harm is realized and directly linked to the AI-generated content, this is classified as an AI Incident rather than a hazard or complementary information.

Sara Duterte sees 'no problem' with using AI to generate support for her

2025-06-16
Philstar.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems generating deepfake videos and manipulated content used in political campaigns, which have directly led to misinformation and manipulation of voter sentiment, a form of harm to communities. The sharing of AI-generated videos supporting political figures and the use of AI-generated anti-political content demonstrate the AI system's use in causing harm. The lack of AI governance exacerbates the issue. Since the harm is realized and linked directly to AI-generated content, this is classified as an AI Incident.

FACT CHECK | After Bato and Baste's AI shares: How to identify AI-generated content online

2025-06-16
MindaNews
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems generating fake videos and images that have been widely shared, causing misinformation. This constitutes harm to communities through misinformation, which is a recognized AI Incident. However, the article itself is a fact-check and guide to identifying AI-generated content, focusing on raising awareness and providing detection methods rather than reporting a new incident or hazard. Therefore, the main content is complementary information about AI-related misinformation and societal responses, not a new AI Incident or Hazard. The article's primary purpose is educational and supportive of understanding AI impacts, fitting the definition of Complementary Information.

Solon: Gov't officials must be 'source of truth,' not of fake AI videos

2025-06-17
Inquirer.net
Why's our monitor labelling this an incident or hazard?
The article describes public officials sharing AI-generated videos that misinform the public, which is a direct example of harm caused by the use of AI systems generating false content. This misinformation harms communities by distorting public discourse and undermining trust in public officials, fitting the definition of an AI Incident due to violations of rights and harm to communities. The involvement of AI in generating the misleading videos is explicit, and the harm is realized, not just potential.

VP: No problem in sharing AI content if 'no money involved'

2025-06-17
Sun.Star Network Online
Why's our monitor labelling this an incident or hazard?
An AI system was used to generate videos that were shared by political figures, spreading misleading information that influenced public perception and political debate. The involvement of AI in creating and disseminating false content that affects political processes and public trust constitutes a violation of rights and harm to communities. The article reports that this disinformation is occurring and causing societal harm, meeting the criteria for an AI Incident rather than a hazard or complementary information.

VP Sara sees no problems with AI content supporting her

2025-06-17
MindaNews
Why's our monitor labelling this an incident or hazard?
The article describes AI-generated videos that were shared by politicians as genuine, which misled the public and spread misinformation. This misinformation harms communities by undermining trust and distorting political discourse, fulfilling the harm criteria under (d) harm to communities. The AI system's use in generating and disseminating false content is central to the incident. Although the vice president sees no problem with AI content supporting her, the fact remains that the AI-generated videos are fake and have caused harm. Hence, this is an AI Incident rather than a hazard or complementary information.

AI-manufactured agenda by DDS

2025-06-19
Daily Tribune
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated videos and deepfakes used to spread false information about political figures, which is a clear example of AI system use causing harm. The harm includes misinformation, manipulation of public opinion, and exploitation of vulnerable populations, which are harms to communities and violations of rights. Since the AI-generated content is actively used and has caused these harms, this qualifies as an AI Incident rather than a hazard or complementary information. The article details realized harm rather than potential harm, and the AI system's role is pivotal in the dissemination of false narratives.

AI fakes duel over Sara Duterte impeachment in Philippines

2025-06-25
Economic Times
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly used to generate fake videos of non-existent people expressing political opinions. These AI-generated deepfakes have been widely viewed and have influenced public perception, thereby causing harm to communities by spreading misinformation and undermining trust in democratic processes. The harm is realized and ongoing, not merely potential, meeting the criteria for an AI Incident. The AI system's use directly led to the dissemination of misleading content that impacts societal trust and political discourse, which is a significant harm under the framework.

AI fakes duel over Sara Duterte impeachment in Philippines

2025-06-25
The Straits Times
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly used to generate fake videos that have gone viral and influenced political opinions. The harm is realized as the misinformation fosters distrust in political institutions and processes, which is a harm to communities and democratic rights. The AI system's role is pivotal as it enabled the creation of realistic fake personas and statements that would not be possible otherwise. Therefore, this qualifies as an AI Incident due to the direct harm caused by AI-generated misinformation affecting political discourse and public trust.

AI fakes duel over VP Duterte impeachment in Philippines

2025-06-25
The Peninsula
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems used to create fake videos of people making political statements, which have gone viral and influenced public opinion. The harm is realized as these AI-generated deepfakes mislead viewers, foster distrust in political institutions, and distort democratic discourse, which aligns with harm to communities. The AI system's use directly led to these harms, fulfilling the criteria for an AI Incident rather than a hazard or complementary information.

AI fakes duel over Sara Duterte impeachment in Philippines

2025-06-25
Digital Journal
Why's our monitor labelling this an incident or hazard?
The event involves AI systems generating realistic fake videos that have been widely disseminated and have influenced political opinions and public trust. The harm is realized as the misinformation undermines democratic discourse and fosters distrust towards lawmakers and the impeachment process. The AI system's role is pivotal in creating and spreading these fakes, directly causing harm to communities and political processes. Hence, this is an AI Incident rather than a hazard or complementary information.

AI fakes duel over Sara Duterte impeachment in Philippines

2025-06-25
The Manila Times
Why's our monitor labelling this an incident or hazard?
The article describes AI-generated videos that have been widely viewed and have influenced public opinion on a sensitive political issue. The AI system's use directly led to the dissemination of misinformation, which harms communities by undermining trust in political institutions and processes. The videos are not merely potential threats but have already caused social harm, meeting the criteria for an AI Incident. The involvement of AI in creating realistic fake content that misleads the public and affects democratic discourse is central to the harm described.

AI fakes duel over VP impeachment

2025-06-25
The Manila Times
Why's our monitor labelling this an incident or hazard?
The event clearly involves AI systems generating synthetic video content that influenced political discussions. However, there is no evidence of realized harm such as misinformation causing violence, rights violations, or other significant harms. The AI-generated content is openly acknowledged and discussed as political expression, with disclaimers and fact-checking identifying the videos as AI creations. The event highlights the evolving role of AI in political communication and disinformation challenges but does not document an incident or credible hazard of harm. Thus, it is best classified as Complementary Information, providing context and updates on AI's societal impact without constituting an AI Incident or Hazard.

AI fakes duel over Sara Duterte impeachment in Philippines

2025-06-25
RTL Today
Why's our monitor labelling this an incident or hazard?
The event describes AI-generated deepfake videos that have been widely viewed and used to influence political opinions and public trust in democratic processes. The AI system's involvement is explicit (Veo platform generating videos) and the harm is realized in the form of misinformation and erosion of trust in political institutions, which qualifies as harm to communities. This meets the criteria for an AI Incident because the AI system's use has directly led to significant harm to communities by distorting democratic discourse and fostering distrust.

AI videos stir VP impeachment debate in Phl

2025-06-25
Daily Tribune
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system to create synthetic videos that have been widely disseminated and have influenced political debate and public trust. The harm is realized in the form of misinformation and manipulation of public opinion, which constitutes harm to communities and a violation of democratic rights. The AI system's role is pivotal as the videos would not exist without AI generation, and the harm arises directly from their use and spread. Therefore, this qualifies as an AI Incident under the framework, specifically harm to communities through misinformation and political manipulation.