Philippine government probes deepfake video targeting President Marcos

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

The Philippine National Police formed a task force, government agencies deployed AI detection tools, and officials threatened legal action after a deepfake video that appeared to show President Ferdinand Marcos Jr. using drugs surfaced online. Traced to a Maisug rally in Los Angeles, the fabricated footage prompted condemnations and investigations by the DILG, NSC, DND, and DOJ.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event clearly involves AI systems used to generate a deepfake video, which is a form of AI-generated manipulated content. The video has been circulated and caused political and social disruption, which constitutes harm to communities. The involvement of AI in creating the manipulated video is direct and pivotal to the harm caused. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use has directly led to harm through misinformation and destabilization attempts.[AI generated]
AI principles
Accountability
Transparency & explainability
Robustness & digital security
Safety
Privacy & data governance
Respect of human rights
Democracy & human autonomy

Industries
Government, security, and defence
Media, social platforms, and marketing
Digital security

Affected stakeholders
Government
General public

Harm types
Reputational
Public interest
Human or fundamental rights

Severity
AI incident

AI system task
Content generation

Articles about this incident or hazard

Face-swapped? Deepfake detector flags alleged Marcos video as 'suspicious'

2024-07-23
Rappler
NSC, DICT, DND, DOJ slam latest 'deepfake' PBBM video

2024-07-22
BusinessMirror
Why's our monitor labelling this an incident or hazard?
The article explicitly refers to the video as a 'deepfake,' a known AI technology used to fabricate realistic but false videos. The harm is realized as the video is being circulated with the intent to discredit the President and destabilize political institutions, which fits the definition of harm to communities and violation of rights. The AI system's use in generating the video is central to the incident, making it an AI Incident rather than a hazard or complementary information. The legal and governmental responses further confirm the recognition of harm caused by the AI-generated content.
Teodoro pledges to stop any attempt to destabilize Marcos government

2024-07-22
Manila Standard
Why's our monitor labelling this an incident or hazard?
Deepfake videos are generated using AI systems that manipulate images and videos to create realistic but false content. The article describes the circulation of such videos targeting the President, which is a direct use of AI-generated content causing harm to the political environment and potentially violating rights related to reputation and information integrity. The harm is realized as the videos have been disseminated, leading to destabilization attempts. Therefore, this event meets the criteria for an AI Incident.
PNP creates task force to probe fake PBBM video

2024-07-22
Daily Tribune
Why's our monitor labelling this an incident or hazard?
The event involves a deepfake video, which is a product of AI-generated content manipulation. The video is deliberately fabricated and disseminated, causing harm by spreading misinformation, undermining public trust, and threatening political stability. These harms fall under harm to communities and the State's interests, fulfilling the criteria for an AI Incident. The article reports that the video is already circulating and causing harm, not just a potential risk, so it is not merely a hazard or complementary information. Hence, the classification is AI Incident.