Deepfake AI Videos Raise National Security and Reputational Risks

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Malaysian cybersecurity expert Prof Dr Selvakumar Manickam warns that deepfake AI videos featuring political figures and sensitive issues pose significant national security risks, for example by simulating attacks or false-flag operations. The rise in deepfake manipulation, which also targets celebrities, threatens to destabilize nations and damage individual reputations across social media.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves the use of AI systems to create deepfake videos that have been disseminated and are influencing public perception and political stability, which is a form of harm to communities and national security. The harm is occurring as the videos have already spread and affected public trust and political discourse. Therefore, this is an AI Incident due to the direct involvement of AI in causing harm through misinformation and potential political destabilization.[AI generated]
AI principles
Accountability
Robustness & digital security
Safety
Transparency & explainability
Democracy & human autonomy
Respect of human rights
Privacy & data governance

Industries
Media, social platforms, and marketing
Government, security, and defence
Digital security

Affected stakeholders
Government
General public

Harm types
Reputational
Public interest
Human or fundamental rights

Severity
AI incident

AI system task
Content generation


Articles about this incident or hazard

Deepfake AI videos of politicians addressing sensitive issues threaten national security, says expert

2025-03-13
The Star
Deepfake AI videos of politicians pose a national security threat

2025-03-13
thesun.my
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI deepfake videos involving political leaders that have been produced and spread on social media, which is a direct use of AI systems generating harmful content. The harms include threats to national security, political stability, and public trust, which fall under harm to communities and violation of political rights. The spread of these videos is ongoing and has already occurred, indicating realized harm rather than just potential. The involvement of AI in creating realistic fake videos that manipulate public opinion and could trigger conflict meets the criteria for an AI Incident. The article does not merely warn about potential harm but reports on actual dissemination and impact, confirming the classification as an AI Incident.
Research reveals 'major vulnerabilities' in deepfake detectors

2025-03-12
techxplore.com
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (deepfake detectors) and their performance in detecting AI-generated synthetic media (deepfakes). While the study identifies critical flaws that could lead to misinformation and related harms, the article does not describe any actual harm occurring from these vulnerabilities, only the potential risk. Therefore, this constitutes an AI Hazard, as the flawed detectors could plausibly lead to harms such as misinformation, fraud, and privacy violations if not improved. The article focuses on the research findings and recommendations rather than reporting a realized incident or harm, so it is not an AI Incident or Complementary Information.
Sony Removes 75,000 Deepfake Items, Highlighting a Growing Problem

2025-03-11
Security Boulevard
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated deepfake content causing direct commercial harm to artists (intellectual property rights violation), fraud losses to businesses, and phishing scams using AI-generated videos. These are realized harms directly linked to the use of AI systems generating deepfakes. The removal of content by Sony and reports of scams confirm that harms have materialized, qualifying this as an AI Incident. The article also discusses broader societal and security concerns, but the presence of actual harms takes precedence over potential future harms.
The Celebrities Most At-Risk for Deepfake. How to Protect Yourself from Internet Scammers

2025-03-10
Yahoo
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems to generate deepfake content that has directly led to harm in the form of privacy violations, reputational damage, and potential emotional distress to the individuals targeted. The deepfakes are AI-generated manipulated media, which fits the definition of an AI system's use causing harm. The harms include violations of personal rights and harm to communities (public figures and their audiences). Therefore, this qualifies as an AI Incident.
Deepfake Scams Are Stealing Millions -- How To Spot One

2025-03-09
Forbes
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (deepfake technology) used maliciously to create realistic fake videos and voices that directly caused financial harm through fraud. The harm is realized and significant, meeting the criteria for an AI Incident. The article details the use of AI-generated content to deceive and cause monetary loss, which is a clear harm to individuals and organizations. Therefore, this is classified as an AI Incident rather than a hazard or complementary information.
Slotozilla Analyzes the Growing Issue of Deepfake Fraud

2025-03-12
itnewsonline.com
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (deepfake technology) whose use has directly led to harms including fraud, identity crime, political disinformation, and erosion of trust, which fall under harms to communities and violations of rights. The article documents ongoing incidents of harm caused by AI-generated deepfakes, not just potential risks. Therefore, this qualifies as an AI Incident due to the realized and widespread harms linked to AI system use.
Research Exposes Key Flaws in Deepfake Detectors

2025-03-12
Mirage News
Why's our monitor labelling this an incident or hazard?
The article centers on research findings about the limitations of AI deepfake detectors and the challenges in reliably identifying AI-generated synthetic media. While deepfakes themselves can cause harm (misinformation, fraud, privacy violations), this article does not describe a specific incident where harm has occurred due to AI systems, nor does it report a near miss or imminent threat. Instead, it provides complementary information about the current state of AI detection tools and the need for advancements to mitigate future risks. Therefore, it fits the definition of Complementary Information, as it enhances understanding of AI ecosystem challenges and responses without reporting a new AI Incident or AI Hazard.
Spreading AI-generated content could lead to expensive fines

2025-03-12
Popular Science
Why's our monitor labelling this an incident or hazard?
The article clearly identifies AI systems generating harmful deepfake content that has caused trauma and misinformation, which qualifies as AI Incidents due to harm to individuals and communities. However, the article's primary focus is on the legislative and regulatory responses to these harms, including new laws and fines being proposed or enacted. It does not describe a new specific AI Incident or Hazard event itself but rather the societal and governance measures addressing ongoing issues. Hence, it fits the definition of Complementary Information, as it enhances understanding of the AI ecosystem and responses without reporting a new primary harm event.
Generative AI and deepfakes are fuelling health misinformation. Here's what to look out for so you don't get scammed

2025-03-13
The Conversation
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly through generative AI and deepfake technology used to create manipulated content that misleads people about health products, causing direct harm to individuals' health and finances. This fits the definition of an AI Incident because the AI system's use has directly led to harm (health misinformation and scams). The article also includes complementary information about responses and recommendations, but the primary focus is on the realized harm caused by AI-generated deepfakes in health scams.
AI Deepfakes Fuel Health Misinformation: How to Spot

2025-03-13
Mirage News
Why's our monitor labelling this an incident or hazard?
The event involves the use of generative AI systems to create deepfake videos and audio that mislead people about health products, directly causing harm to individuals' health and financial well-being, as well as harm to communities through misinformation. This fits the definition of an AI Incident because the AI system's use has directly led to harm (a) injury or harm to health, and (d) harm to communities. The article describes realized harm through scams and misinformation, not just potential harm. Therefore, it is classified as an AI Incident.
Deepfake AI Videos Of Politicians Addressing Sensitive Issues Threaten National Security - Expert

2025-03-13
BERNAMA
Why's our monitor labelling this an incident or hazard?
The event involves AI deepfake technology (an AI system) used to create realistic fake videos of political leaders. Although no direct harm has been reported in Malaysia yet, the article highlights the plausible risk that such deepfakes could destabilize politics and manipulate public opinion, which is a significant harm to communities. The article also references prior incidents where AI deepfakes caused real harm, reinforcing the credible risk. Therefore, this event is best classified as an AI Hazard, as the harm is plausible but not yet realized in this specific case.
Ban on deepfake pornography being considered by state lawmakers

2025-03-13
Huron County View
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (deepfake technology) that can cause harm by creating non-consensual sexual content, which is a violation of rights and causes harm to individuals. However, the article does not report a specific AI Incident where harm has already occurred; instead, it discusses the potential harms and legislative efforts to address them. Therefore, this is best classified as Complementary Information, as it provides context and governance response to AI-related harms without describing a concrete incident or imminent hazard.
The deepfake crisis: Why existing detection systems are failing and what needs to change | Technology

2025-03-13
Devdiscourse
Why's our monitor labelling this an incident or hazard?
The article focuses on the evaluation of existing AI deepfake detectors and their shortcomings, which is informative and contextual but does not describe a concrete AI Incident or AI Hazard. There is no mention of a specific harmful event caused by AI deepfakes or detection failures, nor a direct or plausible imminent threat from AI systems. The content is best classified as Complementary Information because it enhances understanding of the AI ecosystem related to deepfake detection and cybersecurity without reporting a new incident or hazard.
Celebs affected by deepfake pranks: Who should be held accountable?

2025-03-10
Digital Journal
Why's our monitor labelling this an incident or hazard?
The event involves AI systems used to create deepfake content that directly harms individuals by violating privacy and causing reputational damage, fulfilling the criteria for an AI Incident. The article documents realized harms (non-consensual deepfake videos) and their societal impact. Although it also discusses the need for regulation and detection tools, the primary focus is on the existing harm caused by AI-generated deepfakes, not just potential future harm or responses, so it is not merely Complementary Information or an AI Hazard.