Dramatic Surge in Deepfake AI Harms in 2025


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

In 2025, AI-generated deepfakes (realistic synthetic faces, voices, and performances) became highly convincing and widespread, and their use in misinformation, harassment, and financial scams surged. The volume of deepfakes grew nearly 900%, making it increasingly difficult for ordinary people and institutions to distinguish real media from fake and causing significant societal harm.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly discusses AI systems (deepfake generation models, voice cloning, large language models) that have been used to create synthetic media causing real harm such as financial scams and misinformation campaigns. These harms fall under harm to communities and individuals. The AI systems' development and use have directly led to these harms, qualifying this as an AI Incident. Although it also discusses future risks, the presence of realized harm takes precedence, making this classification an AI Incident rather than a hazard or complementary information.[AI generated]
AI principles
Privacy & data governance; Respect of human rights; Transparency & explainability; Robustness & digital security; Accountability; Democracy & human autonomy; Human wellbeing

Industries
Media, social platforms, and marketing; Digital security; Financial and insurance services; Government, security, and defence

Affected stakeholders
Consumers; General public

Harm types
Economic/Property; Psychological; Reputational; Public interest; Human or fundamental rights

Severity
AI incident

AI system task:
Content generation


Articles about this incident or hazard


Deepfakes Leveled up in 2025 - Here's What's Coming Next

2025-12-26
Gizmodo
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses AI systems (deepfake generation models, voice cloning, large language models) that have been used to create synthetic media causing real harm such as financial scams and misinformation campaigns. These harms fall under harm to communities and individuals. The AI systems' development and use have directly led to these harms, qualifying this as an AI Incident. Although it also discusses future risks, the presence of realized harm takes precedence, making this classification an AI Incident rather than a hazard or complementary information.

Deepfakes leveled up in 2025 - here's what's coming next

2025-12-26
The Conversation
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems (video generation models, voice cloning, large language models) that have directly led to harms such as misinformation, harassment, and financial scams. These harms affect communities and individuals, fulfilling the criteria for an AI Incident. The discussion of future risks further supports the severity but does not negate the current realized harms. Hence, the classification as AI Incident is appropriate.

Deepfakes leveled up in 2025 - here's what's coming next

2025-12-26
UPI
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems (deepfake generation models, voice cloning, large language models) that have been used to create synthetic media causing real-world harm, including misinformation, harassment, and financial scams. These harms fall under harm to communities and individuals. The involvement of AI in both the development and use stages is clear, and the harms are ongoing and significant. Although it also discusses future risks, the presence of actual harm makes this an AI Incident rather than a hazard or complementary information.

Deepfakes Leveled Up In 2025, What's Coming Next Is Even More Frightening

2025-12-26
Study Finds
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems that generate synthetic media (deepfakes) and voices, which have already caused real-world harms including misinformation, harassment, and financial scams. These harms fall under harm to communities and harm to persons. The AI systems' use has directly led to these harms, qualifying this as an AI Incident. Although it also discusses future risks and defenses, the presence of realized harm from AI-generated deepfakes makes this classification an AI Incident rather than a hazard or complementary information.

Deepfakes leveled up in 2025 - here's what's coming next

2025-12-26
Tolerance
Why's our monitor labelling this an incident or hazard?
Deepfakes are AI-generated synthetic media created by AI systems that mimic real people. The article states that these deepfakes have become highly realistic and are increasingly used to deceive people, which constitutes harm to communities by spreading misinformation and potentially causing social disruption. Since the harm is occurring due to the use of AI systems, this qualifies as an AI Incident under the framework, specifically harm to communities (d).

2026 will be the year you get fooled by a deepfake, researcher says. Voice cloning has crossed the 'indistinguishable threshold'

2025-12-27
Fortune
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (deepfake generation models and voice cloning AI) whose use has directly led to harms including misinformation, harassment, and financial scams, which affect communities and individuals. These harms are occurring now, not just potential future risks. Therefore, this qualifies as an AI Incident. The article also discusses future risks and mitigation strategies, but the presence of realized harm takes precedence in classification.

2026 will be the year you get fooled by a deepfake, researcher says. Voice cloning has crossed the 'indistinguishable threshold'

2025-12-27
DNyuz
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly (deepfake generation models, voice cloning, large language models) whose use has directly led to harms including misinformation, harassment, and financial scams. These harms fall under harm to communities and harm to persons. Since the harms are already realized and ongoing, this qualifies as an AI Incident rather than a hazard or complementary information. The article provides detailed evidence of actual harm caused by AI-generated synthetic media.

Deepfakes, AI leveled up in 2025 - here's what's coming next

2025-12-28
WGN-TV
Why's our monitor labelling this an incident or hazard?
The article explicitly describes AI systems (deepfake generation models, voice cloning, large language models) that have been used to produce synthetic media causing real-world harms such as misinformation, targeted harassment, and financial scams. These harms fall under harm to communities and harm to persons. The involvement of AI is clear and central, and the harms are occurring, not just potential. Hence, this is an AI Incident rather than a hazard or complementary information.

Synthetic media | Deepfakes leveled up in 2025: here's what's next

2025-12-29
dtnext.in
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (generative models creating deepfakes) whose use has directly led to harm by deceiving people and institutions, fulfilling the criteria for an AI Incident. The harms include misinformation and deception affecting communities, which is a recognized form of harm under the framework. The article describes realized harm (deception occurring now), not just potential harm, so it is not an AI Hazard. It is not merely complementary information because the main focus is on the harm caused by the AI system's outputs, not on responses or updates. It is not unrelated because the event clearly involves AI systems and their harmful impact.

The Conversation | Siwei Lyu | Deepfakes leveled up in 2025 - here's what's coming next

2025-12-29
CNHI News
Why's our monitor labelling this an incident or hazard?
The article explicitly states that AI-generated deepfakes have already caused real-world harm including misinformation, targeted harassment, and financial scams, which are harms to communities and individuals. The AI systems involved are deepfake generation models and voice cloning AI, which have been used maliciously to deceive and scam people. The harms are direct and ongoing, not merely potential. Hence, the event meets the criteria for an AI Incident rather than a hazard or complementary information. The detailed description of the harms and AI's pivotal role in causing them supports this classification.

Easy AI mistake most Aussies falling for

2026-01-12
News.com.au
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI-generated deepfakes in scams that are actively targeting Australians, causing harm by deceiving people and potentially leading to financial or emotional damage. This is a direct harm caused by the use of AI systems in malicious activities. Therefore, this qualifies as an AI Incident because the development and use of AI systems have directly led to harm through sophisticated scams exploiting human trust and psychology.

Easy AI mistake most Aussies falling for

2026-01-12
The West Australian
Why's our monitor labelling this an incident or hazard?
The article clearly involves AI systems, specifically generative AI used to create deepfake images and audio for scams. The harms discussed are financial scams and deception, which fall under harm to communities and individuals. Since the article emphasizes the potential for these AI deepfake scams to cause harm and the public's overconfidence increasing vulnerability, but does not report a concrete realized harm event, it fits the definition of an AI Hazard. It plausibly leads to AI Incidents (financial harm from scams) but does not document a specific incident. Therefore, the classification is AI Hazard.

Australians' Skill at Detecting AI Deepfake Scams

2026-01-12
Mirage News
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems generating deepfake content used in scams that have caused realized harm to individuals and businesses, such as financial losses and deception. The harms fall under harm to communities and individuals. The AI system's use in creating convincing fake content is central to the scams and their success. The article reports on actual incidents of these scams occurring, not just potential risks. Hence, this qualifies as an AI Incident rather than a hazard or complementary information.

Easy AI mistake most Aussies falling for

2026-01-13
Yahoo Finance
Why's our monitor labelling this an incident or hazard?
The article involves AI systems (generative AI creating deepfakes) and discusses their use by scammers, which could plausibly lead to harm such as financial loss or psychological harm to individuals targeted by scams. However, it does not describe a concrete incident of harm that has already occurred due to AI misuse or malfunction. The focus is on the risk and potential for harm, supported by research data on people's overconfidence in spotting deepfakes. Therefore, this event fits the definition of an AI Hazard, as it plausibly could lead to an AI Incident but does not report an actual incident yet.

CommBank research says Australians are overconfident in identifying AI deepfake scams

2026-01-14
cyberdaily.au
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated deepfake scams that have been witnessed by a significant portion of the population, indicating realized harm to individuals and communities through scams and deception. The AI system's use in generating realistic fake videos, voices, and texts is central to the harm described. This fits the definition of an AI Incident because the AI system's use has directly led to harm to people (financial loss, emotional harm) and communities (trust erosion). The article does not merely warn about potential harm but documents ongoing harm, so it is not an AI Hazard. It is not merely complementary information because the main focus is on the occurrence and impact of AI-enabled scams, not on responses or broader ecosystem context. It is not unrelated because AI deepfake technology is central to the event.