AI-Generated Deepfakes Cause Widespread Harm and Security Risks

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

AI-powered deepfakes are increasingly used for identity theft, fraud, misinformation, and non-consensual pornography, causing financial, reputational, and psychological harm. These incidents undermine trust in media, disrupt markets, and challenge legal and security systems, prompting urgent development of AI-based detection and regulatory responses.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly describes AI systems (deep learning algorithms generating deepfakes) being used to create false videos and audio that have caused real harm, including political misinformation, extortion, and reputational damage. These harms correspond to violations of rights and harm to communities. The article also references specific cases and societal impacts, confirming that harm has occurred, not just potential harm. Hence, this qualifies as an AI Incident rather than a hazard or complementary information.[AI generated]
AI principles
Accountability; Privacy & data governance; Respect of human rights; Robustness & digital security; Safety; Transparency & explainability; Democracy & human autonomy; Human wellbeing

Industries
Media, social platforms, and marketing; Financial and insurance services; Digital security; Government, security, and defence

Affected stakeholders
General public; Business; Government

Harm types
Economic/Property; Reputational; Psychological; Public interest; Human or fundamental rights

Severity
AI incident

AI system task
Content generation

Articles about this incident or hazard

Deepfakes Are Here, Can They Be Stopped?

2023-09-20
Forbes
Why's our monitor labelling this an incident or hazard?
The article explicitly describes AI systems (deep learning algorithms generating deepfakes) being used to create false videos and audio that have caused real harm, including political misinformation, extortion, and reputational damage. These harms correspond to violations of rights and harm to communities. The article also references specific cases and societal impacts, confirming that harm has occurred, not just potential harm. Hence, this qualifies as an AI Incident rather than a hazard or complementary information.
Deepfakes make banks keep it real

2023-09-21
Financial Times News
Why's our monitor labelling this an incident or hazard?
The article explicitly describes AI systems (generative AI models creating deepfakes) being used maliciously to commit identity theft and fraud, which are harms to individuals and businesses (harm to persons and communities). It also discusses the use of AI in detection and prevention, but the primary focus is on the realized harm caused by AI-generated deepfakes. The FBI's report of increased complaints confirms that harm has occurred. Hence, this is an AI Incident due to the direct involvement of AI in causing harm through fraud and identity theft.
Deepfake Pornography: Is Consent Over Your Image a Lost Cause? - Decrypt

2023-09-18
Decrypt
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly (deepfake AI generating fake videos/images) and their use has directly led to significant harms to individuals, including violations of consent, sexual violence, psychological harm, and intellectual property rights violations. These harms fall under categories (a) injury or harm to health, (c) violations of human rights and intellectual property rights, and (d) harm to communities. Since the harms are realized and directly linked to AI misuse, this qualifies as an AI Incident rather than a hazard or complementary information.
The Rise Of Deepfakes: From Virtual Reality To Misinformation - New Technology - UK

2023-09-18
Mondaq Business Briefing
Why's our monitor labelling this an incident or hazard?
The article clearly identifies deepfakes as AI systems (using deep learning and GANs) that have already caused harm by spreading misinformation, manipulating political opinion, and undermining trust in media and judicial processes. These harms fall under violations of rights, harm to communities, and disruption of investigations and disputes. The article also discusses the ongoing and future risks posed by deepfakes, but the presence of realized harms makes this primarily an AI Incident. The detailed examples of misinformation campaigns and legal challenges confirm direct or indirect harm caused by AI systems. Therefore, the event is best classified as an AI Incident.
How Faking Videos Got So Easy and Why It's a Threat

2023-09-21
matzav.com
Why's our monitor labelling this an incident or hazard?
The article explicitly describes AI systems (deep learning algorithms for video and voice synthesis) being used to create deepfake content that has been disseminated and caused harm, such as false videos of political figures, doctored audio affecting elections, and misleading images causing stock market reactions. These are direct examples of AI systems leading to violations of rights and harm to communities. The article also mentions ongoing regulatory and detection efforts, but the primary focus is on the harms already occurring due to AI misuse. Hence, the event is best classified as an AI Incident.
Deepfakes make banks keep it real - Financial Times

2023-09-21
Business Telegraph
Why's our monitor labelling this an incident or hazard?
The article clearly involves AI systems, specifically generative AI models used to create deepfakes and AI-based detection tools. The harms described include identity theft and fraud leading to financial and personal harm, which are direct harms to individuals and businesses. Since these harms are occurring and linked to the use and misuse of AI systems, this qualifies as an AI Incident. The article also discusses mitigation efforts but the primary focus is on the realized harms caused by AI-generated deepfakes and their impact on security and trust.
Deepfake Porn Is Out of Control

2023-10-17
freedomsphoenix.com
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses the use of AI systems (deepfake technology based on machine learning) to create nonconsensual pornographic videos, which directly harms the individuals depicted by violating their rights and causing personal and social harm. The scale and growth of this phenomenon indicate ongoing realized harm, not just potential harm. Therefore, this qualifies as an AI Incident due to violations of human rights and harm to communities caused by the AI system's use.
Tim Draper warns of crypto scams using his AI-synthesized voice

2023-10-19
Cointelegraph
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI voice generators to create deepfake voices for scams targeting crypto users, which constitutes direct harm (financial loss) to people. The AI system's use in this fraudulent activity directly leads to harm, fulfilling the criteria for an AI Incident. Therefore, this event is classified as an AI Incident due to realized harm caused by malicious use of AI-generated deepfake voices in scams.
The Internet Is Full of Deepfakes, and Most of Them Are Porn

2023-10-18
PC Mag Middle East
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses AI-generated deepfake videos and images used for pornographic purposes, which are created by AI systems capable of generating realistic fake content. The harms described include psychological trauma, sexual violation, and violations of rights, which are direct consequences of the AI system's outputs. The widespread distribution and use of these deepfakes constitute realized harm to individuals and communities, particularly women, thus meeting the definition of an AI Incident rather than a hazard or complementary information.
Deepfake Porn Is Out of Control

2023-10-16
Wired
Why's our monitor labelling this an incident or hazard?
The article details the use of AI-based deepfake technology to create nonconsensual pornographic videos, which directly harms individuals by violating their rights and privacy. The harm is materialized and widespread, with a large number of videos produced and distributed. The AI system's development and use have directly led to these harms, fulfilling the criteria for an AI Incident under the OECD framework. The event is not merely a potential risk or a complementary update but a clear case of AI-enabled harm occurring at scale.
What are Deepfakes, how to identify them, how are they created, and more questions answered | 91mobiles.com

2023-10-19
91mobiles.com
Why's our monitor labelling this an incident or hazard?
The article describes AI-generated deepfake content and its potential negative impacts, which aligns with the definition of AI systems and their possible harms. However, it does not describe a particular event where deepfakes have directly or indirectly caused harm, nor does it report a specific credible threat or near miss. Instead, it provides general information and guidance about deepfakes and detection tools, which fits the category of Complementary Information as it enhances understanding of AI-related risks without reporting a new incident or hazard.
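The rationales on this page all apply the same three-way rule: an event involving an AI system is labelled an AI Incident when harm has been realized, an AI Hazard when there is only a credible potential for harm, and Complementary Information otherwise (as with the 91mobiles.com explainer above). A minimal sketch of that rule, assuming hypothetical flag names (this is not the monitor's actual implementation):

```python
def classify_event(ai_involved: bool, harm_realized: bool, harm_potential: bool) -> str:
    """Hypothetical decision rule inferred from the monitor's rationales.

    - Realized harm linked to an AI system -> "AI Incident"
    - Only potential or threatened harm     -> "AI Hazard"
    - General information, no event         -> "Complementary Information"
    """
    if ai_involved and harm_realized:
        return "AI Incident"
    if ai_involved and harm_potential:
        return "AI Hazard"
    return "Complementary Information"

# The explainer above mentions deepfake risks in general but reports no
# specific harmful event or credible threat, so it is not an incident:
print(classify_event(ai_involved=True, harm_realized=False, harm_potential=False))
```

The other articles on this page set both `ai_involved` and `harm_realized`, which is why they are all classified as AI Incidents rather than hazards.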
These Israelis are fighting Hamas on the war's emerging 'deepfake' cyberfront

2023-10-18
The Times of Israel
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI deepfake technology to create false videos that mislead the public and influence political and social discourse during the Israel-Gaza war. This misinformation can cause harm to communities by distorting reality and affecting public opinion and decision-making. The involvement of AI in generating these deepfakes and their deployment in an active conflict setting meets the criteria for an AI Incident, as the harm is realized and directly linked to the AI system's use. The article also references ongoing efforts to counteract these harms, but the primary focus is on the existing misuse and its consequences, not just potential future risks or responses.
How people are using AI technology to watch pornography

2023-10-20
Pulse Nigeria
Why's our monitor labelling this an incident or hazard?
Deepfake technology is an AI system that generates realistic fake images or videos by manipulating facial features. The creation and distribution of non-consensual deepfake pornography directly violates individuals' rights to privacy and consent, which falls under violations of human rights and applicable laws protecting fundamental rights. The article highlights that this harm is occurring and widespread, thus qualifying as an AI Incident due to realized harm caused by the AI system's use.
DeepMedia AI Helps Governments Detect Deepfakes - Decrypt

2023-10-18
Decrypt
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (DeepIdentify.AI) designed to detect deepfakes, which are AI-generated manipulated videos. The use of this AI system is directly linked to addressing harms caused by malicious deepfakes, including misinformation, political manipulation, and threats to national security. Since the harms from deepfakes are occurring and the AI system is actively used to detect and mitigate these harms, this qualifies as an AI Incident. The article focuses on the AI system's role in detecting and countering ongoing harms rather than just potential future risks or general AI developments, so it is not merely complementary information or a hazard.
Deepfake videos are already taking over the internet

2023-10-16
ZME Science
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses AI deepfake systems being used to create fraudulent and deceptive content that has already caused harm, such as financial scams and misinformation. The harms are direct and realized, involving violations of trust, financial loss, and potential social disruption. The AI system's development and use are directly linked to these harms. Hence, the event meets the criteria for an AI Incident rather than a hazard or complementary information.
The Ubiquity of Sexuality and the Deepfake AI Challenge

2023-10-18
bbntimes.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions deepfake AI systems (using GANs and other machine learning techniques) being used to generate pornographic videos without consent, which is a direct violation of individuals' rights and can cause harm to persons and communities. This misuse of AI technology aligns with the definition of an AI Incident, as it has directly led to harm through the creation and dissemination of non-consensual deepfake pornography and related disinformation.