AI-Generated Deepfakes Cause Widespread Harm and Legal Challenges

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

AI systems, including xAI's Grok, have enabled the mass creation and dissemination of sexualized, nonconsensual deepfake images, causing reputational, emotional, and psychological harm, especially to minors. Social media platforms have stepped up takedown efforts, but the rapid spread of deepfakes continues to pose significant societal and legal challenges globally.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly links the rise of AI-generated deepfake content to societal harm, including misinformation and potential damage to individuals and public discourse. The AI system's use in generating deepfakes has directly led to these harms, fulfilling the criteria for an AI Incident. The platforms' increased takedown efforts are responses to an ongoing incident rather than the main focus, so the article is not primarily about complementary information. The harm is realized and ongoing, not merely potential, so it is not an AI Hazard. Hence, the classification is AI Incident.[AI generated]
AI principles
Privacy & data governance; Respect of human rights

Industries
Media, social platforms, and marketing

Affected stakeholders
Children

Harm types
Reputational; Psychological; Human or fundamental rights

Severity
AI incident

AI system task
Content generation


Articles about this incident or hazard

Platforms have almost doubled-tripled deepfake takedowns: Vaishnaw

2026-03-30
Business Standard
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated deepfakes, which are produced by AI systems. However, it does not describe any realized harm or specific incident resulting from these deepfakes. The main focus is on the increased takedown efforts by platforms and the recognition of deepfakes as a societal threat. This fits the definition of Complementary Information, as it provides context and updates on responses to a known AI-related risk without detailing a new AI Incident or AI Hazard.
Deepfake Surge Prompts Platforms to Step Up Content Removal Efforts

2026-03-30
The Hans India
Why's our monitor labelling this an incident or hazard?
The article explicitly links the rise of AI-generated deepfake content to societal harm, including misinformation and potential damage to individuals and public discourse. The AI system's use in generating deepfakes has directly led to these harms, fulfilling the criteria for an AI Incident. The platforms' increased takedown efforts are responses to an ongoing incident rather than the main focus, so the article is not primarily about complementary information. The harm is realized and ongoing, not merely potential, so it is not an AI Hazard. Hence, the classification is AI Incident.
Explicit AI image creation increasingly a legal issue amid crackdown on deepfakes

2026-03-30
Las Vegas Sun
Why's our monitor labelling this an incident or hazard?
The article explicitly reports on AI systems being used to generate nonconsensual sexual deepfake images of minors and others, which has directly led to harm including psychological trauma and violations of rights. The involvement of AI in creating these images is clear and central to the harms described. The article also references legal actions and legislation responding to these harms, confirming the recognized severity and reality of the incidents. Since the harms are realized and directly linked to AI system use, this is an AI Incident rather than a hazard or complementary information.
Kodak to Deepfakes: Publicity Rights and Abuse of Our Likenesses

2026-03-27
Default
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions Grok, an AI generative tool, whose use has directly led to the creation and dissemination of harmful deepfakes causing dignitary harm, reputational damage, and emotional distress to individuals. These harms fall under violations of human rights and harm to communities. The article also references legislative responses and legal considerations but focuses on the realized harms caused by the AI system's misuse. Hence, this qualifies as an AI Incident because the AI system's use has directly led to significant harm.
Platforms have almost doubled-tripled deepfake takedowns: Vaishnaw

2026-03-30
NewsDrum
Why's our monitor labelling this an incident or hazard?
The article discusses the increased volume of AI-generated deepfake content and the platforms' escalated takedown efforts, indicating awareness of a plausible risk to society. However, it does not describe any realized harm or specific incident where AI caused injury, rights violations, or other harms. Therefore, it fits the definition of Complementary Information as it provides context and updates on societal and platform responses to AI-related challenges without reporting a concrete AI Incident or AI Hazard.