Deepfake Exploits Undermine Women's and Celebrity Image Rights


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Two incidents underscore the dangers of deepfake technology: Véronique Dahan warns that malicious deepfakes undermine women's image rights, and an unauthorized celebrity protest video has sparked legal and ethical debate over using AI to alter likenesses without consent. Both cases point to the urgent need for stricter regulation.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly mentions deepfake scams using AI technology to manipulate videos of public figures to promote fraudulent investment and permanent residency (PR) application schemes. These scams have already occurred and pose direct harm to victims through financial fraud and misinformation, fulfilling the criteria for harm to communities and individuals. The AI system's use in creating these deepfakes is central to the incident, as the manipulated content is the vehicle for the scam. Therefore, this event is classified as an AI Incident.[AI generated]
AI principles
Accountability; Fairness; Privacy & data governance; Respect of human rights; Robustness & digital security; Safety; Transparency & explainability

Industries
Media, social platforms, and marketing; Digital security; Arts, entertainment, and recreation

Affected stakeholders
Women; Other

Harm types
Reputational; Human or fundamental rights; Psychological

Severity
AI incident

AI system task
Content generation

Articles about this incident or hazard


PM Lawrence Wong warns of deepfake scams promoting investments and PR applications

2025-03-07
The Online Citizen

Weekly take: Why deepfakes pose a threat to women's IP rights

2025-03-06
MIP
Why's our monitor labelling this an incident or hazard?
The article clearly describes the use of AI systems (deepfake generation tools) that have directly led to harms including violations of personal image rights, reputational damage, and emotional harm to women. It references actual cases where courts have condemned platforms for hosting non-consensual deepfake content, and discusses legal penalties for distributing such content. The harms are realized and ongoing, not merely potential. Therefore, this qualifies as an AI Incident: the development and use of AI deepfake systems have directly caused significant harm to individuals and communities, particularly women. The article also covers governance and platform responses, but its primary focus is on the harms caused by AI deepfakes and the legal challenges in addressing them.

Deepfake Danger: Why AI Deception Isn't Just a Celebrity Problem

2025-03-06
thesource.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (deepfake technology) used to generate fabricated videos that misrepresent individuals, causing harm to their reputation and privacy, which falls under violations of rights and harm to communities. The harm is realized as the videos have been distributed and have sparked public criticism and concern. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to harm through misinformation and misuse of likenesses.

PM Wong warns of deepfakes of him promoting scam products, services

2025-03-07
CNA
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI techniques to create deepfake videos that falsely depict a government leader endorsing scam products. This manipulation has already occurred and is causing harm by misleading the public and potentially facilitating scams. The AI system's role is pivotal in generating the misleading content, which constitutes a violation of rights and harm to communities. Therefore, this qualifies as an AI Incident.

A New Interactive Blog Reveals How Deepfakes Are Transforming Digital Trust

2025-03-06
StreetInsider.com
Why's our monitor labelling this an incident or hazard?
The article focuses on raising awareness and providing educational content about deepfakes and their associated risks. It does not describe a specific event where an AI system directly or indirectly caused harm, nor does it report a new hazard event. The main purpose is to inform and support understanding of AI manipulation and digital trust, which fits the definition of Complementary Information as it enhances understanding of AI impacts and responses without reporting a new incident or hazard.

Safeguarding your business in an age of misinformation | Corporate Finance

2025-03-03
Rochester Business Journal
Why's our monitor labelling this an incident or hazard?
The article explicitly describes how AI systems (deepfake technology) have been used maliciously to cause harm to businesses, such as fraudulent transactions, reputational damage, and data breaches. These are direct harms resulting from the use of AI systems, fitting the definition of an AI Incident. The article also discusses mitigation and response strategies but the primary focus is on the harms already occurring due to AI misuse.

Attestiv Launches AI-Powered Context Analysis to Combat Video Deepfakes

2025-03-06
idtechwire.com
Why's our monitor labelling this an incident or hazard?
The article focuses on the introduction of an AI system that helps detect deepfakes, which are a known source of misinformation and potential harm. However, the event itself does not describe any realized harm or incident caused by the AI system. Instead, it presents a new AI tool aimed at mitigating risks associated with deepfakes. Therefore, this is not an AI Incident or AI Hazard but rather Complementary Information about a development in AI technology and its role in addressing AI-related risks.