AI-Generated Deepfakes Cause Harm Across South Asia and Beyond


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

AI-powered deepfake technology has led to widespread harm, including misinformation, financial fraud, privacy violations, and targeted harassment—especially against women and public figures in India, Bangladesh, and Pakistan. Incidents include political manipulation, financial scams, and personal attacks, prompting governments to consider stricter regulations and legal measures.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly involves AI systems (deepfake technology using AI and machine learning) that have been used maliciously to cause financial fraud, which is a form of harm to property and individuals. The described events include real financial losses (e.g., a $35 million bank heist and a Rs 40,000 scam), directly caused by the use of AI deepfake systems. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to harm.[AI generated]
AI principles
Accountability; Fairness; Human wellbeing; Privacy & data governance; Respect of human rights; Robustness & digital security; Safety; Transparency & explainability; Democracy & human autonomy

Industries
Media, social platforms, and marketing; Digital security; Financial and insurance services; Government, security, and defence

Affected stakeholders
Women; General public; Consumers

Harm types
Economic/Property; Reputational; Psychological; Human or fundamental rights; Public interest

Severity
AI incident

AI system task
Content generation


Articles about this incident or hazard


Deepfakes In Financial Fraud: A Comprehensive Exploration - White Collar Crime, Anti-Corruption & Fraud - India

2023-12-11
Mondaq Business Briefing
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (deepfake technology using AI and machine learning) that have been used maliciously to cause financial fraud, which is a form of harm to property and individuals. The described events include real financial losses (e.g., a $35 million bank heist and a Rs 40,000 scam), directly caused by the use of AI deepfake systems. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to harm.

When Putin 'met' Putin: The real Russian president talks to the AI version of himself - Times of India

2023-12-15
The Times of India
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (deepfake technology) that generated a synthetic version of Putin. While the deepfake was used in a public setting and caused concern, there is no indication that any actual harm (such as injury, rights violations, or disruption) occurred as a result of this specific incident. The concerns raised relate to potential future harms from deepfakes; the article does not describe any realized harm or direct impact beyond surprise and unease. This event is therefore best classified as an AI Hazard, since deepfake technology of this kind could plausibly lead to harm through future misuse.

Deepfake images and videos target female leaders in Bangladesh and Pakistan

2023-12-14
OpIndia
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly through the creation and dissemination of deepfake content, which is AI-generated manipulated media. The harms described include direct injury to individuals (e.g., the honour killing linked to a doctored image), violations of rights (privacy, dignity), and harm to communities (harassment and intimidation of women and LGBTQ individuals). These harms have already occurred, making this an AI Incident rather than a hazard or complementary information. The governmental responses are mentioned but are secondary to the main narrative of realized harm.

7 Deepfake Controversies That Rocked 2023

2023-12-15
Analytics India Magazine
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses the use of AI systems (deep learning, generative neural networks such as GANs) to create synthetic media (deepfakes) that have been distributed and caused harm, including misinformation, privacy violations, and unauthorized use of likeness. These harms fall under violations of rights and harm to communities. The involvement of AI systems in the creation and dissemination of these deepfakes is clear and direct. Since the harms have already occurred and are ongoing, this qualifies as an AI Incident rather than a hazard or complementary information. The article also references legal and societal responses, but its main focus is on the incidents of harm caused by AI-generated deepfakes.

California Looks to Boost Deepfake Protections Before Elections

2023-12-15
news.bloomberglaw.com
Why's our monitor labelling this an incident or hazard?
The article describes the potential for AI-generated deepfakes to cause harm in elections and other areas, and legislative efforts to prevent these harms. It does not report any realized harm caused by AI systems; rather, it discusses plausible future risks and governance responses. It therefore fits the definition of an AI Hazard (plausible future harm), with elements of Complementary Information (policy and governance responses). Because the main focus is on the potential for harm and legislative proposals to address it, and no direct or indirect harm has yet been reported, AI Hazard is the most appropriate classification.

The Deepfake Dilemma: Navigating Truth And Deception In Today's Digital Era - New Technology - India

2023-12-14
Mondaq Business Briefing
Why's our monitor labelling this an incident or hazard?
The article explicitly describes AI systems (deepfake technology) being used to create false and harmful content that has already caused misinformation, defamation, and violations of personality rights, which are harms to individuals and communities. It cites specific examples of deepfake videos causing reputational damage and misinformation, as well as legal cases addressing these harms. The involvement of AI in generating these harmful deepfakes is clear, and the harms are realized, not just potential. Hence, this qualifies as an AI Incident under the framework, as the AI system's use has directly or indirectly led to violations of rights and harm to communities.

AI, like cryptography, holds immense power

2023-12-12
The Pioneer
Why's our monitor labelling this an incident or hazard?
The article clearly identifies realized harms caused by AI-generated deepfake videos, including misinformation, political destabilization, and personal and societal harm. These harms fall under violations of rights, harm to communities, and cyber security concerns, all directly linked to the use of AI systems that generate deepfakes. This therefore qualifies as an AI Incident. The article also includes complementary information about governance and societal responses, but its primary focus is on the harms already occurring due to AI deepfakes, making AI Incident the appropriate classification.