AI Deepfakes Fuel Corporate Fraud and Political Misinformation


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

In separate incidents, AI-powered deepfake technology enabled a $25 million business email compromise (BEC) fraud by impersonating corporate executives, while threat actors, including suspected Iranian hackers, used AI-generated videos and spear-phishing during the 2024 election to spread misinformation and steal campaign data.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article focuses on a proposed law to regulate AI use to prevent the spread of harmful deepfake content ahead of elections. While no AI incident (actual harm) is reported, the bill acknowledges the credible risk that AI-generated deepfakes could cause significant harm to individuals and the electoral process. Therefore, this is an AI Hazard, as it concerns plausible future harm from AI misuse.[AI generated]
AI principles
Privacy & data governance; Robustness & digital security; Transparency & explainability; Democracy & human autonomy; Respect of human rights; Accountability; Safety

Industries
Financial and insurance services; Business processes and support services; Digital security; Media, social platforms, and marketing; Government, security, and defence

Affected stakeholders
Business; General public

Harm types
Economic/Property; Reputational; Public interest; Human or fundamental rights

Severity
AI hazard

Business function:
ICT management and information security; Compliance and justice

AI system task:
Content generation


Articles about this incident or hazard


Philippines: New law to regulate use of AI ahead of elections

2024-08-12
Adnkronos

Council Post: A Misguided AI Arms Race: By The Time You Detect A Deepfake, It's Already Too Late

2024-08-09
Forbes
Why's our monitor labelling this an incident or hazard?
The article explicitly describes the use of AI systems (generative adversarial networks and other generative AI) to create deepfakes that have directly led to significant harms such as financial fraud (the $25 million transfer incident), identity theft, and potential election disruption. These harms fall under violations of rights and harm to communities, and the involvement of AI is central and pivotal to them. The article also discusses ongoing impacts and responses, but its primary focus is on realized harms caused by AI misuse, qualifying it as an AI Incident rather than a Hazard or Complementary Information.

U.S. Senate Passes Anti-Deepfake Law Targeting Non-Consensual Pornography - Decrypt

2024-08-13
Decrypt
Why's our monitor labelling this an incident or hazard?
The article focuses on the legislative action (the DEFIANCE Act) to address harms caused by AI-generated deepfake pornography, which is a societal and governance response to an existing AI-related harm. While it discusses the broader risks and harms of deepfakes, including political and financial harms, the main event is the passing of a law to provide legal remedies for victims. This fits the definition of Complementary Information, as it provides context and a governance response to AI harms rather than describing a new AI Incident or AI Hazard.

Navigating the World of Deepfake Technology

2024-08-09
IGI Global
Why's our monitor labelling this an incident or hazard?
The content focuses on explaining the nature of deepfakes and their potential harms, along with strategies for identification and mitigation. There is no mention of a concrete AI Incident or AI Hazard event, nor any realized or imminent harm caused by AI systems. Therefore, this is best classified as Complementary Information, as it supports understanding and response to AI-related risks without reporting a new incident or hazard.

Bill filed to regulate artificial intelligence use in 2025 polls

2024-08-11
Philstar.com
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (deepfakes and generative AI) and their potential misuse to cause harm (misinformation and electoral interference). Since the article discusses the plausible future harm from AI misuse in elections and legislative efforts to mitigate these risks, but does not describe an actual incident of harm occurring, it fits the definition of an AI Hazard. The focus is on the credible risk posed by AI-generated deepfakes to the electoral process, justifying classification as an AI Hazard rather than an Incident or Complementary Information.

Deepfakes: the next frontier in digital deception

2024-08-13
BetaNews
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (deepfake technology) being used maliciously to deceive, directly leading to a significant financial loss ($25 million). This constitutes harm to property and financial interests, fitting the definition of an AI Incident. The article also discusses broader implications and responses, but its primary focus is a concrete example of realized harm caused by AI misuse.

Iran hacking Trump? AI deepfakes? Cyber side of 2024 election heats up.

2024-08-13
The Christian Science Monitor
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI deepfakes being used to create false political videos that have been widely disseminated, causing reputational harm and societal divisiveness, which qualifies as harm to communities. It also details cyberattacks by Iranian hackers targeting political campaigns, involving spear-phishing and AI-generated content, which have led to data theft and investigations, constituting harm through violations of political rights and security. These harms are realized and ongoing, not merely potential. The article also discusses responses to these harms, but its main focus is the incidents themselves; hence, the event is best classified as an AI Incident.

Act now on proposed AI regulation bill, Villafuerte urges lawmakers - Manila Standard

2024-08-12
Manila Standard
Why's our monitor labelling this an incident or hazard?
The article centers on the potential dangers of AI-generated deepfakes and manipulated media in the context of elections, which could plausibly lead to harm such as misinformation, reputational damage, and disruption of democratic processes. However, it does not describe any actual harm or incident that has already occurred. Therefore, it fits the definition of an AI Hazard, as it concerns credible risks that could plausibly lead to an AI Incident if unaddressed. The legislative proposal and warnings serve as context for this potential harm rather than reporting a realized incident or complementary information about responses to a past event.

Solon proposes law regulating AI - Manila Standard

2024-08-12
Manila Standard
Why's our monitor labelling this an incident or hazard?
The article centers on the potential misuse of AI systems (deepfakes and generative AI) to disrupt elections, which could plausibly lead to harm such as misinformation and political manipulation. However, it does not describe any realized harm or incident where AI has directly or indirectly caused harm. Instead, it reports on a proposed law and warnings from officials about future risks. Therefore, this qualifies as an AI Hazard because it concerns plausible future harm from AI misuse, not an AI Incident or Complementary Information.