AI Deepfakes Fuel Disinformation and Fraud


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

AI-generated deepfakes have been used to spread disinformation during the US election, targeting figures like Kamala Harris, and to perpetrate financial fraud by impersonating Elon Musk. These incidents highlight the potential of AI to deceive and manipulate, leading to significant political and financial harm.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article describes real-world use of AI deepfake systems to impersonate Elon Musk in fraudulent investment pitches, resulting in direct financial losses to victims. This constitutes harm caused by an AI system’s malicious use (fraud), fitting the definition of an AI Incident.[AI generated]
AI principles
Transparency & explainability; Accountability; Robustness & digital security; Safety; Democracy & human autonomy; Privacy & data governance; Respect of human rights

Industries
Media, social platforms, and marketing; Financial and insurance services; Digital security; Government, security, and defence

Affected stakeholders
Consumers; General public; Government

Harm types
Public interest; Economic/Property; Reputational; Human or fundamental rights; Psychological

Severity
AI incident

AI system task
Content generation


Articles about this incident or hazard


Deepfakes of Elon Musk contribute to billions in fraud losses

2024-11-20
AOL
Why's our monitor labelling this an incident or hazard?
The article describes real-world use of AI deepfake systems to impersonate Elon Musk in fraudulent investment pitches, resulting in direct financial losses to victims. This constitutes harm caused by an AI system’s malicious use (fraud), fitting the definition of an AI Incident.

Disinformation and deepfakes played a part in the US election. Australia should expect the same

2024-11-20
The Conversation
Why's our monitor labelling this an incident or hazard?
The article’s primary focus is to analyze and warn about the threat of AI-generated deepfakes and disinformation in elections based on past events, not to report a new incident or immediate hazard. It offers contextual information about known harms, detection challenges, and potential future impacts, fitting the definition of Complementary Information.

Disinformation and deepfakes played a part in the US election. Australia should expect the same

2024-11-21
Phys.org
Why's our monitor labelling this an incident or hazard?
While the article references real past deepfake incidents (e.g., US election disinformation, deepfakes made by Australian politicians), its main purpose is to warn about the potential misuse of AI-generated deepfakes in upcoming Australian elections. It does not describe a novel incident or provide new follow-up on remediation, but rather outlines a credible future threat, fitting the definition of an AI Hazard.

Why are 'deepfake porn' tutorials still showing up in search engines?

2024-11-22
Glamour UK
Why's our monitor labelling this an incident or hazard?
The article describes real, ongoing harm: AI-powered search engines (Google, Bing, Yahoo) are funneling users to explicit deepfake porn tutorials and tools, enabling non-consensual sexual content creation. This involves AI systems in their use phase producing harmful outcomes (violation of privacy, non-consensual pornography), constituting an AI Incident.

Deepfakes of Elon Musk contribute to billions in fraud losses

2024-11-20
Yahoo
Why's our monitor labelling this an incident or hazard?
The article describes real harms—individuals losing money to investment scams driven by deepfake videos created with AI—and quantifies actual fraud losses (over $12 billion). Since an AI system’s outputs directly led to financial harm, it constitutes an AI Incident.

Quiz: could you spot a deepfake?

2024-11-21
Which?
Why's our monitor labelling this an incident or hazard?
The article describes the use of AI in creating deepfake scams, which can harm individuals through deception. However, it does not describe a specific event in which harm occurred, or a particular AI system malfunction or misuse leading to harm. Instead, it offers general advice and awareness about the risks of AI-generated deepfakes. It is therefore best classified as Complementary Information, as it supports understanding of AI-related risks without reporting a new incident or hazard.

Detecting deepfakes vital for a trustworthy digital future

2024-11-21
Hindustan Times
Why's our monitor labelling this an incident or hazard?
The article describes the nature and risks of deepfakes (AI-generated synthetic media) and the harms they could cause if left unchecked, but it does not report a concrete event in which a deepfake caused harm, or a malfunction or misuse of an AI system leading to harm. Because it outlines credible future harms, such as misinformation, political disruption, and personal harm, and argues for detection systems to prevent them, it qualifies as an AI Hazard rather than an incident.

Threat of AI-Generated Deepfakes Remains Deep Rooted

2024-11-23
Deccan Chronicle
Why's our monitor labelling this an incident or hazard?
The article explicitly describes AI systems (deepfake generation using GANs) being used maliciously to create realistic fake audio and video content that has caused actual harm, such as financial scams resulting in significant monetary losses, impersonation, and harassment. These harms fall under injury to persons (financial and psychological harm), violations of rights (privacy, impersonation), and harm to communities (erosion of trust, misinformation). Since these harms are occurring and directly linked to the use of AI-generated deepfakes, this qualifies as an AI Incident rather than a hazard or complementary information. The article does not merely warn about potential harm but documents ongoing harm caused by AI misuse.

Delhi High Court Calls for Action on Deepfake Regulation | Law-Order

2024-11-23
Devdiscourse
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (deepfake technology) and concerns their potential misuse, which could plausibly lead to harms such as misinformation and identity theft. However, no specific AI Incident (realized harm) is described. Instead, the court's directive to form a committee and review regulations is a governance response aimed at preventing future harms. Therefore, this is best classified as Complementary Information, as it provides context and updates on societal and governance responses to AI-related risks without reporting a new incident or hazard.

India News | HC Directs Centre to Nominate Panel Members to Examine Deepfake Menace | LatestLY

2024-11-23
LatestLY
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (deepfake technology) and discusses their potential harms, but it does not describe any realized harm caused by AI. Instead, it reports on the formation of a committee and legal proceedings aimed at mitigating the risks associated with deepfakes. This fits the definition of Complementary Information, which covers governance responses and societal measures related to AI risks, rather than an AI Incident or AI Hazard.

Half of businesses lack strong confidence in deepfake detection

2024-11-21
Digital Journal
Why's our monitor labelling this an incident or hazard?
The article centers on the risk and preparedness regarding deepfake detection, an AI-related issue, but does not report any realized harm or incident caused by AI systems. It highlights a credible risk environment and gaps in defenses, which could plausibly lead to harm in the future, but no actual harm or incident is described. Therefore, it fits best as Complementary Information, providing context and insight into the AI ecosystem and societal responses to AI-driven threats, rather than reporting an AI Incident or AI Hazard.

Deepfake AI Detection Market Size, Growth, Share by 2031

2024-11-22
theinsightpartners.com
Why's our monitor labelling this an incident or hazard?
The article focuses on market size, growth projections, and the general threat landscape for deepfake AI and detection technologies. It does not report a particular AI Incident (realized harm) or AI Hazard (a specific event posing plausible future harm). Instead, it provides broader context on the AI ecosystem, including the rising prevalence of deepfake fraud and the development of detection technologies in response. It therefore fits the definition of Complementary Information.

Innovations in deepfake detection can't come fast enough | Biometric Update

2024-11-22
Biometric Update
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems generating deepfakes (AI-generated images and audio) that have directly led to identity fraud and financial losses, which constitute harm to individuals and businesses (harm to property and communities). This meets the criteria for an AI Incident. Additionally, the article provides detailed information on research efforts, new detection datasets, and innovative detection methods, which serve as complementary information enhancing understanding and responses to the AI Incident. However, since the primary focus is on the realized harms and the ongoing fraud, the classification prioritizes AI Incident over Complementary Information.

'KYC alone is not enough': Proof, Reality Defender on threat of AI-driven fraud | Biometric Update

2024-11-22
Biometric Update
Why's our monitor labelling this an incident or hazard?
The article explicitly describes AI systems (generative AI, deepfake technology) being used maliciously to impersonate individuals in real-time video calls, leading to significant financial fraud (e.g., $25 million stolen). This constitutes direct harm to property and financial assets caused by AI misuse. The discussion of deployed AI detection systems and identity verification platforms further confirms AI involvement in both causing and mitigating harm. Therefore, the event qualifies as an AI Incident because the AI system's use has directly led to realized harm (financial fraud and identity theft).

Disinformation, Deepfakes Loom Over Australian Election

2024-11-20
Mirage News
Why's our monitor labelling this an incident or hazard?
The article explicitly describes AI systems generating deepfake videos and images that have been used to spread false information during political campaigns, directly impacting democratic processes and public trust. The harm to communities and democratic systems is realized, not just potential, as evidenced by the spread of disinformation and impersonation scams. Therefore, this event meets the criteria for an AI Incident due to the direct harm caused by AI-generated disinformation.

Real or Fake? Finding the best ways to detect digital deception

2024-11-20
IT News Online
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (deepfake generation and detection AI) and their use in detecting manipulated media. However, the article does not report any incident where AI caused harm or where harm has occurred due to AI misuse or malfunction. Instead, it details ongoing research and development to improve detection tools and support decision-making by human experts. This fits the definition of Complementary Information, as it provides supporting data and context about AI systems and societal responses to AI-related challenges without describing a new AI Incident or AI Hazard.

Rise in AI and 'nudification' apps aiding child abuse deepfakes

2024-11-21
thetimes.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI apps used to create deepfake images that sexually exploit children, which is a clear violation of human rights and causes harm to communities. The AI system's use directly leads to this harm, qualifying the event as an AI Incident. The mention of government action is complementary but secondary to the primary harm described.

HC directs Centre to nominate panel members to examine deepfake menace

2024-11-23
NewsDrum
Why's our monitor labelling this an incident or hazard?
The article does not report any realized harm caused by deepfake AI systems but highlights the potential for significant harm through misuse of deepfake technology. The court's direction to form a committee to study and recommend regulatory frameworks indicates a response to a credible risk rather than an incident. Therefore, this event fits the definition of an AI Hazard, as it concerns plausible future harms from AI systems (deepfakes) and the need for governance measures to mitigate these risks.

To Deliver on Targets of Halving VAWG in a Decade, the Government Must Ban AI Nudifying Apps - Progressive Britain

2024-11-22
Progressive Britain
Why's our monitor labelling this an incident or hazard?
The event involves the use of generative AI systems to create nude deepfake images without consent, which has directly harmed individuals' bodily autonomy, privacy, and psychological well-being, especially among vulnerable groups such as children and women. This fits the definition of an AI Incident, as the use of AI systems has caused violations of human rights and harm to communities. Because the article reports realized, ongoing harm rather than potential risks or governance responses alone, it is classified as an AI Incident rather than a hazard or complementary information.

As Election Looms, Disinformation 'Has Never Been Worse'

2024-10-23
The New York Times
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of artificial intelligence as an accelerant for creating fake or fanciful content online, such as fabricated videos and images that have been widely disseminated on social media platforms. These AI-generated disinformation campaigns have directly led to harm by spreading false accusations, undermining trust in democratic institutions, and corroding political debate. The involvement of AI in generating and amplifying disinformation that affects election integrity and public trust constitutes an AI Incident under the framework, as it has directly led to harm to communities and violations of democratic rights.