AI-Generated Deepfakes Disrupt Elections and Spread Misinformation

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

AI-powered deepfake technology has been used to create convincing fake videos, audio, and images, leading to misinformation, harassment, privacy violations, and disruption of democratic processes in the US and globally. Notable incidents include deepfake robocalls in the 2024 US election and widespread manipulation threatening election integrity in India.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves an AI system generating a deepfake audio recording that was used to mislead voters, directly causing harm to the democratic process by potentially influencing election participation. This fits the definition of an AI Incident because the AI-generated content has directly led to harm to communities (harm to democratic processes and voter trust). The article also discusses legislative responses, but the primary focus is on the realized harm from the AI-generated deepfake robocall and the broader implications for election integrity.[AI generated]
AI principles
Accountability
Privacy & data governance
Respect of human rights
Transparency & explainability
Democracy & human autonomy

Industries
Media, social platforms, and marketing
Government, security, and defence

Affected stakeholders
General public
Government

Harm types
Psychological
Human or fundamental rights
Public interest

Severity
AI incident

AI system task
Content generation


Articles about this incident or hazard

States are cracking down on deepfakes ahead of the 2024 election

2024-03-27
ABC News
Why's our monitor labelling this an incident or hazard?
The event involves an AI system generating a deepfake audio recording that was used to mislead voters, directly causing harm to the democratic process by potentially influencing election participation. This fits the definition of an AI Incident because the AI-generated content has directly led to harm to communities (harm to democratic processes and voter trust). The article also discusses legislative responses, but the primary focus is on the realized harm from the AI-generated deepfake robocall and the broader implications for election integrity.

Misinformation Spread Via Deepfakes Biggest Threat To Upcoming Polls In India: Tenable

2024-03-25
Zee News
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated deepfakes and fake content being used to spread misinformation and disinformation on social media platforms, which themselves rely on AI for content generation and dissemination. The harm described is the potential influence on elections, which is a significant harm to communities and democratic rights. Since the article focuses on the threat and potential for harm rather than a realized harm event, it fits the definition of an AI Hazard. The involvement of AI in generating deepfakes and the plausible risk of election interference align with the criteria for an AI Hazard rather than an AI Incident or Complementary Information.

Commentary: Deepfakes are still new, but 2024 could be the year they have an impact on elections

2024-03-23
CNA
Why's our monitor labelling this an incident or hazard?
The article explicitly describes AI systems (deepfake generation tools) being used to create synthetic media that have already caused harm by spreading false information about political candidates and manipulating election narratives. The harms include misinformation affecting democratic integrity and harassment limiting political participation, which are direct harms to communities and violations of rights. The involvement of AI in generating these deepfakes is clear and central to the harm described. Hence, this is an AI Incident rather than a hazard or complementary information.

How to Find AI 'Deepfake' Images

2024-03-27
VOA
Why's our monitor labelling this an incident or hazard?
The article clearly describes harms caused by AI systems (deepfake generation tools) that have already occurred, such as cheating, identity theft, propaganda, and election interference, which are harms to communities and violations of rights. The AI systems involved are generative AI models creating fake images and videos. Therefore, this event qualifies as an AI Incident because the development and use of AI systems have directly led to significant harms. The article also discusses detection tools and challenges but the primary focus is on the harms caused by AI deepfakes that are already happening.

Tech expert says we must 'act now' against AI as 'deepfakes make anyone target'

2024-03-26
Daily Star
Why's our monitor labelling this an incident or hazard?
The article explicitly describes the use of AI systems to create non-consensual explicit deepfake images, which directly harms individuals by violating their rights and causing psychological and social harm. This fits the definition of an AI Incident because the AI system's use has directly led to violations of human rights and harm to communities. The harms are ongoing and realized, not merely potential. The article also discusses governance responses but the primary focus is on the existing harms caused by AI deepfakes.

A whole new world: Cybersecurity expert calls out the breaking of online trust

2024-03-26
TheStreet
Why's our monitor labelling this an incident or hazard?
The article clearly involves AI systems (deepfake generators and AI image/audio synthesis) whose use has directly led to significant harms including online harassment, privacy violations, and misinformation, which constitute violations of rights and harm to communities. The harms are realized and ongoing, not merely potential. Therefore, this qualifies as an AI Incident. The discussion of regulatory and corporate responses is complementary but secondary to the primary focus on the harms caused by AI misuse.

The rise of Deepfakes: Can IP solutions help?

2024-03-25
Lexology
Why's our monitor labelling this an incident or hazard?
The article centers on the legal and regulatory context surrounding deepfakes, which are AI-generated content, and discusses potential harms and enforcement strategies. It does not describe a concrete AI Incident (no specific harm event caused by AI is reported) nor an AI Hazard (no specific plausible future harm event is described). Instead, it provides a detailed discussion of the challenges and responses related to AI-generated deepfakes, including ongoing and proposed legal measures. This fits the definition of Complementary Information, as it enhances understanding of AI harms and governance without reporting a new incident or hazard.

Lok Sabha Election 2024: Misinformation Spread Through AI-Generated Deepfakes and Fake Content Biggest Threat to Upcoming Elections in India, Says Tenable

2024-03-24
LatestLY
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly through the use of generative AI to create deepfakes and fake content. The harm described is the potential disruption of the electoral process and harm to communities via misinformation and disinformation, which could influence election outcomes and public trust. Since the article focuses on the threat and potential for harm rather than a realized incident, it qualifies as an AI Hazard. The involvement is through the use of AI-generated content spreading misinformation, which could plausibly lead to violations of rights and harm to communities if not mitigated.

A whole new world: Cybersecurity expert calls out the breaking of online trust

2024-03-26
Post and Courier
Why's our monitor labelling this an incident or hazard?
The article explicitly details the use of AI systems to generate deepfake content that has caused real harm, such as harassment, bullying, and misinformation, which are violations of rights and harm to communities. The involvement of AI in generating these synthetic media is clear and central to the harms described. Therefore, this qualifies as an AI Incident because the AI systems' use has directly led to significant harms. While the article also discusses potential regulatory responses and societal implications, the primary focus is on the realized harms caused by AI-generated deepfakes.

How to spot AI-generated deepfake images

2024-03-24
Jamaica Gleaner
Why's our monitor labelling this an incident or hazard?
The article clearly involves AI systems (generative AI models like DALL-E, Midjourney, and others) used to create deepfake images and videos, which can cause harm such as scams and election manipulation. However, it does not report a concrete AI Incident where harm has already materialized or a specific AI Hazard event with a credible imminent risk. Instead, it provides general information and advice about the risks and detection of AI deepfakes, which fits the definition of Complementary Information as it enhances understanding of AI harms and responses without describing a new incident or hazard.

A whole new world: Cybersecurity expert calls out the breaking of online trust

2024-03-26
Lexington Herald Leader
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems (deepfake generators and AI image generators) being used to create harmful content that has been actively spread online, causing real harm to individuals (e.g., minors and celebrities) and communities (through harassment and disinformation). The harms include violations of rights, online harassment, and erosion of trust, which fit the definitions of AI Incident. The involvement of AI is clear and central to the harms described. Although the article also discusses potential future regulatory responses, the primary focus is on realized harms, not just potential risks or complementary information.

Detecting Deepfakes: How to Spot AI Fakery Online

2024-03-24
Cryptopolitan
Why's our monitor labelling this an incident or hazard?
The article describes the use of AI systems to create deepfake content that causes harm to communities by spreading misinformation and potentially manipulating elections, which constitutes realized harm. It also discusses AI tools developed to detect such content, but the main focus is on the existing and ongoing harms caused by AI-generated deepfakes. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to harm through misinformation and deception online.

Is the UK's Deepfake Defense Enough? A Billion Pound Industry Emerges to Combat Threats

2024-03-27
Cryptopolitan
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (deepfake generation and detection technologies) and addresses the potential for harm (disinformation, societal division, election interference) that could plausibly arise from deepfake misuse. However, the article does not describe any realized harm or a specific incident where AI caused direct or indirect harm. Instead, it reports on the growth of defense mechanisms and the importance of these efforts to prevent harm. Therefore, this is best classified as Complementary Information, as it provides context and updates on societal and technical responses to AI-related threats without describing a new AI Incident or AI Hazard.

One Tech Tip: How to spot AI-generated deepfake images

2024-03-25
The Columbian
Why's our monitor labelling this an incident or hazard?
The article describes the use of AI systems (generative AI models like DALL-E, Midjourney, OpenAI's Sora) to create deepfake images and videos. It emphasizes the potential for these AI-generated fakes to be used maliciously, which could plausibly lead to harms such as scams, identity theft, and election interference. However, it does not document a specific realized harm or incident but rather warns about the credible risks associated with these AI systems. Therefore, this qualifies as an AI Hazard, as the development and use of these AI systems could plausibly lead to an AI Incident involving harm to communities or violations of rights.

THE DANGERS OF DEEPFAKES

2024-03-25
pantagraph.com
Why's our monitor labelling this an incident or hazard?
The article explicitly details the use of generative AI systems to create deepfakes that have been disseminated during elections in various countries, causing real and ongoing harm by misleading voters and undermining democratic processes. The harms are direct and realized, not hypothetical, as evidenced by specific examples of fake videos and audio clips influencing public perception and election outcomes. The AI systems' use is central to the harm, fulfilling the criteria for an AI Incident rather than a hazard or complementary information.

Why Deepfakes are Dangerous and How to Identify?

2024-03-25
Techiexpert.com
Why's our monitor labelling this an incident or hazard?
The article clearly involves AI systems (deepfake generation using variational auto-encoder networks and other AI techniques). It describes harms that have occurred (e.g., fake calls from President Biden, fake pictures of Taylor Swift) and potential harms (election interference, scams, reputation damage). However, it does not describe a specific new event or incident but rather explains the general risks and detection challenges associated with deepfakes. Therefore, it is not reporting a new AI Incident or AI Hazard but is providing complementary information that enhances understanding of AI-related harms and responses.

A whole new world: Cybersecurity expert calls out the breaking of online trust

2024-03-26
The Daily Courier
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (deepfake generators) that have directly led to significant harms such as online harassment, bullying, violation of privacy and intellectual property rights, and widespread disinformation. The harms are ongoing and have affected real individuals, including minors and public figures, fulfilling the criteria for an AI Incident. The article's focus is on the realized harms and societal impact rather than just potential risks or responses, so it is not merely Complementary Information or an AI Hazard.

Commentary: Bringing people and technology together to combat the threat of deepfakes

2024-03-25
Maryland Matters
Why's our monitor labelling this an incident or hazard?
The article involves AI systems in the form of deepfake generation and detection technologies, which are relevant to AI harms. However, it does not describe a concrete AI Incident (no realized harm from AI use or malfunction) nor an AI Hazard (no specific event or circumstance indicating plausible future harm). Instead, it offers complementary information about research, societal challenges, and governance needs related to AI deepfakes. Therefore, it fits the definition of Complementary Information, as it enhances understanding of AI harms and responses without reporting a new incident or hazard.

Misinformation spread via deepfakes biggest threat to upcoming polls in India: Tenable

2024-03-24
Weekly Voice
Why's our monitor labelling this an incident or hazard?
The event involves AI systems generating deepfake content used for misinformation and disinformation, which could plausibly lead to harm to communities by undermining election integrity and public trust. Since the article describes this as a significant threat and potential influence operation but does not document actual realized harm yet, it fits the definition of an AI Hazard. The involvement of AI-generated deepfakes and the credible risk of election interference align with the criteria for an AI Hazard rather than an Incident or Complementary Information.

Emerging AI technology outpaces regulation, offers positive and negative aspects

2024-03-23
Yellow Scene Magazine
Why's our monitor labelling this an incident or hazard?
The article explicitly describes harms caused by AI-generated deepfakes, including harassment, misinformation affecting elections, and reputational damage, which are direct harms to individuals and communities. It also discusses the erosion of media trust and potential legal violations related to non-consensual content, constituting violations of rights. The presence and use of AI systems (deep learning algorithms generating deepfakes) are central to these harms. Although the article also discusses potential future risks and regulatory responses, the primary focus is on existing harms and incidents caused by AI deepfake technology. Hence, the classification as an AI Incident is appropriate.

Navigating the Deepfake Dilemma: A Comprehensive Analysis of Its Impact on Democracies and Security

2024-03-26
ETV Bharat News
Why's our monitor labelling this an incident or hazard?
The article explicitly describes the use of AI systems (deepfake technology leveraging machine learning and GANs) that have directly led to harms including misinformation campaigns affecting democracies, privacy violations, and financial scams. These constitute violations of rights and harm to communities, fitting the definition of AI Incidents. The article also discusses responses and research efforts, but the primary focus is on realized harms caused by AI systems, not just potential or complementary information.

The Deepfake Threat to the 2024 US Presidential Election

2024-03-27
GNET
Why's our monitor labelling this an incident or hazard?
The article explicitly describes AI-generated deepfake content being used in a real election context, causing misinformation and disruption to the democratic process, which is a harm to communities and political rights. The AI system's outputs (deepfakes) have been deployed and caused actual harm, not just potential harm. The article also discusses extremist use of AI for propaganda and recruitment, further evidencing realized harms. Hence, this qualifies as an AI Incident rather than a hazard or complementary information.

Hillary Clinton, election officials warn AI could threaten elections

2024-03-29
mint
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated disinformation and deepfake videos as tools that could be used to misinform voters and disrupt elections, which constitutes a plausible risk of harm to communities and democratic rights. Although a single robocall incident is mentioned, it was addressed and did not represent large-scale harm. The main narrative centers on warnings and preparations for potential AI-driven election interference rather than confirmed widespread harm. Hence, this fits the definition of an AI Hazard rather than an AI Incident or Complementary Information.

Hillary Clinton: AI Could Disrupt 2024 Election

2024-03-29
NewsMax
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated content being used to spread disinformation that misleads voters, which is a direct violation of rights and causes harm to communities by undermining democratic processes. The robocall example shows realized harm, not just potential risk. The involvement of AI in generating misleading content and deepfakes that influence voter behavior fits the definition of an AI Incident, as the AI system's use has directly led to harm to communities and violations of rights. The article does not merely warn about potential future harm but documents actual AI-enabled disinformation impacting the election cycle.

Pennsylvania and other states push to combat AI threat to elections

2024-04-02
Pennsylvania Capital-Star
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated content (deepfakes, AI-generated voices) being used to deceive voters, including a concrete example of an AI-generated robocall that suppressed voter turnout. This constitutes direct harm to the democratic process and communities. The involvement of AI in creating misleading content that has already caused harm meets the criteria for an AI Incident. The legislative responses and advocacy efforts described are complementary information but do not negate the presence of realized harm. Hence, the primary classification is AI Incident.

States rush to combat AI threat to elections

2024-04-02
Michigan Advance
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems (generative AI creating deepfakes and AI-generated voices) being used to produce misleading content that has already been disseminated to voters, causing confusion and potential disruption to elections. This meets the definition of an AI Incident because the AI system's use has directly led to harm to communities (disruption of democratic processes and voter confusion). While it also discusses potential future risks and legislative responses, the presence of actual AI-generated disinformation campaigns and their impact qualifies this as an AI Incident rather than a hazard or complementary information.

States rush to combat AI threat to elections

2024-04-01
Virginia Mercury
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated content (deepfakes, AI-generated robocalls) that has already been used to mislead voters and suppress votes, which is a direct harm to democratic processes and communities. The involvement of AI systems in generating this disinformation is clear. The harms are realized, not just potential, as evidenced by the robocall incident and the Slovakian audio example. Therefore, this is an AI Incident. The discussion of legislative measures and advocacy reports supports understanding but does not overshadow the primary focus on the incident of AI-driven election interference.