AI-Generated Deepfake Audio of UK Opposition Leader Circulates, Threatening Election Integrity

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

AI-generated deepfake audio and video clips falsely depicting UK Labour leader Keir Starmer verbally abusing staff and criticizing Liverpool circulated widely on social media during the Labour Party conference. The deepfakes misled the public, sparked political disruption, and raised serious concerns about AI's threat to democratic processes and election integrity.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly mentions AI-generated deepfake audio recordings that have been widely viewed and believed, leading to misinformation and potential harm to democratic processes. This constitutes harm to communities and a violation of democratic rights, directly linked to the use of AI systems generating misleading content. Therefore, this qualifies as an AI Incident due to realized harm caused by AI-generated misinformation affecting political processes.[AI generated]
AI principles
Accountability
Transparency & explainability
Robustness & digital security
Safety
Democracy & human autonomy
Privacy & data governance
Respect of human rights

Industries
Media, social platforms, and marketing
Government, security, and defence
Digital security

Affected stakeholders
General public

Harm types
Reputational
Public interest

Severity
AI incident

AI system task
Content generation

In other databases

Articles about this incident or hazard

Keir Starmer deepfake shows alarming AI fears are already here

2023-10-09
The Independent
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated deepfake audio recordings that have been widely viewed and believed, leading to misinformation and potential harm to democratic processes. This constitutes harm to communities and a violation of democratic rights, directly linked to the use of AI systems generating misleading content. Therefore, this qualifies as an AI Incident due to realized harm caused by AI-generated misinformation affecting political processes.
Deepfakes warning after false video emerges of Keir Starmer during Labour conference

2023-10-09
Yahoo Sports
Why's our monitor labelling this an incident or hazard?
The event involves AI systems used to generate deepfake audio and video content that falsely portrays a political figure, which has been disseminated on social media platforms. This has caused harm by misleading the public and threatening democratic processes, fulfilling the criteria for harm to communities. The harm is realized as the fake clips have been widely viewed and circulated, not merely a potential risk. Therefore, this qualifies as an AI Incident due to the direct link between AI-generated content and harm to communities through misinformation and election interference.
Deepfake Audio Is a Political Nightmare

2023-10-09
Wired
Why's our monitor labelling this an incident or hazard?
The event describes a suspicious audio recording that is likely AI-generated (an AI system is involved in creating deepfake audio). The potential harm is to the democratic process and public trust, which falls under harm to communities. Since the harm is not confirmed but plausible if the deepfake spreads and influences voters, this constitutes an AI Hazard rather than an AI Incident. The article focuses on the risk and ongoing investigation rather than confirmed harm or misuse.
'Deepfake' Starmer clips posted during Labour conference in democracy 'threat'

2023-10-09
Mirror
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems generating deepfake audio clips that have been viewed by millions and are misleading the public. The harm is realized as the false clips threaten democracy by spreading misinformation and undermining trust, which constitutes harm to communities. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to significant harm.
Shadow of deepfake hangs over British politics

2023-10-11
ANSA.it
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly through the creation of deepfake audio content, which is a known AI application. The use of these AI-generated deepfakes has directly caused harm by misleading the public and damaging political reputations, which constitutes harm to communities and democratic processes. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to significant harm. The article also discusses the broader risks and the need for governance, but the primary focus is on the realized harm from the deepfake incidents.
Deepfakes warning after forged video emerges of Keir Starmer at Labour conference

2023-10-09
Evening Standard
Why's our monitor labelling this an incident or hazard?
The event involves AI systems used to create deepfake videos and audio that have been shared online, directly leading to harm in the form of misinformation and potential disruption to democratic processes and political communities. This constitutes harm to communities and a violation of rights related to truthful information dissemination. Therefore, it qualifies as an AI Incident because the AI-generated content has already caused harm by misleading the public and threatening election integrity.
Deepfake Audio Is a Political Nightmare

2023-10-09
WIRED UK
Why's our monitor labelling this an incident or hazard?
The event describes the use or potential use of an AI system (audio deepfake technology) to create manipulated content that is actively circulating and causing harm to political communities by spreading misinformation. This fits the definition of an AI Incident because the AI-generated content is directly leading to harm to communities by undermining democratic trust and potentially influencing elections. The harm is realized as the audio is already circulating and causing political disruption, not merely a potential future risk.
The professor warning of an 'avalanche' of deep fakes ahead of next year's election

2023-10-11
The Herald
Why's our monitor labelling this an incident or hazard?
The event involves AI systems capable of generating deepfake audio, which are being used to create misleading political content. Although the article does not confirm that these deepfakes have definitively caused harm, it strongly suggests a credible risk of harm to democratic processes and communities through misinformation and manipulation. This fits the definition of an AI Hazard, as the development and use of deepfake AI systems could plausibly lead to significant harms such as election interference and erosion of public trust. There is no indication that a specific AI Incident (realized harm) has occurred yet, nor is the article primarily about responses or updates, so it is not Complementary Information. It is clearly related to AI and its societal impact, so it is not Unrelated.