AI-Generated Deepfake Falsely Claims UK MP Defection

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

An AI-generated deepfake video falsely depicted Conservative MP George Freeman announcing his defection to Reform UK. The video, which circulated widely on social media, used Freeman's image and voice without consent, prompting him to report the incident to police over concerns about misinformation and democratic harm.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event explicitly involves an AI system used to create a deepfake video, which is a form of AI-generated manipulated content. The deepfake has been widely circulated, causing misinformation and potential harm to political discourse and democracy, which qualifies as harm to communities. Since the disinformation is actively spreading and causing harm, this is an AI Incident. The MP's report to police and the description of the video as a fabrication confirm the AI system's role in causing harm through misuse.[AI generated]
AI principles
Accountability · Privacy & data governance · Respect of human rights · Safety · Transparency & explainability · Democracy & human autonomy

Industries
Media, social platforms, and marketing

Affected stakeholders
Government · General public

Harm types
Reputational · Public interest · Human or fundamental rights

Severity
AI incident

AI system task
Content generation

In other databases

Articles about this incident or hazard

Tory MP George Freeman reports deepfake defection video to police

2025-10-18
BBC
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system generating a deepfake video, which is a form of AI-generated content. The video falsely portrays the MP making a political announcement, which constitutes misinformation and disinformation that can harm communities by distorting democratic processes. Since the deepfake is circulating and causing harm, this qualifies as an AI Incident due to violation of rights related to political expression and harm to communities through disinformation.
Tory MP reports 'AI-generated deepfake' video of him announcing defection to Reform UK

2025-10-18
The Guardian
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system generating a deepfake video, which is a clear example of AI-generated misinformation. The harm includes violation of the MP's rights (image and voice used without consent) and harm to the community by spreading false political information that can disrupt democratic processes. Since the video has been widely circulated and the MP has reported it to the police, the harm is realized, making this an AI Incident rather than a hazard or complementary information. The AI system's use directly led to the harm described.
Tory MP George Freeman reports deepfake Reform defection video to...

2025-10-18
Daily Mail Online
Why's our monitor labelling this an incident or hazard?
The event explicitly involves a deepfake video, which is a product of AI-generated synthetic media. The video falsely shows the MP announcing a political defection, which he denies and has reported to the police. The harm caused is the spread of political disinformation, which can distort and disrupt democratic processes and harm communities by spreading false information. Since the AI system's use directly led to this harm, this qualifies as an AI Incident under the framework, specifically harm to communities and violation of rights related to truthful information and political integrity.
Tory MP reports deepfake defection video to police

2025-10-18
Yahoo
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system used to create a deepfake video, which is a form of AI-generated content. The use of this AI system has directly led to harm in the form of political disinformation, which can disrupt democratic processes and harm communities by spreading false information. The MP has reported the incident to the police, indicating the harm is realized and significant. Therefore, this qualifies as an AI Incident due to the direct harm caused by the AI-generated disinformation.
Tory MP George Freeman reports deepfake Reform defection video to police

2025-10-18
Yahoo
Why's our monitor labelling this an incident or hazard?
The event describes a deepfake video created using AI technology that falsely depicts a politician defecting to another party. The AI system's use here directly led to harm in the form of political disinformation, which can be considered harm to communities and a violation of rights related to truthful information and democratic integrity. The harm is realized as the video was widely circulated and has the potential to disrupt democracy. Therefore, this qualifies as an AI Incident due to the direct harm caused by the AI-generated deepfake content.
Deepfake video of Tory MP defecting to Reform reported to police

2025-10-18
Yahoo
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system used to generate a deepfake video, which is a form of AI-generated content. The use of this AI system has directly led to harm by spreading false political information, which can disrupt democratic processes and harm the community's trust. The harm is realized, not just potential, as the video has been circulated and caused concern. Therefore, this qualifies as an AI Incident under the framework, as it involves the use of AI leading to harm to communities and violation of rights through misinformation.
Deepfake video of Tory MP defecting to Reform reported to police

2025-10-18
The Telegraph
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (deepfake technology) used to generate a fake video that misrepresents a public figure's political affiliation. The circulation of this video on social media constitutes a violation of rights through misinformation and harms the community by spreading false information that can disrupt political discourse. Since the harm (misinformation and reputational damage) is occurring due to the AI-generated content, this qualifies as an AI Incident under the framework.
Tory MP reports deepfake video of him defecting to Reform UK to police

2025-10-18
EXPRESS
Why's our monitor labelling this an incident or hazard?
The event clearly involves an AI system (deepfake video generation) used maliciously to create false content about a public figure. While the video is fake and has been circulated, the article does not report any realized harm such as injury, rights violations, or operational disruption. The MP's reporting to police suggests concern about potential harm. Given that the harm is plausible but not confirmed as having occurred, this fits the definition of an AI Hazard rather than an AI Incident. It is not merely complementary information because the main focus is on the AI-generated deepfake and its potential consequences, not on responses or broader ecosystem context.
Tory MP reports deepfake claiming he joined Reform to police

2025-10-18
Metro
Why's our monitor labelling this an incident or hazard?
The event involves an AI system used to generate a deepfake video, the use of which led to harm in the form of misinformation and potential disruption to democratic processes and public trust. The harm is realized, as the video has been widely circulated, causing reputational damage and political misinformation. Therefore, this qualifies as an AI Incident due to the direct harm caused by the AI-generated content.
Conservative MP reports fake video to police after it claimed he had 'joined Reform'

2025-10-17
Eastern Daily Press
Why's our monitor labelling this an incident or hazard?
The event clearly involves an AI system used to generate a deepfake video, which is a form of AI-generated content. The harm realized includes misinformation that damages the MP's reputation and poses a threat to democratic processes, which constitutes harm to communities and a violation of democratic rights. Since the AI-generated video has been widely circulated and caused actual harm, this qualifies as an AI Incident. The MP's reporting and public statement further confirm the harm has occurred and is significant.
Conservative MP George Freeman reports AI-generated 'defection' video to police - The Global Herald

2025-10-18
The Global Herald
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system generating a deepfake video, which is a clear example of AI-generated content causing harm through misinformation. The harm is realized as the video is circulating and falsely portrays the MP's political stance, which can mislead the public and disrupt democratic processes. The MP has reported the incident to authorities, indicating the seriousness of the harm. This fits the definition of an AI Incident because the AI system's use has directly led to harm to communities (disinformation affecting democracy) and a violation of rights (misuse of the MP's image and voice without consent).
Deepfake chaos as fake video of Tory MP defecting to Reform referred to police

2025-10-18
GB News
Why's our monitor labelling this an incident or hazard?
An AI system was used to generate a fake video (deepfake) of a Tory MP falsely announcing a defection to another party. The video is circulating on social media, which can cause harm to the individual's reputation and mislead the public, thus harming communities by spreading misinformation. Since the harm is occurring due to the AI-generated content, this qualifies as an AI Incident under the definition of harm to communities through misinformation caused by AI.
Video of George Freeman MP announcing defection to Reform UK is fake - Full Fact

2025-10-20
Full Fact
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (deepfake generation technology) used to create a fake video that misrepresents a public figure, leading to misinformation and reputational harm. This fits the definition of an AI Incident as the AI system's use has directly led to harm (harm to communities through misinformation and potential violation of rights). The article confirms the video is fake and AI-generated, and the MP has reported it to authorities, indicating the seriousness of the harm. Therefore, the classification is AI Incident.
AI Disinformation Threat Grows as Deepfake Targets UK MP in Political Attack

2025-10-20
International Business Times UK
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (generative AI creating a hyper-realistic deepfake video) whose use directly led to harm to communities by spreading political disinformation that distorted public perception and trust. The harm is realized, not just potential, as the video caused confusion and a dent in public trust. This fits the definition of an AI Incident because the AI system's use directly led to harm to communities and democratic processes. The article also discusses the broader implications and calls for governance responses, but the primary focus is the incident itself.
Creator of fake MP Reform defection video speaks out after police are called in

2025-10-21
Eastern Daily Press
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system used to create deepfake videos that falsely portray an MP defecting to another party, which has been widely disseminated and caused reputational and democratic harm. The harm is realized, not just potential, as the video has been shared and has influenced public perception. The involvement of police and fact-checkers confirms the seriousness of the incident. Therefore, this meets the criteria for an AI Incident due to violations of rights (misinformation affecting democratic rights) and harm to communities (damage to democratic trust).
Fact check: Fake MP defection video, bus pass verification and...

2025-10-24
Daily Mail Online
Why's our monitor labelling this an incident or hazard?
The video is explicitly described as AI-generated, involving lip-sync deepfake technology to create a false narrative about the MP's political affiliation. This constitutes an AI system's use leading directly to harm in the form of misinformation and reputational damage, which is a harm to communities and individuals. Therefore, this qualifies as an AI Incident. The other parts of the article about bus passes and immigration data do not involve AI systems causing or potentially causing harm, so they are not relevant to the AI classification.
Fact check: Fake MP defection video, bus pass verification and immigration removals

2025-10-24
Yahoo News UK
Why's our monitor labelling this an incident or hazard?
The article explicitly identifies the video as AI-generated deepfake content, meaning an AI system's use led to misinformation. The misinformation harms the MP's reputation and misleads the public, which constitutes harm to communities and potentially a violation of the right to truthful information. The harm is realized, as the video is circulating and has been reported to authorities. Therefore, this qualifies as an AI Incident. The other claims about bus passes and immigration removals are fact-checking clarifications without AI system involvement causing harm, so they are unrelated or complementary information but not incidents or hazards.
People will not need to 're-verify' their bus passes from October - Full Fact

2025-10-24
Full Fact
Why's our monitor labelling this an incident or hazard?
The article references AI-generated misleading claims but does not report any realized harm or plausible future harm caused by an AI system. The focus is on correcting misinformation and clarifying official positions, which fits the definition of Complementary Information. There is no direct or indirect harm caused by AI systems described, nor a credible risk of harm from AI use or malfunction. Hence, it is not an AI Incident or AI Hazard. It is not unrelated because it involves AI-generated content, but the main focus is on clarifying misinformation and official responses, making it Complementary Information.
Fact check: Fake MP defection video, bus pass verification and immigration removals

2025-10-24
Shropshire Star
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI-generated video used to spread false information about an MP's defection, which is misinformation facilitated by AI technology. However, the article does not report any direct harm such as injury, rights violations, or disruption caused by this video, only the spread of false information. The bus pass claims and immigration data do not involve AI systems causing harm or plausible harm. The main focus is on fact-checking and clarifying misinformation, which fits the definition of Complementary Information as it provides context and response to AI-related misinformation without reporting a new AI Incident or Hazard.