Georgia Lawmakers Use Deepfake of Senator to Demonstrate AI Election Risks

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Georgia legislators created and presented an AI-generated deepfake video impersonating Senator Colton Moore and activist Mallory Staples without their consent to illustrate the dangers of political deepfakes. The incident spurred legislative efforts to criminalize deceptive AI use in election ads, highlighting risks of voter deception and election fraud.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves an AI system (deepfake generation technology) used to create a realistic but false video impersonating political figures. The use of AI deepfake technology in political communication poses a credible risk of election interference and fraud, both of which harm the democratic process and communities. However, the article primarily discusses the legislative response to this risk and the demonstration of the technology's capabilities rather than reporting an actual incident of harm caused by AI deepfakes in an election. This situation therefore represents an AI Hazard: the AI system's use could plausibly lead to significant harm (fraudulent election interference), but no specific harmful incident has yet occurred as described in the article.[AI generated]
AI principles
Privacy & data governance
Respect of human rights
Transparency & explainability
Accountability
Democracy & human autonomy

Industries
Media, social platforms, and marketing
Government, security, and defence

Affected stakeholders
Government
Civil society

Harm types
Reputational
Human or fundamental rights

Severity
AI hazard

AI system task
Content generation


Articles about this incident or hazard

'Fraud is fraud': Georgia aims to ban AI deepfakes in political campaigns

2024-03-20
The Guardian
'Fraud is fraud': Georgia aims to ban AI deepfakes in political campaigns

2024-03-20
Yahoo News UK
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly (deepfake AI tools) used to create misleading political content that can influence elections, which constitutes harm to communities and the democratic process. The article references actual instances of AI-generated misinformation impacting voters, thus the harm is realized, not just potential. The legislative response is a reaction to these harms. Hence, this qualifies as an AI Incident because the AI system's use has directly or indirectly led to significant harm (misinformation, election interference).
'Fraud is fraud': Georgia aims to ban AI deepfakes in political campaigns

2024-03-20
AOL
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (deepfake generation using AI image and audio synthesis) used to create a misleading video impersonating political figures. However, the article does not describe a realized harm such as election interference or voter deception that has already occurred due to AI deepfakes. Instead, it highlights the potential for such harm and the legislative efforts to prevent it. Therefore, this qualifies as an AI Hazard because the AI system's use could plausibly lead to significant harm (fraudulent election interference), but no specific AI Incident has been reported yet. The article also includes contextual information about existing federal and state regulatory gaps and enforcement challenges, but the main focus is on the potential risk and legislative response rather than a completed harm event.
State legislator makes deepfake of colleague to prove deepfakes are bad

2024-03-21
The Verge
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of a deepfake AI system to create a false video of a politician without consent, which is a direct violation of personal rights and can be considered fraud. This meets the criteria for an AI Incident, as it involves the use of AI leading to a violation of rights. The harm is realized in the form of misinformation and the unauthorized use of personal characteristics, even though the intent was to support legislation against deepfakes. It is therefore not merely a potential hazard or complementary information but an actual incident involving AI harm.
Georgia Could Soon Ban Political AI Deepfakes

2024-03-21
Tech Times
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (deepfake generation tools) and their potential misuse in political contexts, which could plausibly lead to harm such as voter deception and misinformation affecting election integrity. Since no actual harm or incident has been reported, but credible risks are recognized and legislative measures are being proposed to mitigate these risks, this qualifies as an AI Hazard. The article's main focus is on the potential for harm and the legislative response, not on a realized AI Incident or a complementary update to a past incident.
Bill criminalizing AI use in deceptive election ads advanced by Georgia Senate panel

2024-03-20
Georgia Public Broadcasting
Why's our monitor labelling this an incident or hazard?
The event involves AI systems in the context of AI-generated deceptive election ads, which could plausibly lead to harm by misleading voters and undermining election integrity. However, the article centers on a bill being advanced to prevent such harms rather than describing an actual AI incident where harm has occurred. Therefore, this is an AI Hazard, as it concerns the plausible future harm from AI misuse in elections and legislative measures to address that risk.