AI-Generated Deepfake Ads Target Kentucky GOP Candidates in Defamatory Political Attacks

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Super PACs in Kentucky's Republican primary used AI-generated deepfake videos to falsely depict Rep. Thomas Massie and Ed Gallrein in compromising situations, causing reputational harm and spreading misinformation. The ads, criticized as defamatory and potentially in violation of state law, highlight the malicious use of AI in political campaigns.[AI generated]

Why's our monitor labelling this an incident or hazard?

An AI system was explicitly used to generate video content falsely showing Rep. Thomas Massie in intimate situations with other politicians, which is a clear case of malicious use of AI-generated deepfakes. The harm is reputational damage and defamation, which falls under violations of rights. The event involves direct harm caused by the AI system's outputs, meeting the criteria for an AI Incident rather than a hazard or complementary information. The article also references legal frameworks intended to prevent such harms, reinforcing the recognition of harm caused.[AI generated]
AI principles
Transparency & explainability
Democracy & human autonomy

Industries
Media, social platforms, and marketing
Government, security, and defence

Affected stakeholders
Government
General public

Harm types
Reputational
Public interest

Severity
AI incident

Business function
Marketing and advertisement

AI system task
Content generation


Articles about this incident or hazard

MAGA Attack Ad Uses AI To Show Thomas Massie 'Cheating With The Squad'

2026-05-04
Yahoo News
Why's our monitor labelling this an incident or hazard?
An AI system was explicitly used to generate video content falsely showing Rep. Thomas Massie in intimate situations with other politicians, which is a clear case of malicious use of AI-generated deepfakes. The harm is reputational damage and defamation, which falls under violations of rights. The event involves direct harm caused by the AI system's outputs, meeting the criteria for an AI Incident rather than a hazard or complementary information. The article also references legal frameworks intended to prevent such harms, reinforcing the recognition of harm caused.

MAGA Attack Ad Uses AI To Show Thomas Massie 'Cheating With The Squad'

2026-05-04
HuffPost
Why's our monitor labelling this an incident or hazard?
The advertisement uses AI-generated video to create a false and defamatory portrayal of Rep. Thomas Massie, which constitutes a violation of rights through digital forgery and nonconsensual use of likeness. This is a clear case where the AI system's use has directly led to harm (reputational harm and potential legal violations). Therefore, this event qualifies as an AI Incident under the framework, as it involves the use of an AI system leading to a violation of rights and harm to an individual.

MAGA Wildcard Deploys First Lady's Law to Blow Up GOP Attack Ad

2026-05-06
The Daily Beast
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated images used in a political attack ad, which has already been published and is causing reputational harm to a public figure. The AI system's use directly led to the dissemination of false and potentially damaging content, fulfilling the criteria for an AI Incident. The harm includes violation of rights related to personal reputation and misinformation affecting communities. The article also references legal frameworks addressing such AI misuse, reinforcing the recognition of harm. Hence, this is not merely a potential hazard or complementary information but a realized AI Incident.

'TAKE IT DOWN': MTG Says Massie Attack Ad Insinuating a Throuple with AOC and Ilhan Omar Violates Revenge Porn Law

2026-05-05
Mediaite
Why's our monitor labelling this an incident or hazard?
The event describes an AI system used to generate fake intimate images that are knowingly false and defamatory, causing harm to the individuals depicted. The use of AI-generated content in this manner directly leads to a violation of a revenge porn law, which protects individuals from non-consensual intimate imagery. This constitutes a violation of rights and harm to the individuals involved, meeting the criteria for an AI Incident. The harm is realized (defamation and legal violation), not just potential, and the AI system's role is pivotal in creating the harmful content.

Massie calls AI-generated attack ad 'defamatory' ahead of primary

2026-05-05
The Courier-Journal
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated videos used in political attack ads that falsely portray a politician in a defamatory manner. The AI system's outputs have directly led to reputational harm and potential violation of rights. The harm is occurring as the ads are publicly released and discussed, meeting the criteria for an AI Incident. The presence of AI is clear, the harm is realized, and the event is not merely a potential risk or complementary information but a concrete incident of harm caused by AI misuse.

AI 'deepfake' ads attack Massie and Gallrein in northern Kentucky GOP primary

2026-05-05
Louisville Public Media
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems generating deepfake videos used in political attack ads, which have directly caused reputational harm to the candidates and potentially misled voters, constituting harm to communities and violations of rights. The harm is realized and ongoing, not merely potential. The AI system's role is pivotal in creating the deceptive content. The presence of a state law regulating such ads and the discussion of legal recourse further supports the recognition of harm. Hence, this is classified as an AI Incident rather than a hazard or complementary information.

Thomas Massie condemns AI ad showing fake throuple with AOC, Omar

2026-05-06
El-Balad.com
Why's our monitor labelling this an incident or hazard?
The event involves AI-generated synthetic media used in political attack ads, which have directly caused reputational harm and misinformation, a form of harm to communities and potentially a violation of rights to truthful information. The AI system's use in generating these ads is central to the harm. Although the harm is reputational and political rather than physical, it fits within the framework's harm categories (harm to communities and violation of rights). Therefore, this qualifies as an AI Incident. The discussion of legal responses and disclosures is complementary but does not negate the incident classification.

Marjorie Taylor Greene rages at Trump over creepy AI "throuple" ad: "TAKE IT DOWN"

2026-05-06
LGBTQ Nation
Why's our monitor labelling this an incident or hazard?
The event clearly involves an AI system generating deepfake video content, which is a form of AI-generated media. The use of this AI-generated content in a political ad is causing public outcry and is alleged to violate laws against deepfakes. However, the article does not describe any actual harm such as injury, disruption, or legal violations that have been adjudicated or confirmed. The harm is potential reputational and informational harm, which is plausible but not confirmed as having occurred. Therefore, this situation fits the definition of an AI Hazard, as the AI-generated deepfake could plausibly lead to harm such as misinformation or reputational damage, but the article does not confirm that such harm has materialized yet.

Massie blasts AI ad as 'disgusting and defamatory'

2026-05-06
WUKY-FM 91.3 Radio
Why's our monitor labelling this an incident or hazard?
The ad explicitly uses AI-generated images to create false and defamatory portrayals of a political figure, which directly harms the individual's reputation and misleads the public. The AI system's use in generating manipulated content that causes reputational damage and misinformation fits the definition of an AI Incident, as it leads to harm to communities and violations of rights. The harm is realized, not just potential, as the defamatory ad is actively being used in a political campaign.

'Satirical' MAGA Attack Ad Slammed For Using AI To Claim GOP Rep Is In 'Throuple' With AOC And Ilhan Omar

2026-05-07
Comic Sands
Why's our monitor labelling this an incident or hazard?
The ad explicitly uses AI-generated images to create false and defamatory content about a political figure, which has been publicly condemned as a lie and a violation of laws against non-consensual intimate imagery. The AI system's use directly led to reputational harm and misinformation, fulfilling the criteria for an AI Incident involving violations of rights and harm to communities through misinformation and defamation. Therefore, this event qualifies as an AI Incident rather than a hazard or complementary information.

Woman In Labor Times How Long Her Husband Takes To Poop To See If She Can Push Their Baby Out Faster In Hilarious Viral Video

2026-05-07
Comic Sands
Why's our monitor labelling this an incident or hazard?
The article centers on an AI-generated satirical ad that falsely portrays a politician, leading to reputational harm and public outcry. The AI system is clearly involved, as it generated the manipulated images. The harm is reputational and political, involving defamation and misinformation, which can be considered a violation of rights but is not explicitly framed as a legal or fundamental rights breach under the framework. There is no indication of physical harm, critical infrastructure disruption, or other significant harms listed. The event also includes societal and governance responses such as condemnation and calls for legal action, which aligns with Complementary Information. Therefore, the event is best classified as Complementary Information rather than an AI Incident or AI Hazard.