Chris Cuomo Shares AI-Generated Deepfake of AOC, Spreading Misinformation

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

NewsNation host Chris Cuomo mistakenly shared an AI-generated deepfake video of Rep. Alexandria Ocasio-Cortez making inflammatory remarks, believing it to be real. The incident sparked social media backlash and highlighted the risks of AI-generated misinformation and reputational harm caused by deepfakes.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event centers on a deepfake video, a product of AI technology used to create realistic but fabricated content. The misuse of this AI-generated content led to misinformation and reputational harm, which falls under harm to communities and individuals. Since the deepfake video was actively spread and believed, causing harm, this qualifies as an AI Incident due to the direct harm caused by the AI system's output (the deepfake).[AI generated]
AI principles
Accountability, Transparency & explainability, Safety, Respect of human rights, Democracy & human autonomy

Industries
Media, social platforms, and marketing

Affected stakeholders
Government, General public

Harm types
Reputational, Public interest

Severity
AI incident

AI system task:
Content generation

In other databases

Articles about this incident or hazard

Chris Cuomo Issues WTF Non-Apology To AOC After Falling For Obvious Deepfake Video

2025-08-07
Yahoo News
Why's our monitor labelling this an incident or hazard?
The event centers on a deepfake video, which is a product of AI technology used to create realistic but fabricated content. The misuse of this AI-generated content led to misinformation and reputational harm, which falls under harm to communities and individuals. Since the deepfake video was actively spread and believed, causing harm, this qualifies as an AI Incident due to the direct harm caused by the AI system's output (the deepfake).
Chris Cuomo mocked for response after falling for deepfake AOC video

2025-08-07
The Guardian
Why's our monitor labelling this an incident or hazard?
An AI system was involved in creating a deepfake video, which is an AI-generated synthetic media. The misuse of this AI-generated content led to misinformation and reputational harm, which can be considered harm to communities and public trust. Since the deepfake video was shared and caused confusion and public criticism, this constitutes an AI Incident due to the realized harm from the AI system's use (the deepfake).
Chris Cuomo Gives AOC Bizarre Non-Apology After Falling for Sydney Sweeney Deepfake

2025-08-07
Yahoo
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system generating a deepfake video, which is a form of AI-generated content. The misuse of this AI-generated deepfake led to misinformation being spread publicly, which is a harm to communities and public trust. The harm has already occurred as the misinformation was disseminated and caused confusion and reputational impact. Therefore, this qualifies as an AI Incident due to the direct role of the AI system in causing harm through misinformation.
Chris Cuomo mocked after falling for deepfake video of AOC slamming...

2025-08-07
New York Post
Why's our monitor labelling this an incident or hazard?
The AI system (deepfake video generation) was used to create a misleading video, which was mistakenly believed to be real by a public figure. However, the harm was limited to reputational embarrassment and misinformation that was promptly addressed. There is no indication of injury, rights violations, or other significant harm resulting from the AI system's use. Therefore, this event does not meet the threshold for an AI Incident or AI Hazard. Instead, it serves as a complementary example illustrating societal challenges with AI-generated content and the importance of critical media literacy.
'Good save!' Chris Cuomo roasted for bizarre response after falling for AOC deepfake

2025-08-07
Raw Story
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system in the form of a deepfake video, which is an AI-generated manipulated video designed to deceive viewers. The sharing and promotion of this deepfake by Chris Cuomo directly led to misinformation and reputational harm, fulfilling the criteria for an AI Incident. The harm is realized as the misinformation was disseminated publicly, causing confusion and reputational damage. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information.
Chris Cuomo trolled for falling for AOC deepfake video about Sydney Sweeney

2025-08-07
The Independent
Why's our monitor labelling this an incident or hazard?
The event centers on a deepfake AI-generated video that fooled a journalist and others, but the video was marked as parody and no direct or significant harm (such as physical injury, legal rights violations, or critical infrastructure disruption) is reported. The misinformation spread is a recognized risk of AI deepfakes, but the article focuses on the social media reaction, public correction, and commentary on the implications for media literacy and AI's societal impact. This fits the definition of Complementary Information, as it provides context and societal response to AI-generated misinformation without documenting a concrete AI Incident or a plausible future hazard causing harm.
Cuomo's Bizarre Apology to AOC After Sydney Sweeney AI Drama

2025-08-07
The Daily Beast
Why's our monitor labelling this an incident or hazard?
The event explicitly involves a deepfake video, which is an AI system generating manipulated content. The misuse of this AI-generated content led to misinformation and reputational harm to a public figure, fulfilling the criteria for harm to communities and violation of rights. The AI system's role is pivotal as the deepfake video was the direct cause of the misinformation and subsequent public confusion and apology. Hence, this is classified as an AI Incident.
NewsNation's Cuomo falls for AOC Sydney Sweeney deepfake

2025-08-06
Salon.com
Why's our monitor labelling this an incident or hazard?
The video is explicitly described as a deepfake made with AI, which misled a public figure and the public, causing reputational and informational harm. This fits the definition of an AI Incident because the AI system's use directly led to harm through misinformation and deception. The event is not merely a potential hazard or complementary information but a realized harm involving an AI system.
Ex-CNN anchor duped by deepfake of AOC -- and it immediately backfires

2025-08-07
NJ.com
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system generating a deepfake video, which was then shared and believed to be real by a public figure, causing misinformation. This misinformation harms the community by spreading false narratives and undermining trust in public communication. The harm has already occurred as the video was shared and believed, fulfilling the criteria for an AI Incident. The AI system's use in creating the deepfake is central to the incident, and the resulting harm is direct and realized.
'Idiot': Big name anchor buried in mockery over meltdown at deepfaked AOC video

2025-08-06
Raw Story
Why's our monitor labelling this an incident or hazard?
The event explicitly involves a deepfaked video, which is an AI-generated manipulated video. The sharing of this video by a public figure led to public mockery and highlighted the spread of misinformation. The harm is realized as the deepfake misrepresents a lawmaker, potentially damaging reputation and misleading the public. This fits the definition of an AI Incident because the AI system's use directly led to harm in the form of misinformation and reputational damage, which is harm to communities and a violation of rights.
Chris Cuomo Falls for Embarrassingly Obvious Deepfake of AOC, Who Tells Him, 'Use Your Critical Thinking Skills'

2025-08-06
Mediaite
Why's our monitor labelling this an incident or hazard?
The event explicitly involves a deepfake AI system creating a fabricated video of a politician. The misuse of this AI-generated content led to the spread of misinformation and reputational harm, which fits the definition of an AI Incident due to indirect harm to communities and violation of rights. The AI system's use directly led to the harm (misinformation and reputational damage). Therefore, this is classified as an AI Incident.
Chris Cuomo Fell for a Deepfake of AOC, and Then Didn't Apologize for It

2025-08-07
Distractify
Why's our monitor labelling this an incident or hazard?
Deepfake videos are created using AI systems that generate realistic but fake content. The sharing of such a video by a public figure, without apology or correction, indicates harm to the community through misinformation. Since the AI system's use directly led to the spread of false information, this qualifies as an AI Incident under harm to communities.
'Do You Need Any Help?': AOC Mocks Chris Cuomo After He Posts 'Deepfake' Video Of Her

2025-08-07
Crooks and Liars
Why's our monitor labelling this an incident or hazard?
The event centers on an AI-generated deepfake video that misrepresents a political figure, which is a misuse of AI technology. While the video could cause reputational and social harm, the article does not report actual realized harm such as physical injury, legal rights violations, or systemic disruption. The video was labeled as parody and deleted, and the discussion focuses on public and political reactions to the misuse of AI-generated content. This fits the definition of Complementary Information, as it informs about societal and governance responses to AI misuse and the challenges posed by deepfakes, without describing a direct AI Incident or a plausible future AI Hazard causing harm.
NewsNation's Chris Cuomo Roasted After Falling for AOC-Sydney Sweeney Deepfake

2025-08-07
TV Insider
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI-generated deepfake video, which is an AI system generating synthetic content. The misuse of this AI-generated content caused harm by spreading false information and misleading a public figure and the public, which constitutes harm to communities and reputational harm. Therefore, this qualifies as an AI Incident because the AI system's use directly led to harm through misinformation and reputational damage.
Chris Cuomo mocked for response after falling for deepfake AOC video

2025-08-07
Democratic Underground
Why's our monitor labelling this an incident or hazard?
The AI system involved is the deepfake video generation technology. However, the video was clearly watermarked as a deepfake, and the harm caused is limited to a mistaken sharing and ensuing social media exchange. There is no evidence of injury, rights violations, or other significant harms as defined. The event does not describe a new AI Incident or AI Hazard but rather provides context on the societal response to AI-generated content. Therefore, it fits best as Complementary Information, enhancing understanding of AI's societal impact without constituting a direct or plausible harm event.
NewsNation's Chris Cuomo Trolled After Falling For AI-Generated AOC Rant

2025-08-07
Tampa Free Press
Why's our monitor labelling this an incident or hazard?
The event describes a deepfake video created by AI that was shared and believed to be real by a public figure, causing a social media controversy. While the AI system's use led to misinformation and public misunderstanding, the article does not report any direct or indirect harm such as injury, rights violations, or significant community harm. The harm is primarily reputational and social, which is not explicitly defined as an AI Incident under the framework. Therefore, this event is best classified as Complementary Information, as it provides context on the societal implications and challenges of AI-generated deepfakes and media literacy without documenting a concrete AI Incident or AI Hazard.
Adam Rippon Shares Hilarious Reaction After Unboxing Emmy Award He Didn't Even Realize He Was Nominated For

2025-08-08
Comic Sands
Why's our monitor labelling this an incident or hazard?
The article describes a deepfake video generated using AI that falsely portrays a public figure making inflammatory statements. The AI system's use directly caused misinformation to spread, leading to reputational harm and misleading the public. This fits the definition of an AI Incident because the AI system's use has directly led to harm to communities through misinformation and potential violation of rights to accurate information.
AOC Roasts Chris Cuomo for Believing Sydney Sweeney Deepfake

2025-08-06
The Daily Beast
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system generating a deepfake video that falsely attributes statements to a public figure. The misuse of this AI-generated content led to misinformation being spread by a journalist, which is a form of harm to communities and reputational harm. The AI system's use directly caused this misinformation incident. Although the misinformation was corrected, the event meets the criteria for an AI Incident due to the realized harm from the AI system's outputs. Therefore, the classification is AI Incident.