Fox News Spreads Misinformation After Airing Racist AI-Generated SNAP Videos


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

AI-generated deepfake videos depicting Black women misusing SNAP benefits circulated widely on social media, spreading racist stereotypes and misinformation. Fox News mistakenly reported these fabricated videos as real, further amplifying the false narratives before issuing a correction. The incident highlights the harm caused by AI-generated misinformation and media failures in the United States.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event explicitly involves an AI system (the video-generation app Sora) producing harmful content that spreads racist misinformation and stereotypes. The content is actively causing harm: it distorts public perception, reinforces racial biases, and deepens social stigma against SNAP recipients, which constitutes harm to communities and a violation of rights. This therefore qualifies as an AI Incident, because the AI system's use has directly led to significant harm as defined in the framework.[AI generated]
AI principles
Accountability, Fairness, Respect of human rights, Robustness & digital security, Safety, Transparency & explainability, Democracy & human autonomy

Industries
Media, social platforms, and marketing

Affected stakeholders
Women, General public

Harm types
Reputational, Psychological, Public interest, Human or fundamental rights

Severity
AI incident

Business function:
Other

AI system task:
Content generation


Articles about this incident or hazard


This Racist SNAP Video Is Everywhere Online -- And Everyone Should Be Alarmed - Yahoo News Singapore

2025-11-07
Yahoo News
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (the video-generation app Sora) producing harmful content that spreads racist misinformation and stereotypes. The content is actively causing harm: it distorts public perception, reinforces racial biases, and deepens social stigma against SNAP recipients, which constitutes harm to communities and a violation of rights. This therefore qualifies as an AI Incident, because the AI system's use has directly led to significant harm as defined in the framework.

How Anti-Black AI Videos Harm Black Women At Work

2025-11-07
Forbes
Why's our monitor labelling this an incident or hazard?
The event involves AI systems generating videos that propagate racist stereotypes, which have directly harmed Black women by reinforcing bias and discrimination in workplaces and society. The harm includes violations of human rights and harm to communities, as defined in the framework. The AI-generated videos are not hypothetical or potential threats but have already caused social and emotional harm, meeting the criteria for an AI Incident rather than a hazard or complementary information.

This Racist SNAP Video Is Everywhere Online -- And Everyone Should Be Alarmed

2025-11-07
HuffPost
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (the Sora app) to generate misleading and racist video content. This content is actively spreading misinformation and harmful stereotypes, which constitutes harm to communities and violates rights by promoting racial discrimination. The harm is realized and ongoing as the videos are widely viewed and engaged with, directly impacting public perception and social cohesion. Therefore, this qualifies as an AI Incident due to the direct harm caused by the AI-generated misinformation and racist content.

Fox News Fell For AI-Generated Rage Bait, Rewrote Story To Pretend It Didn't

2025-11-04
Techdirt
Why's our monitor labelling this an incident or hazard?
The AI system generated fabricated videos that were presented as real, leading to false reporting that harmed the reputation and perception of SNAP recipients, a marginalized community. This misinformation can be considered harm to communities and a violation of rights through the spread of false narratives. The AI-generated content was central to the incident, and the media's failure to properly correct the misinformation exacerbates the harm. Hence, the event meets the criteria for an AI Incident as the AI system's outputs directly led to harm.

Fake News! Fox News Falls For Racist AI Video Of Black Woman Upset Over SNAP Funding Cuts

2025-11-04
Black Enterprise
Why's our monitor labelling this an incident or hazard?
An AI system generated a fake video depicting a Black woman making inflammatory statements about SNAP benefits. Fox News initially reported the video as real, spreading misinformation that perpetuated racist stereotypes and stigmatized a vulnerable community. This misinformation constitutes harm to communities and a violation of rights, fulfilling the criteria for an AI Incident. The AI system's role in creating the misleading content was pivotal, and the harm was realized through the spread of false narratives before correction. The subsequent correction and editor's note do not negate the initial harm caused.

Racists Create AI Videos Depicting Black Women On SNAP, And Fox News Falls For It

2025-11-05
Black America Web
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems generating deepfake videos that were used to spread false and harmful narratives about Black women on welfare. The videos caused real-world harm by misleading a major news network and perpetuating racist stereotypes, which is a violation of human rights and harms communities. The AI system's role is pivotal as it created the false content that led to these harms. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information.

Racists Create AI Videos Depicting Black Women On SNAP, And Fox News Falls For It

2025-11-05
Hot 100.9
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems generating deepfake videos that were used to spread false and racist narratives about Black women on welfare. The harm is realized as these videos were believed and broadcast by Fox News, contributing to misinformation and reinforcing harmful racial stereotypes, which constitutes harm to communities and a violation of rights. Therefore, this qualifies as an AI Incident due to the direct role of AI-generated content in causing social harm and misinformation.