AI-Generated Videos Used to Spread Racist Narratives in Europe


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Far-right figures, including Tommy Robinson, are exploiting generative AI tools to create and disseminate racist, dystopian videos depicting European cities overtaken by migrants. These AI-generated clips, rapidly produced despite moderation efforts, fuel extremist narratives and social division, causing real harm by spreading hate and misinformation online.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly mentions AI-generated videos being used by far-right figures to spread racist messages and extremist narratives, which have gained millions of views and contributed to social harm. The AI systems' outputs are central to the harm, as they enable rapid creation and dissemination of hateful content that promotes conspiracy theories and fuels violence. The misuse of AI tools here directly leads to violations of rights and harm to communities, fulfilling the criteria for an AI Incident.[AI generated]
AI principles
Accountability, Fairness, Safety, Respect of human rights, Democracy & human autonomy

Industries
Media, social platforms, and marketing

Affected stakeholders
General public, Other

Harm types
Public interest, Human or fundamental rights

Severity
AI incident

AI system task
Content generation


Articles about this incident or hazard


AI tools 'exploited' for racist European city videos

2025-10-13
Economic Times

AI Tools 'Exploited' For Racist European City Videos

2025-10-13
International Business Times
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (chatbots and generative AI) used to create harmful content that is actively spreading extremist narratives and racist conspiracy theories. The harm is realized as these videos have gained millions of views, influencing public opinion and potentially inciting violence and social division, which constitutes harm to communities and violations of rights. The AI's role is pivotal as it enables rapid, scalable production of such content despite moderation attempts. Therefore, this qualifies as an AI Incident due to the direct and ongoing harm caused by the AI-generated content.

European far-right figures exploit AI videos to fuel racist, anti-Islam sentiment online

2025-10-13
TRT World
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI tools being exploited to create and disseminate extremist propaganda that fuels racist and anti-Islam sentiment, which constitutes harm to communities. The AI-generated videos are used to promote false and harmful narratives, contributing to radicalisation and social harm. This fits the definition of an AI Incident because the AI system's use has directly led to significant harm to communities by spreading hate and misinformation. The involvement of AI in generating the content and the resulting social harm is clear and direct.

AI tools misused to create racist videos targeting European cities

2025-10-13
SUCH TV
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (AI chatbots and generative AI) being used maliciously to create racist videos that propagate extremist narratives and conspiracy theories. These videos have been widely disseminated and have contributed to social harm by promoting hate and potentially inciting violence, which constitutes harm to communities. The article details realized rather than merely potential harm, and the AI systems' role in generating and spreading the harmful content is pivotal, fulfilling the criteria for an AI Incident.

AI tools 'exploited' for racist European city videos

2025-10-13
RTL Today
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (chatbots and generative AI) used to create harmful content that spreads racist and extremist narratives. The harm is realized as these videos promote hate, misinformation, and conspiracy theories that can fuel violence and social division, constituting harm to communities and violations of rights. The AI systems' outputs are central to the incident, as they enable rapid production and dissemination of such content despite moderation attempts. Therefore, this qualifies as an AI Incident due to the direct role of AI in causing social harm through misuse and exploitation.

AI tools exploited for racist European city videos

2025-10-13
anews
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (chatbots and generative AI) used to create harmful content that spreads extremist and racist narratives. The use of AI-generated videos to promote conspiracy theories and racist stereotypes is directly linked to harm to communities, fulfilling the criteria for an AI Incident. The harm is realized as these videos have gained millions of views and are used by influential far-right figures to stoke fear and hate. The event is not merely a potential risk but an ongoing incident of harm caused by AI misuse.

AI tools weaponised by Europe's far-right to spread Islamophobic vision of the future

2025-10-14
AW
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (generative AI tools and chatbots) used to create harmful, extremist videos that spread Islamophobic and racist conspiracy theories. The widespread sharing and amplification of these videos on social media platforms have directly led to harm to communities by promoting hate, disinformation, and potentially inciting violence. The AI systems' role is pivotal as they enable rapid, scalable production of such harmful content despite moderation attempts. Therefore, this qualifies as an AI Incident due to realized harm to communities and violation of rights through the spread of extremist and hateful content facilitated by AI.