AI-Generated Fake Rabbis Spread Antisemitism on TikTok

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

A coordinated network of at least 49 TikTok accounts used generative AI to create fake rabbis who spread antisemitic stereotypes and conspiracy theories. These AI-generated avatars amassed over 950,000 followers and 10 million likes, amplifying hate and misinformation by impersonating credible Jewish voices and deceiving audiences.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event explicitly describes AI-generated fake accounts used to spread antisemitic content, which is a clear violation of human rights and causes harm to communities. The AI system's role in generating and disseminating this content is pivotal to the harm occurring. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use has directly led to harm.[AI generated]
AI principles
Respect of human rights, Fairness

Industries
Media, social platforms, and marketing

Affected stakeholders
General public, Other

Harm types
Psychological, Human or fundamental rights, Public interest

Severity
AI incident

AI system task
Content generation


Articles about this incident or hazard

Fake AI 'rabbis' being used to spread antisemitic tropes on TikTok, study claims

2026-05-06
The Times of Israel
Why's our monitor labelling this an incident or hazard?
The event explicitly describes AI-generated fake accounts used to spread antisemitic content, which is a clear violation of human rights and causes harm to communities. The AI system's role in generating and disseminating this content is pivotal to the harm occurring. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use has directly led to harm.

Fake 'AI Rabbis' flood TikTok with antisemitic content, new study finds

2026-05-06
The Jerusalem Post
Why's our monitor labelling this an incident or hazard?
The event involves AI systems generating fake personas and content that spread antisemitic tropes, leading to harm to communities through the normalization of hate and potential incitement of violence. The AI-generated content is actively used and has already caused harm, fulfilling the criteria for an AI Incident. The involvement of AI in creating and amplifying this harmful content is explicit and central to the harm described.

Report reveals: Fake AI 'rabbis' spread antisemitism on TikTok

2026-05-06
Arutz Sheva Israel News
Why's our monitor labelling this an incident or hazard?
The report explicitly identifies AI-generated avatars used to spread antisemitic narratives on a large scale, causing harm to communities by normalizing hate and inciting violence. The AI system's role is pivotal in creating fabricated identities that deceive users and amplify harmful content. The harm is realized and ongoing, meeting the criteria for an AI Incident rather than a hazard or complementary information.

AI-generated 'rabbis' created to spread antisemitic tropes on TikTok, study finds

2026-05-06
i24NEWS English
Why's our monitor labelling this an incident or hazard?
The event involves AI systems generating fake avatars and identities to spread antisemitic messages, which directly leads to harm to communities by promoting hate and misinformation. The coordinated nature and wide reach of these AI-generated accounts demonstrate a clear link between AI use and realized harm. The use of AI-generated content to impersonate religious figures and spread hostile narratives constitutes a violation of rights and harms communities, meeting the definition of an AI Incident.

Report exposes network of fake AI 'rabbis' promoting antisemitism on TikTok

2026-05-07
Jewish News Syndicate
Why's our monitor labelling this an incident or hazard?
The event involves AI systems generating synthetic avatars and identities used maliciously to spread antisemitic narratives on a large scale, directly causing harm to communities by promoting hatred and inciting violence. The AI-generated content is central to the harm, as it enables the deception and amplification of hateful messages to impressionable audiences. This meets the criteria for an AI Incident due to the realized harm to communities and violations of rights stemming from the AI system's use.

AI-Generated 'Rabbis' on TikTok Push Antisemitism, Generate Over 10 Million Likes, Report Reveals

2026-05-06
The Algemeiner
Why's our monitor labelling this an incident or hazard?
The event explicitly involves generative AI systems creating synthetic videos impersonating religious figures to spread antisemitic content. The widespread dissemination and engagement with this content on TikTok constitute a direct harm to communities by promoting hatred and potentially inciting violence. The use of AI to generate and amplify this harmful content meets the criteria for an AI Incident, as the AI system's use has directly led to violations of rights and harm to communities. The report also references real-world consequences and calls for mitigation, reinforcing the materialized harm.