AI-Powered Russian Disinformation Network Expands with Hundreds of Fake News Sites

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

The Russian influence network CopyCop, led by US fugitive John Mark Dougan and supported by the Kremlin, uses Meta's Llama 3 AI models to generate and disseminate pro-Russian propaganda via over 300 fake news websites. This AI-driven campaign targets Western democracies, spreading disinformation and undermining public trust.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly mentions the use of AI systems (self-hosted, uncensored LLMs based on Meta's Llama 3) to generate fake news and deepfakes as part of a disinformation campaign. This campaign has already produced hundreds of fake news websites spreading false political content, which constitutes harm to communities and democratic processes. The AI system's outputs are central to the incident, making this an AI Incident rather than a hazard or complementary information. The harm is realized and ongoing, not merely potential.[AI generated]
AI principles
Accountability, Fairness, Human wellbeing, Respect of human rights, Safety, Transparency & explainability, Democracy & human autonomy

Industries
Media, social platforms, and marketing

Affected stakeholders
General public, Government

Harm types
Public interest

Severity
AI incident

AI system task
Content generation


Articles about this incident or hazard

Russian fake-news network back in action with 200+ new sites

2025-09-18
TheRegister.com
Russia-linked troll farm led by Florida ex-cop is powered by Meta's Llama 3

2025-09-18
Cybernews
Why's our monitor labelling this an incident or hazard?
The article explicitly states that CopyCop uses AI-powered large language models (Meta's Llama 3) to create fake news and propaganda, which is then spread through inauthentic websites. This AI-generated disinformation campaign supports pro-Kremlin narratives and targets pro-Western leadership, causing harm to communities by spreading false information and undermining democratic processes. The AI system's development and use are directly linked to the realized harm, fulfilling the criteria for an AI Incident under the OECD framework.
Russian Fake-News Network CopyCop Added 200+ New Websites to Target US, Canada and France

2025-09-18
Cyber Security News
Why's our monitor labelling this an incident or hazard?
The article explicitly states that CopyCop uses AI-driven content generation via self-hosted large language models to produce pro-Russian and anti-Western narratives. This AI-generated disinformation is actively disseminated through over 300 fake websites, directly harming democratic societies by poisoning the information environment and undermining trust. The harm to communities and violation of rights through misinformation is realized and ongoing. The AI system's use in generating and scaling this harmful content is central to the incident, fulfilling the criteria for an AI Incident rather than a hazard or complementary information.
Hundreds of new sites tapped for Russian disinformation campaign

2025-09-18
SC Media
Why's our monitor labelling this an incident or hazard?
The use of AI to generate fictional content for a coordinated disinformation campaign directly leads to harm to communities by spreading false narratives and manipulating public opinion, which fits the definition of an AI Incident. The AI system's role in crafting the content is pivotal to the harm caused, and the harm is realized as the campaign is active and ongoing.