
The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.
The Russian influence network CopyCop, led by US fugitive John Mark Dougan and backed by the Kremlin, uses Meta's Llama 3 AI models to generate and disseminate pro-Russian propaganda across more than 300 fake news websites. This AI-driven campaign targets Western democracies, spreading disinformation and eroding public trust.[AI generated]
Why is our monitor labelling this as an incident or hazard?
The article explicitly describes the use of AI systems (self-hosted, uncensored LLMs based on Meta's Llama 3) to generate fake news and deepfakes as part of a disinformation campaign. This campaign has already produced hundreds of fake news websites spreading false political content, which constitutes harm to communities and democratic processes. Because the AI system's outputs are central to the event and the harm is realized and ongoing rather than merely potential, this is classified as an AI Incident rather than a hazard or complementary information.[AI generated]