
The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.
Authorities uncovered a network of fake websites, operated from South Asia, that spread AI-generated, sensationalist content targeting Paris mayoral candidates ahead of the 2026 municipal elections. The campaign, motivated primarily by profit rather than politics, disseminated misleading material via Facebook and fake media sites, generating limited but real engagement.[AI generated]
Why is our monitor labelling this an incident or hazard?
The event involves AI systems used to generate harmful content that was actively disseminated, directly harming communities through election-related misinformation and manipulation, which violates rights and erodes societal trust. Because the AI system's use in generating and spreading this content directly caused harm, and because that harm is realized rather than merely potential (the content is already circulating and engagement has occurred), the event meets the criteria for an AI Incident and is classified accordingly.[AI generated]