AI-Generated Disinformation Targets Paris Municipal Election Candidates


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Authorities uncovered a network of fake websites, operated from South Asia, spreading AI-generated sensationalist content that targeted Paris mayoral candidates ahead of the 2026 municipal elections. The campaign, driven primarily by profit rather than political motives, disseminated misleading material via Facebook and fake media sites, drawing limited but real engagement.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves AI systems used to generate harmful content that was actively disseminated, misinforming and manipulating the public during an election; this constitutes a direct harm to communities, a violation of rights, and an erosion of societal trust. The AI system's role in generating and spreading this content directly caused the harm, fulfilling the criteria for an AI Incident. The harm is realized rather than merely potential, since the content has already spread and drawn engagement. It is therefore classified as an AI Incident.[AI generated]
AI principles
Transparency & explainability, Democracy & human autonomy

Industries
Media, social platforms, and marketing

Affected stakeholders
General public, Other

Harm types
Reputational, Public interest

Severity
AI incident

Business function:
Marketing and advertisement

AI system task:
Content generation


Articles about this incident or hazard


2026 municipal elections: a network of fake sites targeting candidates detected... "very trash", "AI-generated" content disseminated

2026-02-27
midilibre.fr

First foreign interference in the Paris municipal election campaign

2026-02-27
20minutes
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated content deployed as part of a foreign interference campaign targeting election candidates: a direct use of AI systems to produce harmful material. The harm is realized, since the content is already being disseminated and engaging users, affecting the community and the democratic process. The use of AI to generate misleading or manipulative content that affects elections fits the definition of an AI Incident, as it causes harm to communities; the commercial motive does not negate that harm. It is therefore classified as an AI Incident.

2026 municipal elections: fake sites created in South Asia misappropriate the image of Paris mayoral candidates

2026-02-27
Le Parisien
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI-generated content used maliciously to spread misinformation and manipulate public perception during an election, a clear harm to communities and democratic rights. The AI system's role is central to the incident: it generated the misleading content. The harm is realized rather than merely potential, since the content has been disseminated and drawn engagement. It therefore qualifies as an AI Incident.