AI-Generated Disinformation Becomes Routine, Undermining Public Trust


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

European media watchdogs report a sharp rise in AI-generated disinformation, including deepfakes and manipulated content, now woven into daily news flows. These AI tools are increasingly used to spread false narratives and discredit authentic evidence, causing widespread confusion and damaging public perception and trust.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly describes AI systems generating manipulated audiovisual content used for disinformation that is actively occurring and has been verified by fact-checking organizations. The harm is to communities through misinformation and the manipulation of public perception, meeting the criteria for harm to communities under the AI Incident definition. The AI systems' use in creating and spreading false content directly leads to this harm. Hence, this is not a potential hazard or complementary information but a clear AI Incident.[AI generated]
AI principles
Democracy & human autonomy
Transparency & explainability

Industries
Media, social platforms, and marketing

Affected stakeholders
General public

Harm types
Public interest
Psychological
Reputational

Severity
AI incident

AI system task
Content generation


Articles about this incident or hazard


From the Exception to the Routine: AI Becomes an Engine of Disinformation

2026-05-03
Diario de Navarra

From the Exception to the Routine: AI Became an Engine of Disinformation

2026-05-03
LaPatilla.com
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses AI systems generating synthetic audiovisual content used to deceive and misinform the public: a direct use of AI that harms communities by spreading disinformation. The harm is realized and ongoing, not merely potential. The AI systems involved are generative models producing deepfakes and manipulated media, and they are central to the incident. This event therefore meets the criteria for an AI Incident, as AI-generated disinformation is directly harming societal perception and trust.

From the Exception to the Routine: AI Becomes an Engine of Disinformation

2026-05-03
Última Hora
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems generating synthetic audiovisual content used to spread false information and manipulate public opinion; this has already occurred and is documented by fact-checking organizations. The harms are direct and ongoing, affecting communities and public trust, and so fit the definition of an AI Incident. The involvement of AI systems is clear from the description of generative models producing realistic fake content, and the harm is realized rather than merely potential, as evidenced by fact-checking data and examples of disinformation campaigns. Hence, the classification as an AI Incident is appropriate.

From the Exception to the Routine: AI Becomes an Engine of Disinformation

2026-05-03
La Capital MdP
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems generating synthetic content (videos, images, audio) used to spread false information that alters public perception and political discourse. The harm to communities through misinformation and manipulation is occurring, not hypothetical, and the involvement of AI in the creation and dissemination of this content is clear and central to the harm described. Hence, this is an AI Incident under the definitions, since the use of AI has directly led to harm to communities through disinformation.

An Engine of Disinformation? Warnings over the Use of AI to Manipulate Real Events and Reshape the Narrative

2026-05-04
Noticias de Venezuela y el Mundo - Caraota Digital
Why's our monitor labelling this an incident or hazard?
The event involves AI systems that generate synthetic audiovisual content (deepfakes, manipulated videos) used deliberately to misinform and manipulate public opinion. This use of AI has directly harmed communities by spreading false narratives and creating confusion about real events, which fits the definition of an AI Incident under harm category (d), harm to communities. The article describes realized harm through the active dissemination of AI-generated disinformation, not just potential harm, so the event qualifies as an AI Incident rather than a hazard or complementary information.

From the Exception to the Routine: AI Becomes an Engine of Disinformation - Proceso Digital

2026-05-03
Proceso Hn
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems generating synthetic audiovisual content used for disinformation that is actively spreading and causing harm by altering public perception and trust. Harm to communities through misinformation and manipulation is a recognized form of AI Incident under the framework. The involvement of AI is clear and direct, and the harm is realized, not just potential. Hence, the classification as an AI Incident is appropriate.