AI-Generated Deepfakes Fuel Propaganda for Burkina Faso Junta Leader


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

AI-generated videos and audio featuring celebrities like R. Kelly, Beyoncé, and the Pope have been widely circulated online to promote Burkina Faso's junta chief, Captain Ibrahim Traoré. This disinformation campaign uses deepfakes to glorify the leader, manipulate public opinion, and suppress dissent, causing harm through widespread misinformation.[AI generated]

Why's our monitor labelling this an incident or hazard?

The use of AI to generate fake videos and audio that spread misleading or false information about a political figure constitutes harm to communities by spreading disinformation and manipulating public perception. The AI system's use here directly leads to this harm through the creation and dissemination of false content. Therefore, this qualifies as an AI Incident due to the realized harm caused by AI-generated misinformation affecting communities and political discourse.[AI generated]
AI principles
Transparency & explainability; Democracy & human autonomy; Respect of human rights; Accountability; Robustness & digital security

Industries
Media, social platforms, and marketing; Government, security, and defence; Digital security

Affected stakeholders
General public; Civil society

Harm types
Public interest; Human or fundamental rights; Reputational; Psychological

Severity
AI incident

AI system task
Content generation


Articles about this incident or hazard


Fake AI videos of R. Kelly, pope spread cult of Burkina junta chief

2025-07-17
Toronto Sun
Why's our monitor labelling this an incident or hazard?
The use of AI to generate fake videos and audio that spread misleading or false information about a political figure constitutes harm to communities by spreading disinformation and manipulating public perception. The AI system's use here directly leads to this harm through the creation and dissemination of false content. Therefore, this qualifies as an AI Incident due to the realized harm caused by AI-generated misinformation affecting communities and political discourse.

Fake AI videos of R. Kelly, pope spread cult of Burkina junta chief

2025-07-17
Daily Mail Online
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems to create deepfake videos and audio that spread disinformation and propaganda, which is a direct use of AI technology leading to harm to communities through misinformation and political destabilization. The AI-generated content is actively disseminated and has real-world impacts on public perception and political dynamics, fulfilling the criteria for an AI Incident due to realized harm to communities and potential violations of rights related to truthful information access.

Fake AI videos of R. Kelly, pope spread cult of Burkina junta chief

2025-07-17
Vanguard
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated videos and audio used as propaganda, which qualifies under the definition of an AI system as one generating content that influences virtual environments (here, social media and public opinion). The disinformation campaign is actively spreading false narratives that support a military junta, contributing to political destabilization and social harm in West Africa. This constitutes harm to communities and a violation of rights to truthful information, fitting the definition of an AI Incident. The AI system's use is central to the harm, as the synthetic media would not exist without it, and the harm is ongoing and realized, not merely potential.

How Burkina Faso's military junta is using AI, Beyoncé and the pope

2025-07-17
20minutes
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI-generated videos as part of a coordinated disinformation campaign by the military junta. The AI system's outputs (deepfake videos) are directly used to spread false narratives that glorify the junta leader and suppress opposing voices, which harms communities by distorting information and undermining democratic discourse. The campaign also involves repression of journalists and dissent, implicating violations of rights. The harm is realized and ongoing, not merely potential, thus qualifying as an AI Incident rather than a hazard or complementary information.

How Beyoncé, R. Kelly and the pope are being hijacked with AI to feed the cult of Burkina Faso's junta chief

2025-07-18
Le Parisien
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems to generate manipulated content (deepfakes or similar AI-generated media) that supports a political cult and disinformation campaigns. This use of AI directly contributes to harm to communities by spreading false narratives and obscuring critical issues such as violence and governance failures, which aligns with harm category (d) - harm to communities. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to significant harm through misinformation and social disruption.

Fake AI Video Shows R. Kelly Praising Burkina Faso Junta Leader

2025-07-17
Channels Television
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated videos and audio altering celebrities' faces and voices to produce false praise for a junta leader, which is a clear use of AI systems for generating misleading content. This disinformation campaign is actively spreading and influencing public perception, which constitutes harm to communities and political stability (harm category d). The AI system's use is central to the creation and dissemination of this harmful content, making this an AI Incident rather than a mere hazard or complementary information.

Beyoncé and the pope hijacked to feed the cult of the Burkina junta chief

2025-07-17
TV5MONDE
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI-generated content (videos and images) as part of a disinformation campaign that supports a military junta in Burkina Faso. This campaign has been ongoing for weeks, with viral videos spreading false narratives that glorify the junta leader and distract from ongoing violence and repression. The AI system's outputs (deepfake videos) are directly used to manipulate public opinion and suppress dissent, which harms communities and violates rights. Therefore, this qualifies as an AI Incident because the AI-generated misinformation is actively causing harm, not merely posing a potential risk.

Videos of Beyoncé and the pope hijacked to feed the personality cult of Ibrahim Traoré

2025-07-17
JeuneAfrique.com
Why's our monitor labelling this an incident or hazard?
The use of AI-generated synthetic videos to spread false narratives and propaganda directly contributes to harm by misleading populations, supporting authoritarian control, and repressing opposing voices. The AI system's use in generating and disseminating these videos is central to the incident, fulfilling the criteria of an AI Incident due to realized harm to communities and violations of rights. The article details ongoing harm rather than potential harm, so it is not a hazard or complementary information.

AI videos of R Kelly, Pope Leo used to spread 'cult' of Burkina Faso leader

2025-07-17
Jamaica Observer
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated videos and audio being used to spread disinformation and propaganda, which is a direct use of AI systems. The harm is realized as these videos are widely shared and contribute to the spread of false narratives that support an authoritarian leader, thereby harming communities and potentially violating rights related to truthful information and political freedom. This fits the definition of an AI Incident because the AI system's use has directly led to harm to communities through misinformation and manipulation.

Fake AI videos of R. Kelly, pope spread cult of Burkina junta chief

2025-07-17
eNCAnews
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated videos and audio that falsely depict celebrities praising a political leader, which is a clear use of AI systems to create misleading content. This disinformation campaign can harm communities by spreading false narratives and manipulating public opinion, fulfilling the harm to communities criterion. Since the harm is occurring through the dissemination of these AI-generated fake videos, this qualifies as an AI Incident rather than a hazard or complementary information.

Fake AI videos of R. Kelly, pope spread cult of Burkina junta chief

2025-07-17
The Anniston Star
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated videos being used as propaganda to promote a personality cult around Burkina Faso's junta chief. This use of AI to create and disseminate false content that influences public perception and political discourse is a direct harm to communities, fulfilling the criteria for an AI Incident.

Beyoncé, the pope and R. Kelly sing the praises of Burkina Faso's junta chief: a vast disinformation campaign floods social media

2025-07-17
CharenteLibre.fr
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI-generated content as part of a disinformation campaign that has been actively spreading false narratives and manipulated media to glorify a military leader and suppress opposing voices. This campaign has real-world consequences, including repression of dissent, misinformation affecting public opinion, and potential regional destabilization. The AI system's role in generating synthetic videos and images is pivotal to the harm caused, fulfilling the criteria for an AI Incident involving violations of rights and harm to communities.

How AI is hijacking the images of Beyoncé and the pope to feed the cult of the Burkina junta chief

2025-07-18
LaProvence.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI-generated content to create manipulated images and videos of public figures, which are then used in a disinformation campaign. This use of AI directly leads to harm by spreading false narratives that impact communities and political discourse. Therefore, this qualifies as an AI Incident due to realized harm caused by AI-generated misinformation and the impersonation of public figures.

Burkina Faso: AI-generated videos of stars hijacked to glorify Captain Ibrahim Traoré

2025-07-18
TV5MONDE Afrique
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated videos used in a disinformation campaign that glorifies a military leader and suppresses opposing voices, which constitutes harm to communities and violations of rights. The AI system's role in generating realistic fake content is pivotal to the incident. The harm is realized and ongoing, not merely potential. Hence, this qualifies as an AI Incident under the framework, as the AI system's use has directly led to significant harm through misinformation and repression.