AI-Generated Deepfake Video Falsely Claims Coup in France, Causes International Alarm

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

An AI-generated deepfake video falsely depicting a coup in France went viral, amassing over 12 million views and spreading misinformation widely. The video alarmed international leaders and prompted political concern, with President Macron criticizing Meta for initially refusing to remove the content despite official requests.[AI generated]

Why's our monitor labelling this an incident or hazard?

Facebook's AI-driven content recommendation and moderation systems played a pivotal role in the viral spread of false information about a coup d'état, which caused real-world harm by alarming foreign officials and the public. The refusal to remove the content despite its falsehood indicates a failure in the AI system's moderation function, leading to harm to communities and public trust. The event meets the criteria for an AI Incident because the AI system's use directly and indirectly led to harm through misinformation dissemination and inadequate content moderation.[AI generated]
AI principles
Accountability; Transparency & explainability; Democracy & human autonomy

Industries
Media, social platforms, and marketing; Government, security, and defence

Affected stakeholders
General public; Government

Harm types
Public interest; Psychological; Reputational

Severity
AI incident

AI system task
Content generation


Articles about this incident or hazard

Macron, the African head of state, and the "fake news" coup d'état

2025-12-16
Yahoo actualités
Why's our monitor labelling this an incident or hazard?
The article implicates AI systems only implicitly, through Facebook's AI-based content moderation algorithms. The spread of false information is a harm to communities (misinformation), but the article does not state that an AI system caused or failed in a way that directly or indirectly led to harm; rather, it reports on the platform's refusal to remove content under its own rules. It describes no new incident of harm caused by AI malfunction or misuse, nor a plausible future harm scenario beyond the existing misinformation problem. Its main focus is the societal challenge and the platform's response, making it Complementary Information rather than an Incident or Hazard.

A putsch at the Élysée? According to Macron, Facebook refused to remove the fake news

2025-12-16
Yahoo actualités
Why's our monitor labelling this an incident or hazard?
Facebook's AI-driven content recommendation and moderation systems played a pivotal role in the viral spread of false information about a coup d'état, which caused real-world harm by alarming foreign officials and the public. The refusal to remove the content despite its falsehood indicates a failure in the AI system's moderation function, leading to harm to communities and public trust. The event meets the criteria for an AI Incident because the AI system's use directly and indirectly led to harm through misinformation dissemination and inadequate content moderation.

A coup d'état in France? The fake news that Facebook refuses to remove

2025-12-17
Yahoo actualités
Why's our monitor labelling this an incident or hazard?
The video is explicitly described as AI-generated, indicating the involvement of an AI system in creating realistic fake content. The spread of this misinformation has already occurred, with 13 million views, causing public alarm and political concern, which constitutes harm to communities and potentially threatens democratic processes. The AI system's use in generating and disseminating this fake news is directly linked to these harms, meeting the criteria for an AI Incident rather than a hazard or complementary information.

A "coup d'état" in France? What we know about the viral AI-generated video denounced by Emmanuel Macron

2025-12-17
Yahoo actualités
Why's our monitor labelling this an incident or hazard?
An AI system was explicitly involved in generating a realistic fake video (deepfake) that falsely reports a coup d'état, which has been widely viewed and shared, causing misinformation and public concern. This misinformation harms communities by spreading false narratives that can disrupt social and political stability. The harm is realized, not just potential, as millions have viewed the content and it has caused concern even among foreign officials. The refusal of the platform to remove the content despite official requests further compounds the harm. Hence, this event meets the criteria for an AI Incident.

A putsch in France? Facebook does not want to remove the viral video

2025-12-17
Yahoo!
Why's our monitor labelling this an incident or hazard?
An AI system was used to generate a fake news video that has been widely viewed and is spreading misinformation about a coup in France. This misinformation can harm communities by undermining trust, causing social unrest, and interfering with democratic processes. The harm is realized as the video has already gone viral and caused concern at the highest political levels. The AI system's role in creating the misleading content is pivotal to the incident. Hence, this event meets the criteria for an AI Incident.

Coup report from France causes international confusion! They called Macron: "What is happening in your country?"

2025-12-17
Hürriyet
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly mentioned as the creator of a deepfake video falsely reporting a coup in France. The video has been viewed millions of times, spreading misinformation that has caused confusion and concern internationally, including among foreign officials. This misinformation constitutes harm to communities by undermining public trust and democratic stability, fulfilling the criteria for harm (d). The AI system's use in generating and disseminating this content directly led to these harms. Hence, the event is classified as an AI Incident rather than a hazard or complementary information.

Macron stunned by a phone call: the coup video stirring up France

2025-12-17
Yeni Akit Gazetesi
Why's our monitor labelling this an incident or hazard?
The video is explicitly described as AI-generated, indicating the involvement of an AI system in its creation. The harm is realized as the misinformation has caused confusion, concern among political leaders, and a threat to democratic stability, which fits the definition of harm to communities. The event is not merely a potential risk but an actual incident of AI-generated disinformation causing societal harm. Hence, it qualifies as an AI Incident rather than a hazard or complementary information.

Fake news about a coup d'état in France: why did Meta refuse to remove this AI-generated video?

2025-12-17
Le Figaro.fr
Why's our monitor labelling this an incident or hazard?
The AI system's use in generating fake news directly leads to harm to communities by spreading misinformation, which is a recognized form of harm under the framework. The refusal or delay by Meta to remove the AI-generated video contributes indirectly to the harm by allowing the misinformation to persist. Therefore, this event qualifies as an AI Incident due to the realized harm caused by the AI-generated false content and its societal impact.

"What is going on in your country?": an African president believes there has been a coup d'état in France and contacts Emmanuel Macron

2025-12-17
Le Figaro.fr
Why's our monitor labelling this an incident or hazard?
The presence of an AI system is reasonably inferred as the video is described as 'manifestly generated by AI.' The event involves the use of AI to create false content that misleads people, including a president who believed the misinformation. This constitutes harm to communities by spreading false information that can cause confusion and distrust. Since the misinformation is actively spreading and causing real-world misunderstanding, it qualifies as an AI Incident due to harm to communities.

The coup video that has stirred up the giant country of Europe.

2025-12-17
Haberler.com
Why's our monitor labelling this an incident or hazard?
An AI system was used to generate a video falsely depicting a coup, which has been widely viewed and believed, causing misinformation and concern at high political levels. This disinformation harms communities by undermining trust and democratic processes, fitting the definition of harm to communities. The AI system's role in creating and spreading the video is pivotal. The harm is realized, not just potential, making this an AI Incident rather than a hazard or complementary information.

An African head of state worried about a supposed coup d'état in France against Emmanuel Macron

2025-12-17
Anadolu Ajansı
Why's our monitor labelling this an incident or hazard?
An AI system was involved in generating a false video that spread misinformation about a coup d'état, causing concern and confusion. The harm is indirect and potential, as the misinformation threatens democratic security and public trust but has not yet resulted in direct or realized harm. The event fits the definition of an AI Hazard because it plausibly could lead to significant harm (disruption of democratic processes and public order) if such misinformation spreads unchecked. There is no indication of actual injury, rights violations, or infrastructure disruption at this time, so it is not an AI Incident. The focus is on the risk and societal impact of AI-generated disinformation, not on a completed harm event.

Macron calls for stricter regulation after Facebook refuses to remove fake coup video

2025-12-17
Anadolu Ajansı
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI-generated video spreading false information that has been widely disseminated, causing concern among the public and foreign diplomats, which is a harm to communities. The AI system's use in creating the video is central to the incident. The refusal by Facebook to remove the content despite official requests indicates the harm is ongoing. This meets the criteria for an AI Incident as the AI system's use has directly led to harm through misinformation and social disruption.

VIDEO: We have found the author of the viral video announcing a fake coup d'état in France

2025-12-17
Le Monde.fr
Why's our monitor labelling this an incident or hazard?
The video is explicitly described as AI-generated and falsely announces a coup d'Etat, which is a serious misinformation event. The spread of such false information can harm communities by undermining trust, causing confusion, and potentially inciting unrest. The involvement of AI in generating the video and the resulting misinformation harm fits the definition of an AI Incident, as the AI system's use has directly led to harm to communities.

Was there a coup in France? Macron warns against the AI-generated video

2025-12-17
Sabah
Why's our monitor labelling this an incident or hazard?
The video is explicitly described as AI-generated and spreading false information about a coup, which is a form of harm to communities through misinformation. This meets the criteria for an AI Incident because the AI system's use has directly led to harm in the form of social misinformation and confusion. Although the article emphasizes the warning and removal request, the underlying event is the dissemination of harmful AI-generated content that has already caused confusion and concern, thus qualifying as an AI Incident.

A putsch in France? Macron receives worried calls

2025-12-17
Euronews Deutsch
Why's our monitor labelling this an incident or hazard?
An AI system was used to generate a fake video spreading false information about a coup, which has been widely viewed. This misinformation could plausibly lead to harm to communities and political stability, especially given the context of upcoming elections. Since no actual harm is reported as having occurred yet, but the risk is credible and significant, this qualifies as an AI Hazard rather than an AI Incident. The refusal of the platform to remove the content further supports the potential for harm.

"There has been a coup d'état in France": Emmanuel Macron the victim of fake news, Facebook refuses to remove the video

2025-12-17
Ouest France
Why's our monitor labelling this an incident or hazard?
The event explicitly mentions a video generated by artificial intelligence spreading false information about a coup d'état, which has been widely viewed and is causing harm to the community by undermining public trust and democratic processes. The AI system's role in generating the fake news is central to the harm. This fits the definition of an AI Incident because the AI system's use has directly or indirectly led to harm to communities. The refusal of Facebook to remove the video further compounds the harm. Hence, this is classified as an AI Incident.

VIDEO: "These people are mocking us": Emmanuel Macron reacts to an AI video announcing a coup d'état in France

2025-12-17
Ouest France
Why's our monitor labelling this an incident or hazard?
The event clearly involves an AI system generating false video content that has been widely disseminated, causing harm by spreading misinformation that threatens democratic stability and public trust. This fits the definition of an AI Incident because the AI system's use has directly led to harm to communities and the political environment. The refusal of the platform to remove the video exacerbates the harm. Therefore, this is classified as an AI Incident.

A coup d'état in France? The fake news that Facebook refuses to remove

2025-12-17
euronews
Why's our monitor labelling this an incident or hazard?
An AI system was used to generate a fake news video that falsely announces a coup d'état, which has been widely disseminated and viewed. This misinformation poses a clear harm to communities by undermining public trust and potentially destabilizing political processes. The involvement of AI in creating and spreading this harmful content meets the criteria for an AI Incident, as the harm is realized and the AI system's role is pivotal in causing it.

Fake video claiming 'coup in France' goes viral - not even Macron could immediately get it removed

2025-12-17
France 24
Why's our monitor labelling this an incident or hazard?
The video is explicitly described as AI-generated fake content that spread misinformation about a coup, causing alarm including to foreign leaders. The AI system's use in creating and disseminating this false content directly led to harm by misleading millions and threatening democratic discourse. The platform's initial refusal to remove the content exacerbated the harm. This fits the definition of an AI Incident because the AI system's use directly led to harm to communities and democratic processes through misinformation.

Fake video claiming 'coup in France' goes viral - not even Macron could immediately get it removed

2025-12-17
France 24
Why's our monitor labelling this an incident or hazard?
The video is explicitly described as AI-generated, indicating the involvement of an AI system in content creation. The spread of this misinformation caused alarm and confusion, which constitutes harm to communities. The difficulty in removing the video from platforms further exacerbated the impact. Therefore, this event meets the criteria for an AI Incident due to the direct harm caused by the AI-generated misinformation.

AI video about an alleged "coup d'état" in France: Macron lashes out at Meta

2025-12-17
France 24
Why's our monitor labelling this an incident or hazard?
An AI system was used to generate a false video that misled people, including a foreign leader, causing misinformation and potential harm to social stability and public trust. This constitutes harm to communities and a violation of rights to accurate information. Since the harm is realized and directly linked to the AI-generated content, this qualifies as an AI Incident.

Macron accuses Facebook of letting fake news about a coup d'état circulate

2025-12-16
20minutes
Why's our monitor labelling this an incident or hazard?
Facebook's platform uses AI systems for content recommendation and moderation. The false video was widely disseminated, accumulating millions of views, and despite being flagged, Facebook refused to remove it, citing its rules. This shows the AI system's role in the spread and persistence of harmful misinformation, which caused real-world concern and harm to communities and diplomatic relations. Hence, the event meets the criteria for an AI Incident, as the AI system's use directly led to harm through misinformation dissemination.

Coup reports stir things up in France! A message from Africa surprises Macron: "What is happening in your country?"

2025-12-17
CNN Türk
Why's our monitor labelling this an incident or hazard?
An AI system was used to create a fabricated video that misled millions into believing a coup had occurred in France, causing confusion and concern at high political levels. This misinformation is a clear harm to communities and democratic processes, fulfilling the criteria for an AI Incident. The event involves the use and dissemination of AI-generated content causing direct harm, not merely a potential risk or complementary information.

Artificial intelligence stages a coup in France | Video

2025-12-17
CNN Türk
Why's our monitor labelling this an incident or hazard?
An AI system was used to generate a realistic fake video falsely showing a coup, which was widely believed and caused social and political unrest. This misinformation harms communities by spreading false narratives that undermine democratic institutions and public trust. The harm is indirect but significant, fitting the definition of an AI Incident due to violation of societal trust and potential disruption to democratic governance.

"There has been a coup d'état in France": an African president believes AI-based fake news and alerts Emmanuel Macron

2025-12-16
Le Parisien
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly mentioned as generating a fake video spreading false information about a coup d'état. This misinformation caused harm by alarming a foreign head of state and threatening public security and democratic stability, which fits the definition of harm to communities and a violation of the right to truthful information. The AI system's use directly led to this harm. Hence, it qualifies as an AI Incident rather than a hazard or complementary information.

Fake news about a coup d'état: President Macron furious at Facebook

2025-12-17
heise online
Why's our monitor labelling this an incident or hazard?
The event clearly involves an AI system generating a fake video (deepfake) that has been widely disseminated, causing misinformation and political destabilization concerns. This constitutes harm to communities and democratic processes, fulfilling the criteria for an AI Incident. The harm is realized, not just potential, as evidenced by the reactions of President Macron and other political figures, and the widespread viewing of the video. Therefore, this is classified as an AI Incident rather than a hazard or complementary information.

"They are mocking us": this fake news with 20 million views enrages Emmanuel Macron

2025-12-17
RMC
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system generating a fake news video that has been widely viewed and has caused harm by spreading misinformation about a serious political event. This misinformation threatens public trust, political stability, and democratic processes, which are harms to communities and potentially violations of rights. The AI system's use in creating and disseminating this content is central to the harm. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information.

VIDEO: An African president alarmed by a fake coup d'état in France: the incredible anecdote recounted by Macron

2025-12-17
Ladepeche.fr
Why's our monitor labelling this an incident or hazard?
The event involves an AI system generating a realistic fake video (deepfake) that falsely reports a coup d'état, which was widely viewed and caused real-world concern, including misleading a foreign president. This misinformation harms communities by spreading false narratives that can destabilize political institutions and public trust. The AI system's use directly caused this harm. The refusal of the platform to remove the content further exacerbates the impact. Hence, this is an AI Incident due to realized harm caused by AI-generated misinformation.

France video causes a stir: millions believed it was real, Macron issues a warning

2025-12-17
İnternethaber
Why's our monitor labelling this an incident or hazard?
The video is explicitly described as AI-generated and has caused real-world harm by misleading millions, including foreign officials, about a serious political event that did not occur. This constitutes harm to communities through misinformation and disinformation, fitting the definition of an AI Incident. The involvement of the AI system in generating the deceptive content and its role in the harm is clear and direct.

Platform refuses deletion: Macron rebukes Facebook over AI fake video about a coup in France

2025-12-17
Der Tagesspiegel
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI-generated fake video spreading false information about a coup, which has been viewed millions of times and caused significant public and political concern. The misinformation harms communities by destabilizing public debate and democratic sovereignty, fulfilling the harm criteria. The platform's refusal to remove the content despite requests further contributes to the harm. Hence, the AI system's use has directly led to harm, qualifying this as an AI Incident.

A viral doctored video shows Emmanuel Macron overthrown by a fictitious colonel; the French president asks Facebook to remove it, in vain - RTBF Actus

2025-12-17
RTBF
Why's our monitor labelling this an incident or hazard?
The viral video is an AI-generated deepfake, an AI system producing false content that misleads the public about a serious political event. The misinformation has already spread widely (12 million views), causing concern domestically and internationally, which constitutes harm to communities. The refusal of Facebook to remove the content exacerbates the issue. The AI system's use directly led to the harm, fulfilling the criteria for an AI Incident.

"The fall of Macron": the fake coup d'état video that the President cannot get removed from Facebook

2025-12-17
actu.fr
Why's our monitor labelling this an incident or hazard?
The event involves an AI system generating or enabling the creation of a realistic fake video (deepfake) that falsely depicts a coup d'État, which is spreading widely and causing public concern and political repercussions. The harm is realized as misinformation undermining democratic discourse and public security, fulfilling the criteria of harm to communities. The AI system's role in producing and enabling the spread of this false content is pivotal. The refusal of the platform to remove the content exacerbates the harm. Hence, this is an AI Incident rather than a hazard or complementary information.

A "coup d'état" in France? What we know about this viral video denounced by Macron

2025-12-17
Le Huffington Post
Why's our monitor labelling this an incident or hazard?
The event involves an AI system generating a fake video that falsely reports a coup d'état, which has been widely viewed and shared, causing misinformation and public concern. This meets the definition of an AI Incident because the AI system's use has directly led to harm to communities through the spread of false information. The article explicitly states the video is AI-generated and that it has caused significant misinformation harm, fulfilling the criteria for an AI Incident rather than a hazard or complementary information.

The event that stunned the French! Macron learned of the "coup" by phone

2025-12-17
Akşam
Why's our monitor labelling this an incident or hazard?
An AI system was used to generate a realistic fake video of a coup, which was widely viewed and believed by the public and foreign officials, causing misinformation and social harm. This meets the criteria for an AI Incident because the AI-generated content directly led to harm to communities through misinformation and social disruption. The event is not merely a potential hazard or complementary information but a realized harm caused by the malicious use of AI-generated content.

"Coup d'état" in France: even Emmanuel Macron could not get this widely shared false information removed from Facebook

2025-12-17
Femme Actuelle
Why's our monitor labelling this an incident or hazard?
The event involves AI systems generating fake videos that spread false information about a coup, which has been widely viewed and shared, causing harm to the community by undermining public trust and democratic processes. The AI-generated content is central to the harm, fulfilling the criteria for an AI Incident. The harm is realized, not just potential, as the misinformation is actively influencing public perception and causing concern at the highest political levels. The involvement of AI in creating the misleading videos and the resulting social harm justifies classification as an AI Incident.

"Dear president, what is going on in your country?": the curious video received by Emmanuel Macron

2025-12-16
DH.be
Why's our monitor labelling this an incident or hazard?
An AI system was explicitly involved in generating a false video that caused panic and misinformation about a serious political event. This misinformation led to social harm by creating confusion and fear, fulfilling the criteria for harm to communities. Since the harm has already occurred due to the circulation of the AI-generated video, this qualifies as an AI Incident rather than a hazard or complementary information.

A "coup" phone call to Macron from an African leader

2025-12-17
Star.com.tr
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI-generated video (deepfake) that falsely shows a coup in France, which has been widely viewed and believed by many, including an African leader. This misinformation constitutes harm to communities by spreading false information and threatening democratic stability. The AI system's role in generating the video is central to the incident, fulfilling the criteria for an AI Incident due to realized harm caused by AI-generated disinformation.

Emmanuel Macron facing a coup d'état? "I am very worried..."

2025-12-17
Closermag.fr
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI-generated video spreading false information that caused real-world harm by creating public and international concern. The AI system's use in generating the misleading video and its massive dissemination on a social media platform directly contributed to harm to communities and potential disruption of public order and diplomatic relations. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use has directly led to harm.

"These people are mocking us": Emmanuel Macron's anger at a video announcing a fake coup d'état in France

2025-12-17
RTL.fr
Why's our monitor labelling this an incident or hazard?
An AI system was explicitly involved in generating the fake video, which was then widely disseminated on social media, causing misinformation and public concern. The harm is realized as the false information threatens democratic stability and public trust, fitting the definition of harm to communities and violation of rights. The event is not merely a potential risk but an actual incident with significant impact. Hence, it qualifies as an AI Incident.

Fake news about a putsch in France: an African president worries about Macron - La Nouvelle Tribune

2025-12-17
La Nouvelle Tribune
Why's our monitor labelling this an incident or hazard?
An AI system was explicitly involved in generating a fake video that falsely depicted a coup in France. This misinformation was widely disseminated on social media, causing public alarm and diplomatic concern, which constitutes harm to communities and democratic stability. The AI-generated content directly led to this harm, fulfilling the criteria for an AI Incident. The refusal of the platform to remove the content exacerbated the impact. Hence, the event is classified as an AI Incident rather than a hazard or complementary information.

"A coup d'état in France?": an African president reportedly called Emmanuel Macron after seeing a fake video on Facebook

2025-12-17
La Voix du Nord
Why's our monitor labelling this an incident or hazard?
The video is explicitly described as AI-generated and has been massively shared, causing real-world concern and misinformation harm. This fits the definition of an AI Incident because the AI system's use has directly led to harm to communities by spreading false information that affects public perception and international relations. The refusal of Facebook to remove the content despite its falsehood further exacerbates the harm. Therefore, this event qualifies as an AI Incident.

Fake videos of a coup d'état in France: Emmanuel Macron denounces the inaction of social networks that "put us in danger"

2025-12-16
La Voix du Nord
Why's our monitor labelling this an incident or hazard?
The event involves AI systems generating false video content (deepfakes) that have been widely disseminated, causing harm to communities by spreading misinformation that threatens public safety and democratic stability. The harm is realized, not just potential, as evidenced by the president's concern and the large viewership of the fake videos. The AI system's use in creating these videos is central to the incident. Hence, it meets the criteria for an AI Incident under the framework.

A leader from an African country called, and Macron was furious! "Was there a coup in your country?"

2025-12-17
Türkiye
Why's our monitor labelling this an incident or hazard?
An AI system was used to create a deepfake video falsely showing a coup in France. The video spread widely on social media, misleading millions and causing diplomatic alarm, which is a clear harm to communities and political stability. The AI-generated misinformation directly led to confusion and international tension, fulfilling the criteria for an AI Incident. The event is not merely a potential risk or complementary information but a realized harm caused by AI misuse.

Fake news about a putsch in France: an African president worries about Macron

2025-12-17
Malijet - L'actualité malienne au quotidien
Why's our monitor labelling this an incident or hazard?
An AI system was explicitly involved in generating the fake video (deepfake) that falsely depicted a coup in France. This misinformation was actively spread on social media, causing alarm and concern at the highest political levels, demonstrating harm to communities and political stability. The harm is realized, not just potential, as the video caused confusion and fear. Therefore, this qualifies as an AI Incident due to the direct role of AI-generated content in causing significant societal harm.

"There has been a coup d'état in France": an African president falls for AI-generated fake news and alerts Emmanuel Macron

2025-12-16
Fdesouche
Why's our monitor labelling this an incident or hazard?
An AI system was used to generate a fake video spreading false information about a coup d'état, which directly caused misinformation and alarm to a foreign head of state. This misinformation harms societal trust and community stability, fitting the definition of harm to communities. The harm is realized, not just potential, so it is an AI Incident rather than a hazard or complementary information.

Social media. A "coup d'état in France"? This AI-generated video infuriates Emmanuel Macron

2025-12-17
Le Bien Public
Why's our monitor labelling this an incident or hazard?
The video is explicitly described as AI-generated deepfake content, which is an AI system producing misleading outputs. The widespread dissemination and belief in the false coup announcement constitute harm to communities through misinformation and social disruption. The AI system's use in generating and spreading this content directly contributes to this harm. Hence, this qualifies as an AI Incident under the definition of harm to communities caused by AI-generated misinformation actively spreading and misleading the public.

An African president believes a fake video about a coup d'état in France and calls Macron

2025-12-17
Tchadinfos.com
Why's our monitor labelling this an incident or hazard?
The video is explicitly described as generated by AI and has been viewed by millions, spreading false information about a coup d'Etat. This misinformation has caused real-world consequences, including a foreign president being misled and contacting the French president, creating diplomatic confusion and social chaos. The harm to communities through misinformation and disruption is direct and materialized. Hence, the event meets the criteria for an AI Incident.

Social media. A "coup d'état in France"? This AI-generated video infuriates Emmanuel Macron

2025-12-17
Vosges Matin
Why's our monitor labelling this an incident or hazard?
The video is explicitly described as AI-generated, fulfilling the AI system involvement criterion. The use of this AI system to create and disseminate false information has directly led to harm to communities by spreading misinformation that causes confusion and undermines public trust. The harm is realized, not just potential, as evidenced by the reactions of the public and political figures. Hence, this event meets the definition of an AI Incident rather than a hazard or complementary information.

AI warning from Macron: disinformation is an international threat - Haber Aktüel

2025-12-17
Haber Aktüel
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI-supported video that created a false perception of a coup, which was widely disseminated and believed by many, including international leaders. This misinformation has caused harm to communities by spreading false information and undermining democratic processes. The AI system's role in generating the deceptive video is central to the incident. Hence, it meets the criteria for an AI Incident due to realized harm linked to AI-generated disinformation.

Artificial intelligence: a video claims the overthrow of French President Emmanuel Macron

2025-12-17
مانكيش نت
Why's our monitor labelling this an incident or hazard?
An AI system is clearly involved, as the video is AI-generated content spreading false information about a coup. The harm is realized, since the misinformation disrupts communities and undermines trust in political institutions, fulfilling the harm-to-communities criterion. Therefore, this event qualifies as an AI Incident due to the direct role of AI in causing harm through misinformation dissemination.

"These people are mocking us": Emmanuel Macron protests against AI-generated false information staging a coup d'état in France

2025-12-17
Franceinfo
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly used to generate a realistic fake video spreading false information about a coup, which has been viewed millions of times and caused significant social disruption and political concern. This meets the criteria for harm to communities and potential risk to public order (harm category d). The AI system's use directly contributed to this harm. Although the video was eventually removed by the account owner, copies still circulate, and the platform's refusal to remove it earlier contributed to the ongoing harm. Hence, this is an AI Incident rather than a hazard or complementary information.

AI warning from Macron: "It was made to look as if a coup had taken place in my country"

2025-12-17
F5Haber
Why's our monitor labelling this an incident or hazard?
An AI-generated video falsely showing a coup in France has been viewed by millions, causing many to believe a serious political event occurred, which constitutes harm to communities and democracy. The AI system's role in generating this deceptive content is pivotal to the harm caused. The event involves the use of AI (deepfake video generation) leading directly to misinformation and societal harm, meeting the criteria for an AI Incident.

Macron demands stricter rules for social media

2025-12-17
IT BOLTWISE® x Artificial Intelligence
Why's our monitor labelling this an incident or hazard?
The article does not report a specific AI system causing direct or indirect harm, nor does it describe a particular AI incident or hazard. Instead, it highlights political and governance responses to misinformation spread on social media, which may involve AI-driven content recommendation algorithms, but the AI system itself is not the central subject, nor is a specific harm or plausible future harm from AI detailed. Therefore, this is best classified as Complementary Information, as it provides context on societal and governance responses to AI-related challenges in misinformation.

Meta refused to take down a fake video about a coup in France with 12 million views

2025-12-17
BGNES: Breaking News, Latest News and Videos
Why's our monitor labelling this an incident or hazard?
The event clearly involves an AI system used to create a fake video that misled millions, including political figures, thus causing harm to communities by spreading misinformation and undermining democratic stability. The AI-generated content's dissemination and Meta's initial refusal to remove it directly contributed to this harm. Therefore, this qualifies as an AI Incident due to realized harm caused by the AI system's outputs.

Statement from Macron on the AI-generated video portraying France as if a coup had occurred

2025-12-17
Sputnik Türkiye
Why's our monitor labelling this an incident or hazard?
The video is explicitly described as AI-generated and has caused real-world harm by misleading millions into believing a coup occurred in France, which is a clear harm to communities and public trust. The AI system's role in generating this deceptive content is pivotal to the incident. The refusal of the platform to remove the video despite requests further compounds the harm. Therefore, this event meets the criteria for an AI Incident as the AI system's use has directly led to significant harm.

Deepfake videos spark unrest in Europe

2025-12-18
Chosun.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI-generated deepfake videos (an AI system) that have directly caused harm by spreading false claims about coups, defaming political leaders, and influencing election outcomes. The harms include social unrest, misinformation, and political destabilization, which fall under harm to communities and violations of democratic rights. The involvement of AI in creating these synthetic videos is clear, and the harm is realized, not just potential. Hence, this is an AI Incident rather than a hazard or complementary information.

Macron criticizes Facebook for not removing a fake news story about a coup d'état in France

2025-12-17
RFI
Why's our monitor labelling this an incident or hazard?
An AI system was used to generate a false video about a coup, which was widely viewed and caused alarm, including to a foreign president. This misinformation harms public trust and social stability, constituting harm to communities. Facebook's refusal to remove the content means the harm is ongoing. The AI system's use directly led to this harm, meeting the criteria for an AI Incident.

A coup d'état in France? The 'fake news' that Facebook won't delete

2025-12-17
Euronews Español
Why's our monitor labelling this an incident or hazard?
The video is explicitly described as generated by AI, fulfilling the AI system involvement criterion. The use of this AI-generated video has directly led to harm by spreading false information about a coup, which threatens public order and political stability, thus harming communities. The harm is realized and ongoing, not merely potential. The refusal of Facebook to remove the video exacerbates the harm. Hence, this event meets the definition of an AI Incident.

How the AI-generated fake coup d'état that shook France spread

2025-12-17
France 24
Why's our monitor labelling this an incident or hazard?
The article explicitly states that the false coup video was generated with AI and was widely disseminated, causing confusion and concern at the highest political levels. The AI system's use directly led to the spread of misinformation, which harms communities by undermining trust and democratic stability. The harm is realized, not just potential, as millions viewed the false content and a foreign leader was misled. This fits the definition of an AI Incident due to violation of societal trust and harm to communities caused by AI-generated misinformation.

Macron rebukes Facebook over fake video about a putsch

2025-12-18
Kronen Zeitung
Why's our monitor labelling this an incident or hazard?
The article discusses the potential harms caused by AI-driven social media algorithms and fake accounts but does not report a specific AI incident or harm that has already occurred. It focuses on the need for transparency and faster action against fake accounts, which is a governance and societal response to AI-related risks. Therefore, it fits best as Complementary Information, providing context and policy response rather than describing a direct AI Incident or Hazard.

"I don't know why I did it": the creator of the fake French coup video hoped to "break through" on social media

2025-12-19
BFMTV
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system generating a fake video that caused widespread misinformation and social disruption, which is a harm to communities. The AI system's use directly led to this harm, fulfilling the criteria for an AI Incident. The harm is realized, not just potential, as the video was viewed millions of times and cited by the President as an example of problematic AI-generated misinformation. Therefore, this is classified as an AI Incident.

How the AI-generated fake coup d'état that shook France spread - La Tercera

2025-12-18
LA TERCERA
Why's our monitor labelling this an incident or hazard?
The event involves an AI system generating a realistic fake video that falsely reported a coup in France, which was widely viewed and caused real-world confusion and concern. This constitutes harm to communities and democratic processes, fulfilling the criteria for an AI Incident. The AI system's use directly led to the dissemination of false information causing harm, not merely a potential or future risk. Therefore, this is classified as an AI Incident.

AI-generated "coup in France" video caused panic, and an African head of state called Macron

2025-12-18
birgun.net
Why's our monitor labelling this an incident or hazard?
The video was explicitly produced using AI and spread on social media, leading to false beliefs about a coup in France. This misinformation caused real-world concern and panic, including from foreign leaders, demonstrating direct harm to communities through disinformation. The AI system's role in generating the video is pivotal to the incident. Hence, it meets the criteria for an AI Incident due to realized harm from AI-generated misinformation.

Macron criticizes social networks over disinformation and calls for urgent regulation

2025-12-16
El Nacional
Why's our monitor labelling this an incident or hazard?
The article does not report a specific AI Incident or AI Hazard but rather discusses the political and regulatory response to the risks posed by social media algorithms that can spread misinformation and political interference. The AI systems (algorithms) are implicated in the spread of false content, but the article's main focus is on calls for transparency and regulation, not on a particular incident of harm caused by AI or a direct AI malfunction. Therefore, this is Complementary Information providing context on governance and societal responses to AI-related harms in social media.

Macron: "Social networks are mocking us and putting us in danger"

2025-12-16
ABC Digital
Why's our monitor labelling this an incident or hazard?
The article involves AI-generated content (an AI system) that has been used to create false political information with millions of views, which poses a risk to public order and democratic sovereignty. Although the harm is not described as fully materialized (no direct physical harm or confirmed disruption), the content has already caused confusion and political risk, indicating harm to communities and democratic processes. The refusal of Facebook to remove the content despite government intervention shows a failure to mitigate this harm. Therefore, this event describes an AI Incident because the AI system's use has directly led to harm in the form of misinformation and political destabilization. The article also discusses governance responses, but the primary focus is on the harm caused by the AI-generated content.

Fake video of a coup d'état in France puts President Macron in direct conflict with Facebook - La Cuarta

2025-12-18
La Cuarta
Why's our monitor labelling this an incident or hazard?
The event clearly involves an AI system used to generate a deepfake video containing false content. The use of this video directly led to harm by spreading misinformation that misled millions, including foreign leaders, thus harming communities and democratic processes. This fits the definition of an AI Incident because the AI system's use directly caused significant harm. The article describes realized harm, not just potential harm, so it is not an AI Hazard; nor is it merely complementary information or unrelated news, since the AI-generated content caused actual harm.

Macron criticizes Facebook for not removing a fake news story about a coup d'état in France

2025-12-17
Acento
Why's our monitor labelling this an incident or hazard?
The video is explicitly described as AI-generated, indicating the involvement of an AI system in creating misleading content. The false information has been disseminated widely (13 million views), causing harm to public discourse and potentially to societal stability, which qualifies as harm to communities. The refusal of Facebook to remove the content despite requests further exacerbates the harm. Therefore, this event constitutes an AI Incident due to the realized harm caused by the AI-generated misinformation and the platform's role in its persistence.

A putsch in France? Anger at Facebook over AI videos | Heute.at

2025-12-18
Heute.at
Why's our monitor labelling this an incident or hazard?
The article explicitly describes the use of AI to generate fake videos that falsely claim a military coup in France, which were widely viewed and spread misinformation. This misinformation led to public confusion, political statements from the French president, and concerns about democratic stability, constituting harm to communities. The AI system's use in creating these videos directly led to this harm. The event is not merely a potential risk but a realized incident of harm caused by AI-generated content. Hence, it fits the definition of an AI Incident.

Fictitious coup d'état in France: "An African president then contacted me," reveals Emmanuel Macron

2025-12-18
Senenews - Actualité Politique, Économie, Sport au Sénégal
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI-generated deepfake video that falsely reports a coup d'état, which has been widely disseminated and caused real-world confusion and concern among political leaders. The AI system's role in generating and spreading this misinformation directly led to harm in the form of political destabilization and misinformation affecting communities and international relations. This meets the criteria for an AI Incident as the harm is realized and directly linked to the AI system's use.

Macron versus Meta: the giant refuses to censor a fake coup d'état video

2025-12-18
Génération-NT
Why's our monitor labelling this an incident or hazard?
The event involves an AI system generating a false video that has been widely disseminated, causing misinformation harm and public confusion at a national and international level. The harm includes disruption to societal trust and potential political destabilization, which fits the harm to communities category. The AI system's use in creating the video is central to the incident, and the harm is realized, not just potential. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information.

Viral video on Facebook sends Macron into a rage 😡

2025-12-18
SWR3.de
Why's our monitor labelling this an incident or hazard?
The video is explicitly described as AI-generated (deepfake) and is spreading false information that has already caused harm by misleading millions of viewers and alarming political figures. The AI system's role in creating and disseminating this harmful content is direct and pivotal. The harm is to communities through misinformation and potential disruption of public order, fitting the definition of an AI Incident. The article does not merely warn about potential harm but reports on actual harm caused by the AI-generated video.

AI-generated fake news alarms social networks and worries even the Élysée - Siècle Digital

2025-12-18
Siècle Digital
Why's our monitor labelling this an incident or hazard?
The event involves an AI system generating a fake news video that has been widely viewed and caused real-world concern and misinformation. This constitutes harm to communities and potentially to democratic stability, fulfilling the criteria for an AI Incident. The AI system's use in creating and disseminating false content directly led to this harm, not merely a potential or future risk. Therefore, this is classified as an AI Incident.

Macron demands greater control over social media

2025-12-16
La RepúblicaEC
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI-generated content (an AI-created video) that has already caused harm by spreading misinformation with millions of views, potentially disrupting public order and democratic stability. The AI system's outputs (the generated video) have directly led to harm in the form of misinformation and political interference, which aligns with harm to communities and violation of democratic rights. Although the article focuses on calls for regulation and transparency, the described AI-generated misinformation is an actual incident causing harm, not just a potential hazard. Therefore, this qualifies as an AI Incident.

Facebook in turmoil: anger, fake news and a far-right surge

2025-12-18
L'ADN
Why's our monitor labelling this an incident or hazard?
The presence of AI systems is explicit in the generation of the fake coup video and the use of AI-generated images and videos for political manipulation. The harm includes misinformation (harm to communities), political polarization, and the spread of extremist content, which are direct harms caused or amplified by AI systems. The article also notes the platform's algorithmic recommendation system promoting emotionally charged and extremist content, further contributing to harm. Therefore, this event meets the criteria for an AI Incident due to realized harm linked to AI system use and malfunction (inadequate content moderation and amplification of harmful content).

Fictitious coup d'état in France: Emmanuel Macron reveals "An African president then contacted me"

2025-12-19
Senegal Direct
Why's our monitor labelling this an incident or hazard?
An AI system was used to create a deepfake video spreading false information about a coup in France. This misinformation directly caused harm by misleading a foreign head of state and potentially destabilizing political relations. The harm to communities and political order is clear and realized, not just potential. Hence, this event meets the criteria for an AI Incident as the AI-generated content directly led to harm.

AI-generated video announcing a fake coup d'état in France: "these people are mocking us," protests Emmanuel Macron

2025-12-18
Franceinfo
Why's our monitor labelling this an incident or hazard?
The event clearly involves an AI system generating synthetic video content that falsely reports a coup d'état, which has been widely disseminated and caused public alarm. This constitutes harm to communities through misinformation and social disruption, fulfilling the criteria for an AI Incident. The AI system's use directly led to the harm by producing and spreading false information. The refusal of the platform to remove the content initially and the ongoing circulation of copies further exacerbate the harm. Therefore, this is classified as an AI Incident rather than a hazard or complementary information.

Meta refuses Macron's request to remove AI video claiming a coup in France

2025-12-18
Cybernews
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system generating synthetic video content that falsely claims a coup, which has been widely disseminated and caused concern among political leaders and the public. This misinformation harms communities by undermining trust and potentially destabilizing democratic processes, fitting the harm to communities category. The AI system's use directly led to this harm. Meta's refusal to remove the content despite reports further compounds the issue. Hence, the event meets the criteria for an AI Incident rather than a hazard or complementary information.

A coup d'état in France?! No, an AI video, and even Macron was fooled

2025-12-18
LEBIGDATA.FR
Why's our monitor labelling this an incident or hazard?
The video was generated by an AI system (deepfake technology) and has directly caused harm by spreading false information that misled millions, including government officials. This constitutes harm to communities and a breach of informational integrity, fitting the definition of an AI Incident. The event involves the use of AI-generated content that has already caused significant societal harm, not just a potential risk, so it is classified as an AI Incident rather than a hazard or complementary information.

"Coup d'état in France": the fake video that infuriated Macron

2025-12-18
La Repubblica.it
Why's our monitor labelling this an incident or hazard?
The video is explicitly described as AI-generated (deepfake), which qualifies as an AI system's output. The false announcement of a coup d'état is a significant harm to communities and political order, fulfilling the harm to communities criterion. The incident involves the use of AI to create and spread misinformation that has already caused harm, not just a potential risk. Therefore, this event qualifies as an AI Incident due to the realized harm from AI-generated misinformation affecting societal stability and public trust.

Fake coup d'état in France: Macron is furious

2025-12-19
Tgcom24
Why's our monitor labelling this an incident or hazard?
An AI system was used to generate a fake video that falsely claimed a coup had occurred, which was viewed over 13 million times and caused real-world concern, including a foreign official being misled. This constitutes harm to communities through misinformation and social disruption. Therefore, this event qualifies as an AI Incident because the AI-generated content directly led to harm by spreading false narratives and causing confusion and concern among the public and officials.

Paris taken by a coup: how an AI video caused Macron a major headache

2025-12-19
Euronews English
Why's our monitor labelling this an incident or hazard?
The event clearly involves an AI system (Sora 2, an AI video generation technology) used to create a realistic but fake video. The video caused harm by spreading false information about a coup, which is a violation of rights related to truthful information and harms communities by threatening democratic stability. The harm is realized, not just potential, as the video was widely viewed and caused concern among international leaders and the public. Therefore, this qualifies as an AI Incident due to the direct link between AI-generated content and harm to communities and democratic processes.

Paris taken by a coup d'état: Macron troubled by an AI video

2025-12-19
euronews
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly mentioned (Sora 2) used to create a deepfake video that spread false information about a coup, which was widely viewed and caused real-world concern and disruption. The harm is realized and significant, affecting public trust and democratic stability, which falls under harm to communities. The AI system's use directly led to this harm. Although the creator did not intend to misinform, the impact was materialized. Hence, this is an AI Incident rather than a hazard or complementary information.

A coup d'état in France? Macron: fake news must come down; Meta says no

2025-12-17
euronews
Why's our monitor labelling this an incident or hazard?
The video was explicitly generated by an AI system and disseminated on a social media platform, leading to widespread misinformation about a coup in France. This misinformation has caused public alarm and political concern, which constitutes harm to communities. The AI system's role in creating and spreading this false content is pivotal. The event describes actual harm occurring, not just potential harm, so it meets the criteria for an AI Incident rather than an AI Hazard or Complementary Information.

France, "coup d'état": the video and Macron's fury | Libero Quotidiano.it

2025-12-18
Quotidiano Libero
Why's our monitor labelling this an incident or hazard?
The video is explicitly described as generated by artificial intelligence and is spreading false information about a coup, which is a clear harm to communities through misinformation. This meets the criteria for an AI Incident because the AI system's use has directly led to harm in the form of misinformation and social disruption. The refusal of the social network to remove the video further exacerbates the issue.

Despite making only €7 from viral French 'coup' video, Burkinabe teen has no regrets

2025-12-20
Malay Mail
Why's our monitor labelling this an incident or hazard?
The article explicitly states that the teenager used artificial intelligence to create a fake news video that falsely depicted a coup in France. The video went viral, causing misinformation and public confusion, which is a form of harm to communities. The involvement of AI in generating the misleading content directly led to this harm. Although the monetary gain was minimal, the social and political impact, including the French President's reaction, confirms the harm. Hence, this event meets the criteria for an AI Incident.

Fake coup d'état in France: Macron against Facebook. The fake video created by a teenager from Burkina Faso sparks panic

2025-12-18
QuotidianoNet
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly mentioned as used to create a realistic fake video that spread widely and caused panic and political disruption. The harm is realized and significant, affecting public trust, political stability, and democratic discourse, which falls under harm to communities. The refusal of the platform to remove the content despite official requests exacerbates the impact. Hence, the AI system's use directly led to harm, meeting the criteria for an AI Incident.

"I just created it (...) to break through": who is behind the AI video announcing a fake coup d'état in France?

2025-12-19
RTL.fr
Why's our monitor labelling this an incident or hazard?
The video was explicitly created using AI technology (an AI-generated virtual journalist) and was widely shared, causing misinformation and diplomatic alarm. The harm is realized as the false video misled millions, including political leaders, and provoked an official response. The AI system's use directly led to harm to communities by spreading false narratives, fulfilling the criteria for an AI Incident under harm to communities. The event is not merely a potential hazard or complementary information but a clear incident of AI misuse causing harm.

"I just created it like that": the teenager behind the fake French coup d'état video has been found

2025-12-19
La Voix du Nord
Why's our monitor labelling this an incident or hazard?
The event clearly involves an AI system used to generate a realistic fake video spreading false information about a coup d'État, which caused significant social disruption and misinformation. The harm to communities is direct and materialized, as evidenced by the viral spread and political attention. The AI system's use directly led to this harm, fulfilling the criteria for an AI Incident. The subsequent removal and apology do not negate the occurrence of harm but rather are responses to it.

Fake coup d'état in France: the hoax's author admits he wanted to "break through" online

2025-12-19
Planet
Why's our monitor labelling this an incident or hazard?
An AI system was explicitly involved in generating the false video content. The use of AI to create realistic but false news led to widespread misinformation, which caused harm by sowing doubt at the highest levels of government and among the public, as well as diplomatic concern. This meets the criteria for harm to communities and disruption of political stability, thus constituting an AI Incident. The harm is realized, not just potential, as the misinformation was widely viewed and had tangible effects.

Burkinabe teen behind viral French 'coup' video has no regrets

2025-12-19
RTL Today
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of AI to create a fake news video that falsely depicted a coup in France. The video went viral, causing public alarm and drawing official attention, including from the French President. The harm here is the spread of misinformation, which is a form of harm to communities and societal trust. The AI system's use directly led to this harm, fulfilling the criteria for an AI Incident. Although the creator's intent was financial gain, the realized harm from the AI-generated disinformation is clear and significant.

Burkinabe teen behind viral French 'coup' video has no regrets | FOX 28 Spokane

2025-12-19
FOX 28 Spokane
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of an AI system to generate fake news content that was widely disseminated, causing misinformation and social harm. The harm includes violation of informational integrity and potential harm to communities by spreading false political events. The AI system's use directly led to this harm, fulfilling the criteria for an AI Incident. The article describes actual harm occurring, not just potential harm, and the AI system's role is pivotal in creating the false video content.

How an AI Video fooled millions into believing France fell - Break the Fake - TVP WORLD

2025-12-20
Democratic Underground
Why's our monitor labelling this an incident or hazard?
An AI system was used to generate a realistic fake video that caused misinformation to spread widely, misleading the public about a serious political event. This constitutes harm to communities by undermining democratic stability and public trust. Since the AI-generated content directly led to this harm, the event qualifies as an AI Incident.

AI-Generated Fake Video Claiming Macron's Ouster Sparks Controversy | Sada Elbalad

2025-12-21
see.news
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system used to generate a fake video that falsely claims a coup, which spread widely and caused real-world concern, including among government officials. The misinformation harms communities by undermining trust and democratic processes, fitting the definition of harm to communities. The AI system's use directly led to this harm. Hence, this is an AI Incident rather than a hazard or complementary information.

AI-generated fake news of a French coup tops ten million views; Macron's call for Facebook to take it down is refused | 聯合新聞網

2025-12-18
UDN
Why's our monitor labelling this an incident or hazard?
The event involves an AI system generating fake news content that has been widely viewed and caused real-world confusion and potential harm to public safety and democratic order. The AI-generated misinformation has directly led to harm to communities and the political environment, fulfilling the criteria for an AI Incident. The refusal of the platform to remove the harmful AI-generated content exacerbates the issue. Therefore, this is classified as an AI Incident due to realized harm caused by the AI system's outputs.

Fake news claiming a coup in France sows chaos; Macron's demand that Facebook take it down is refused | 經濟日報

2025-12-17
Udnemoney聯合理財網
Why's our monitor labelling this an incident or hazard?
An AI system was explicitly involved in generating the fake news video that falsely depicted a coup in France. This misinformation has directly led to harm by causing confusion among political leaders and potentially destabilizing public order and democratic processes, which constitutes harm to communities and public safety. Therefore, this event qualifies as an AI Incident due to the realized harm caused by the AI-generated disinformation.

(Video) Macron ousted in a coup? AI fake video draws 13 million views as Meta's refusal to remove it stirs controversy | Newtalk新聞

2025-12-18
新頭殼 Newtalk
Why's our monitor labelling this an incident or hazard?
An AI system was used to create a highly realistic fake video (deepfake) that falsely claims a political coup, which has been widely disseminated and believed, causing misinformation harm to communities and potentially undermining democratic institutions. The harm is realized as the misinformation is actively spreading and influencing public perception. Meta's refusal to remove the content despite official denials further contributes to the harm. This fits the definition of an AI Incident because the AI-generated content directly led to harm to communities and public order through misinformation.

Fake news claiming a coup in France sows chaos; Macron's demand that Facebook take it down is refused | 中央社 CNA

2025-12-17
Central News Agency
Why's our monitor labelling this an incident or hazard?
An AI system was explicitly involved in generating the fake news video that falsely depicted a coup in France. This misinformation caused real-world harm by misleading political leaders and potentially destabilizing public order, fulfilling the criteria for harm to communities and public safety. The AI-generated content's role is pivotal in causing this harm, making this an AI Incident rather than a hazard or complementary information. The article details actual harm occurring, not just potential harm, and the AI system's involvement is clear and direct.

AI fake video on Facebook claims a coup in France; Macron's takedown request is refused by Meta

2025-12-18
公共電視
Why's our monitor labelling this an incident or hazard?
The event involves AI systems explicitly generating fake video content that falsely reports a military coup, which is a clear case of AI-generated misinformation causing harm to communities by spreading false information that disrupts social order and public trust. The harm is realized as the videos garnered millions of views and caused international concern. The AI system's use in creating and disseminating these videos directly led to this harm. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information.

Fake news claiming a coup in France sows chaos; Macron's demand that Facebook take it down is refused - 民視新聞網

2025-12-17
民視新聞網
Why's our monitor labelling this an incident or hazard?
The AI system's use in generating the fake news video directly led to harm by spreading misinformation that caused confusion and risk to public safety and democratic discourse, fulfilling the criteria for an AI Incident. The harm is realized, not just potential, as the misinformation was widely viewed and influenced a foreign leader's perception. The refusal of the platform to remove the content exacerbates the harm. Therefore, this event qualifies as an AI Incident due to the direct harm caused by AI-generated misinformation.

AI fake video on Facebook proclaims a "French coup"; Macron's takedown demand is refused - 民視新聞網

2025-12-18
民視新聞網
Why's our monitor labelling this an incident or hazard?
The event involves AI systems generating fake videos that falsely depict a political coup, which is misinformation causing harm to communities by disrupting social stability and public safety. The harm is realized as the misinformation was widely viewed (12 million views) and even caused concern among foreign leaders. The AI system's use in creating and disseminating this false content directly led to these harms. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information.

AI-forged French coup video goes viral with over ten million views! Macron furious as Meta's refusal to remove it ignites controversy | 鉅亨網

2025-12-20
Anue鉅亨
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly used to generate a fake video that falsely reports a military coup, which was viewed millions of times and caused real-world panic and diplomatic concerns. The AI-generated disinformation directly harmed communities by spreading false information that disrupted social stability and international relations. The harm is realized and significant, not merely potential. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use directly led to harm to communities and public order.

Fake putsch at the Élysée: the Burkinabè high-schooler behind the AI video hoped to make money

2025-12-19
Yahoo actualités
Why's our monitor labelling this an incident or hazard?
An AI system was explicitly involved in generating the fake video, which was disseminated widely and caused misinformation harm to communities. The harm is realized, not just potential, as the video reached millions and was publicly addressed by a national leader. The use of AI to create and spread false information that disrupts social trust and political discourse fits the definition of an AI Incident under harm to communities. The event is not merely a potential hazard or complementary information but a clear case of AI misuse causing harm.

Who is behind the AI-generated video of the fake coup in France?

2025-12-19
Le Figaro.fr
Why's our monitor labelling this an incident or hazard?
An AI system was used to create a realistic fake video, which was then shared widely, constituting misinformation that harms communities by spreading false narratives. This meets the criteria for an AI Incident because the AI-generated content directly led to harm through misinformation and social disruption.

The creator of the fake video of a putsch in France did it for the money

2025-12-19
20minutes
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system generating a fake video that spread misinformation widely, which is a form of harm to communities and public trust. The harm is realized as the video gained millions of views and was even referenced by a political leader, indicating significant impact. The AI system's use in creating and disseminating this false content directly led to the harm. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information.

"To scare people": the high-schooler behind the viral video of the fake coup in France explains his act

2025-12-19
SudOuest.fr
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system used to generate a fake video that spread false information about a coup d'État in France. The harm caused is the dissemination of misinformation (harm to communities), which is a recognized form of harm under the framework. The AI system's use directly led to this harm, fulfilling the criteria for an AI Incident. The event is not merely a potential risk or a complementary update but a realized incident of harm caused by AI-generated misinformation.

Fake putsch in France: the Burkinabè high-schooler behind the video explains himself - La Nouvelle Tribune

2025-12-19
La Nouvelle Tribune
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of AI to create deceptive content that was widely disseminated and caused real-world confusion and concern, including political and social impacts. The AI system's use directly led to harm to communities by spreading misinformation and influencing public perception. The harm is realized, not just potential, as evidenced by the public and political reactions. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information.

The video of the fake coup in France, which considerably angered Emmanuel Macron, was in fact made by a 17-year-old high-schooler from Burkina Faso, and earned him... 7 euros!

2025-12-20
Jean Marc Morandini
Why's our monitor labelling this an incident or hazard?
An AI system was explicitly used to generate a fake video spreading false information about a coup in France. The video was widely disseminated, causing social disruption and concern at the highest political level. This misinformation harms communities by spreading false narratives and undermining trust in information. The harm is realized, not just potential, as the video was viewed millions of times and discussed publicly. Hence, the event meets the criteria for an AI Incident due to direct harm caused by the AI-generated disinformation.

Fake video claiming a "coup" in France causes uproar

2025-12-18
TRT haber
Why's our monitor labelling this an incident or hazard?
The video is explicitly described as AI-generated, indicating the involvement of an AI system in content creation. The harm caused includes misinformation leading to public panic, international diplomatic concern, and threats to democratic discourse, which fall under harm to communities and violation of rights. The AI system's use in producing and spreading the false video directly caused these harms. Hence, this event meets the criteria for an AI Incident rather than a hazard or complementary information.

'Coup' video throws France into confusion! Macron takes action

2025-12-18
Haber7.com
Why's our monitor labelling this an incident or hazard?
The video is explicitly described as AI-generated and falsely depicts a serious event (a coup) that did not occur, leading to public confusion and diplomatic concern. This constitutes harm to communities and democratic processes, fitting the definition of an AI Incident where the AI system's use has directly led to harm. The involvement of AI in generating the misleading content and its widespread impact justifies classification as an AI Incident rather than a hazard or complementary information.

Video claiming a 'coup' in France throws the country into deep confusion

2025-12-18
Yeni Şafak
Why's our monitor labelling this an incident or hazard?
The video is explicitly described as AI-generated and falsely claims a coup, leading to widespread misinformation and public disturbance. This constitutes harm to communities and democracy, fulfilling the criteria for an AI Incident. The AI system's use in producing the deceptive content directly caused the harm, and the event is not merely a potential risk but a realized incident of AI-driven disinformation.

Which leader phoned Macron: AI got mixed up in a coup... The video that caused uproar

2025-12-19
Haber Sitesi ODATV
Why's our monitor labelling this an incident or hazard?
The video was explicitly created using AI to generate false news content, which was widely viewed and caused real-world consequences including panic and diplomatic concern. The AI system's use directly led to harm by spreading misinformation and undermining democratic discourse, fulfilling the criteria for an AI Incident under violations of rights and harm to communities. The involvement of AI in the creation and dissemination of the false video is clear and central to the harm caused.

'Coup in France' video causes uproar! Macron issues a statement...

2025-12-18
Aydınlık
Why's our monitor labelling this an incident or hazard?
The video is explicitly described as fully AI-generated, indicating the involvement of an AI system in content creation. The widespread dissemination of this false information has directly led to public panic and confusion, which constitutes harm to communities. The refusal by Meta to remove the content exacerbates the situation, indirectly contributing to the harm. Therefore, this event meets the criteria of an AI Incident because the AI system's use has directly or indirectly led to harm to communities through misinformation and social disruption.