French AI Chatbot Mistral Amplifies State-Sponsored Disinformation


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

A NewsGuard report found that Mistral AI's chatbot, Le Chat, frequently repeats false information from Russian, Chinese, and Iranian state propaganda campaigns. In tests, the chatbot relayed disinformation in over 50% of cases, raising concerns about its vulnerability to and amplification of harmful misinformation.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly mentions an AI system (Mistral's chatbot 'Le Chat') that is relaying disinformation, which constitutes harm to communities through misinformation and propaganda. This is a direct link between the AI system's use and realized harm, fitting the definition of an AI Incident: the system's use to spread false information has caused harm to communities.[AI generated]
AI principles
Robustness & digital security; Safety

Industries
Media, social platforms, and marketing

Affected stakeholders
General public

Harm types
Public interest

Severity
AI incident

AI system task
Interaction support/chatbots; Content generation


Articles about this incident or hazard


French AI Mistral accused of relaying Russian, Iranian, and Chinese disinformation

2026-04-28
Le Figaro.fr
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (Mistral's chatbot 'Le Chat') that is relaying disinformation, which constitutes harm to communities through misinformation and propaganda. This is a direct link between the AI system's use and realized harm, fitting the definition of an AI Incident: the system's use to spread false information has caused harm to communities.

French artificial intelligence start-up Mistral relays Russian, Chinese, and Iranian disinformation

2026-04-28
Franceinfo
Why's our monitor labelling this an incident or hazard?
The chatbot is an AI system that generates conversational responses. The report shows that it repeated false information from disinformation campaigns, meaning direct use of the AI system led to harm by spreading misinformation. This harm affects communities and public discourse, fitting the definition of harm to communities under AI Incident criteria. The involvement is through the AI system's use, and the harm is realized, not just potential. Hence, the event is classified as an AI Incident.

" Vecteur de propagande " : L'IA française Mistral répète de la désinformation russe, iranienne ou chinoise, selon une étude

2026-04-28
Le Parisien
Why's our monitor labelling this an incident or hazard?
The AI system (Le Chat de Mistral) is explicitly mentioned and is shown to have repeated false information from state actors, which constitutes a violation of rights related to truthful information and causes harm to communities through misinformation. The harm is realized as the chatbot has already disseminated false claims, making this an AI Incident rather than a potential hazard or complementary information. The AI's role is pivotal as it directly repeats and amplifies disinformation.

France's artificial intelligence star, Mistral, relays Russian disinformation

2026-04-28
LesEchos.fr
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the Mistral chatbot) that is actively repeating and amplifying disinformation, causing direct harm to communities by spreading false information and undermining trust. The AI system's outputs have directly led to the dissemination of harmful misinformation, fulfilling the criteria for an AI Incident under harm to communities. The article provides evidence of the AI system's involvement in the harm, not just a potential risk, so it is not merely a hazard or complementary information. The harm is realized and ongoing, making this an AI Incident.

Mistral AI fooled by Russian propaganda: the French chatbot repeats fake news in more than one in two cases

2026-04-28
01net
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (Mistral AI chatbot) that is used publicly and repeats false information from propaganda campaigns. The AI's use directly leads to harm to communities by spreading misinformation, fulfilling the criteria for an AI Incident. The harm is not potential but ongoing, as the chatbot repeats falsehoods in a majority of cases tested. The involvement stems from the AI system's use and its training methodology, which allows it to absorb and reproduce false information. This meets the definition of an AI Incident because the AI system's use has directly led to harm to communities through misinformation dissemination.