
The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.
A NewsGuard report found that Mistral AI's chatbot, Le Chat, frequently repeats false information from Russian, Chinese, and Iranian state propaganda campaigns. In tests, the chatbot relayed disinformation in over 50% of cases, raising concerns about its susceptibility to, and amplification of, harmful misinformation.[AI generated]
Why is our monitor labelling this an incident or hazard?
The article explicitly identifies an AI system, Mistral's chatbot Le Chat, as relaying disinformation, which constitutes harm to communities through misinformation and propaganda. Because the harm arises directly from use of the AI system, the event fits the definition of an AI Incident: the system was used to spread false information, causing realized harm to communities.[AI generated]