
The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.
Investigative journalist Christo Grozev warns that disinformation campaigns by Russia, Iran, and China are increasingly targeting AI systems to manipulate public opinion and influence election outcomes in Bulgaria. These campaigns seek to exploit AI-generated content, posing new risks to democratic processes and societal stability.[AI generated]
Why is our monitor labelling this an incident or hazard?
The article explicitly describes AI systems being targeted by disinformation campaigns, which could plausibly lead to significant societal harm such as the manipulation of election outcomes and public opinion. However, it does not describe any realized harm or incident in which AI systems have already caused such effects. It therefore fits the definition of an AI Hazard rather than an AI Incident: the discussion is forward-looking, warning of potential misuse and influence over AI outputs, which aligns with the concept of plausible future harm.[AI generated]