AI Systems Targeted in Disinformation Campaigns Ahead of Bulgarian Elections


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Investigative journalist Christo Grozev warns that disinformation campaigns by Russia, Iran, and China are increasingly targeting AI systems to manipulate public opinion and influence election outcomes in Bulgaria. These efforts aim to exploit AI-generated content, posing new risks to democratic processes and societal stability.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly mentions AI systems being influenced by disinformation campaigns, which could plausibly lead to significant societal harm such as manipulation of election outcomes and public opinion. However, it does not describe any realized harm or incident where AI systems have already caused such effects. Therefore, it fits the definition of an AI Hazard rather than an AI Incident. The discussion is forward-looking and warns about potential misuse and influence on AI outputs, which aligns with the concept of plausible future harm.[AI generated]
AI principles
Democracy & human autonomy; Transparency & explainability

Industries
Media, social platforms, and marketing; Government, security, and defence

Affected stakeholders
General public; Government

Harm types
Public interest; Human or fundamental rights

Severity
AI hazard

AI system task
Content generation; Organisation/recommenders


Articles about this incident or hazard


Christo Grozev: The most dangerous propaganda will no longer be on TikTok, but in AI answers

2026-03-29
Petel.bg

Christo Grozev: The most dangerous propaganda will no longer be on TikTok, but in AI

2026-03-29
Fakti.bg
Why's our monitor labelling this an incident or hazard?
The article explicitly describes AI systems as targets of disinformation campaigns and warns of a new phase of influence operations in which AI-generated or AI-targeted content could manipulate AI outputs, potentially affecting election outcomes. This constitutes a plausible risk of future harm to communities and democratic processes. Since no actual harm has yet occurred or been documented, and the focus is on potential threats and ongoing influence efforts, this fits the definition of an AI Hazard rather than an AI Incident or Complementary Information.

Christo Grozev: The most dangerous propaganda will no longer be on TikTok, but in AI answers

2026-03-29
bTV Новините
Why's our monitor labelling this an incident or hazard?
The article centres on the potential misuse of AI systems to spread disinformation and influence AI-generated outputs, which could plausibly lead to harms such as manipulation of public opinion and election interference. Since no actual harm has been reported as having occurred, but the risk is credible and ongoing, this fits the definition of an AI Hazard. The presence of AI systems is reasonably inferred from the discussion of AI agents providing answers shaped by manipulated content. The article also describes societal and governance responses, but these are secondary to its main focus on the potential threat.

Christo Grozev: The most dangerous propaganda will no longer be on TikTok, but in AI

2026-03-29
epicenter.bg
Why's our monitor labelling this an incident or hazard?
The article explicitly describes AI systems being used as agents of disinformation, with campaigns designed to influence AI outputs that shape public opinion and elections. This use of AI directly harms communities by spreading false information and manipulating democratic processes. The harm is realized and ongoing, not merely potential, and the AI system's role is pivotal in enabling it. This event therefore meets the criteria for an AI Incident rather than an AI Hazard or Complementary Information.