AI-Generated Fake Soldier Images Used in Russian Disinformation Campaigns Against Ukraine


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

AI-generated images of supposed Ukrainian soldiers and civilians are being spread on social media by bots to manipulate emotions, boost engagement, and facilitate Russian disinformation and fraud. Ukrainian authorities warn these posts are part of an information warfare campaign, exploiting public trust and enabling harmful narratives and scams.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly mentions AI-generated photos being used to spread false narratives and manipulate social media users, which is a direct use of AI systems (generative AI) leading to harm in the form of misinformation and potential fraud. This fits the definition of an AI Incident as the AI system's use has directly led to harm to communities and potential violations of rights through manipulation and fraud. The involvement of bots spreading these posts further supports the AI system's role in causing harm.[AI generated]
AI principles
Accountability; Transparency & explainability; Robustness & digital security; Human wellbeing; Respect of human rights; Democracy & human autonomy; Safety

Industries
Media, social platforms, and marketing; Digital security; Government, security, and defence; Financial and insurance services

Affected stakeholders
General public

Harm types
Public interest; Reputational; Psychological; Economic/Property; Human or fundamental rights

Severity
AI incident

AI system task
Content generation; Goal-driven organisation; Organisation/recommenders


Articles about this incident or hazard


The Centre for Countering Disinformation explains the danger of social media photos of supposed Armed Forces of Ukraine soldiers

2024-09-28
Украинская сеть новостей
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated photos being used to spread false narratives and manipulate social media users, which is a direct use of AI systems (generative AI) leading to harm in the form of misinformation and potential fraud. This fits the definition of an AI Incident as the AI system's use has directly led to harm to communities and potential violations of rights through manipulation and fraud. The involvement of bots spreading these posts further supports the AI system's role in causing harm.

The Centre for Countering Disinformation warns of the dangerous spread on social media of photos of supposed Ukrainian soldiers

2024-09-28
УКРІНФОРМ
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems generating images that are used in social media posts to manipulate public opinion and spread disinformation, which constitutes harm to communities. The use of bots to spread these posts further indicates AI-driven dissemination. The harms include informational manipulation, potential fraud, and facilitation of hostile information campaigns, all of which are realized harms. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to harm to communities and potential violations of rights through misinformation and fraud.

The Centre for Countering Disinformation explains how Russia uses AI and social media in the information war

2024-09-28
InternetUA
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated images being used in social media posts that are spread by bots to influence public sentiment and promote harmful narratives in an ongoing information war. This use of AI-generated content directly leads to harm by manipulating communities and enabling fraud, which fits the definition of an AI Incident. The AI system's use in generating deceptive images and the resulting misinformation and scams constitute realized harm, not just potential harm.

Cute but dangerous: what lies behind the popular photos of soldiers on social media

2024-09-28
Рівне Вечірнє
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems to generate fake images that are disseminated via social media bots, leading to misinformation and manipulation of public opinion. This directly harms communities by spreading disinformation and enabling scams, fulfilling the criteria for an AI Incident. The AI system's use in generating deceptive content and its role in manipulation and fraud constitute realized harm, not just potential risk.

Dangerous posts with AI-generated photos of Ukrainian soldiers are spreading online, says the Centre for Countering Disinformation

2024-09-28
detector.media
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated images used in social media posts that are part of a disinformation campaign, which is actively occurring and causing harm by manipulating public perception and potentially facilitating fraud. The AI system's involvement in generating these images and the subsequent use of these posts to spread harmful narratives and malicious links directly leads to harm to communities and informational integrity. This meets the criteria for an AI Incident as the harm is realized and the AI system's role is pivotal in the event.

The Centre for Countering Disinformation has finally joined the fight against AI-generated greetings

2024-09-27
057.ua (Kharkiv city website)
Why's our monitor labelling this an incident or hazard?
The event involves AI systems generating synthetic images and content that are actively disseminated by bots to influence social media users emotionally and manipulate public opinion. The content is used in an information warfare context, potentially aiding hostile actors and enabling fraud. These harms are realized and ongoing, meeting the criteria for an AI Incident due to direct harm to communities and violation of rights through misinformation and manipulation.

Soldiers explain why AI "greetings" from non-existent people can be dangerous

2024-09-27
Комментарии Украина
Why's our monitor labelling this an incident or hazard?
The use of AI-generated images to create fake social media posts that manipulate public sentiment and spread disinformation constitutes harm to communities and a violation of trust. The involvement of AI in generating these images and the resulting misinformation and potential fraud meets the criteria for an AI Incident, as the AI system's use has directly led to harm through manipulation and possible scams.

The Centre for Countering Disinformation explains how Russia uses AI and social media in the information war

2024-09-27
unn.ua
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated images used in social media posts that are part of a disinformation campaign by Russia against Ukraine. The AI system's use in generating these images and the subsequent spread by bots directly contributes to harm to communities by promoting harmful narratives and enabling fraud. This constitutes an AI Incident because the harm is realized and the AI system's involvement is pivotal in causing misinformation and manipulation.