AI-Generated Fake Images of Donald Trump’s Arrest Spread Misinformation Online

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

British journalist Eliot Higgins used the AI tool Midjourney to create and share realistic but fake images depicting Donald Trump’s arrest, which went viral on social media. The incident led to widespread misinformation, prompting Midjourney to ban Higgins and restrict related content to curb further harm to public discourse.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly states that AI-generated images were created and shared, leading to widespread misinformation about a significant political event. The AI system's use directly contributed to the spread of false narratives, which can harm public discourse and trust, thus meeting the criteria for an AI Incident. The harm is realized (not just potential), as the misinformation has already circulated widely and influenced public perception. Therefore, this event qualifies as an AI Incident rather than a hazard or complementary information.[AI generated]
AI principles
Accountability, Transparency & explainability, Democracy & human autonomy, Safety

Industries
Media, social platforms, and marketing

Affected stakeholders
General public

Harm types
Reputational, Public interest

Severity
AI incident

AI system task
Content generation

Articles about this incident or hazard

Les dessous de l'infox, the column – Donald Trump's fake arrest: artificial intelligence misused

2023-03-24
RFI
Why's our monitor labelling this an incident or hazard?
The article explicitly states that AI-generated images were created and shared, leading to widespread misinformation about a significant political event. The AI system's use directly contributed to the spread of false narratives, which can harm public discourse and trust, thus meeting the criteria for an AI Incident. The harm is realized (not just potential), as the misinformation has already circulated widely and influenced public perception. Therefore, this event qualifies as an AI Incident rather than a hazard or complementary information.
Midjourney bans the journalist behind the fake images of Donald Trump's arrest

2023-03-22
BFMTV
Why's our monitor labelling this an incident or hazard?
The AI system (Midjourney) was used to create false images that could mislead the public, constituting harm to communities through misinformation. The ban and content restrictions are responses to this misuse. Since the false images were generated and shared, causing potential harm, this qualifies as an AI Incident due to the direct role of the AI system in producing misleading content that harms public discourse and trust.
These images of Donald Trump's arrest were created by an AI

2023-03-23
L'Éclaireur Fnac
Why's our monitor labelling this an incident or hazard?
An AI system (Midjourney) was used to generate images that are false but do not cause direct or indirect harm as defined by the framework. The event involves the use of AI to create misleading content, but there is no indication that this has led to injury, rights violations, disruption, or other significant harms. The platform's response to restrict content and ban the user is a governance action. Therefore, this is Complementary Information about AI use and governance rather than an AI Incident or Hazard.

2023-03-25
developpez.net
Why's our monitor labelling this an incident or hazard?
The event clearly involves the use of an AI system (Midjourney v5) to generate synthetic images. The images are false and realistic, circulating widely on social media, which can mislead people and contribute to misinformation, a form of harm to communities. Although the article reports increased threats and preparations for unrest, it does not confirm that harm has already occurred due to these images. Therefore, the AI system's use could plausibly lead to an AI Incident but has not yet directly or indirectly caused realized harm. This fits the definition of an AI Hazard rather than an AI Incident. The discussion about labeling and platform policies further supports the potential for harm but does not indicate harm has materialized yet.
AI-doctored images of Donald Trump's imaginary arrest circulate on Twitter, showing Donald Trump resisting arrest and being dragged away by police

2023-03-25
Developpez.com
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (Midjourney v5) used to generate manipulated images. The images are fake but realistic and have been widely circulated, potentially misleading people and contributing to social tensions. Although no direct harm has yet occurred, the article notes increased threats and preparations for unrest, indicating a credible risk of harm to communities. The AI system's use in generating misleading content that could incite violence or social disruption fits the definition of an AI Hazard, as the harm is plausible but not yet realized. There is no indication that the AI system malfunctioned or was misused beyond its intended function, and no direct harm has materialized, so it is not an AI Incident. The article is not primarily about responses or governance measures, so it is not Complementary Information. Hence, the classification is AI Hazard.