Trump Shares AI-Generated Image Targeting Biden and Family

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Donald Trump posted an AI-generated image on Truth Social depicting Joe Biden asleep in the Oval Office and his son Hunter using drugs, alongside other political figures. The manipulated image, widely shared online, raises concerns about AI-driven misinformation and reputational harm in U.S. political discourse.[AI generated]

Why's our monitor labelling this an incident or hazard?

An AI system was used to generate a fabricated image of public figures, a direct use of AI to create misleading content. Such content can harm communities by spreading misinformation and damaging reputations. Because the image was actively used by a prominent figure to attack others, the harm is realized rather than merely potential. This therefore qualifies as an AI Incident: the AI system played a direct role in generating harmful content that affects social and political communities.[AI generated]
AI principles
Transparency & explainability
Democracy & human autonomy

Industries
Media, social platforms, and marketing

Affected stakeholders
Government
General public

Harm types
Reputational
Public interest

Severity
AI incident

AI system task
Content generation


Articles about this incident or hazard


Trump mocks Biden with an AI-made image showing his son snorting cocaine

2026-05-07
Diario de Noticias
Why's our monitor labelling this an incident or hazard?
The article explicitly describes the use of AI to create manipulated images for political mockery and criticism. Although the content is misleading and could harm reputations or public trust, the article does not document any direct or indirect harm that has materialized, so it does not meet the threshold for an AI Incident. Because AI-generated manipulated images in a political context could plausibly lead to harm such as misinformation or social discord, it qualifies as an AI Hazard. The article reports potential rather than actualized harm or a governance response, so it is not Complementary Information; and it is not unrelated, since AI systems are clearly involved in generating the images.

The controversial AI image Donald Trump used to humiliate Joe Biden's son

2026-05-07
www.eluniversal.com.co
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI to create a false image for a political attack, which involves an AI system. AI-generated manipulated content can harm communities through misinformation and reputational damage, but the article does not describe any direct or indirect harm meeting the definitions of injury, rights violations, or significant harm, and there is no indication of legal breaches or physical harm. The event concerns the broader political and social implications of AI-generated misinformation. It therefore fits best as Complementary Information, offering insight into AI's role in political discourse and misinformation without reporting a specific incident of harm or a plausible future hazard.

Trump attacks Biden, Obama and Hillary Clinton with an AI-generated image

2026-05-07
Prensa Libre
Why's our monitor labelling this an incident or hazard?
An AI system was used to generate a fabricated image of public figures, a direct use of AI to create misleading content. Such content can harm communities by spreading misinformation and damaging reputations. Because the image was actively used by a prominent figure to attack others, the harm is realized rather than merely potential. This therefore qualifies as an AI Incident: the AI system played a direct role in generating harmful content that affects social and political communities.

"'Sleepy Joe' Biden": the AI-made image with which Donald Trump mocks his predecessor

2026-05-07
Prensa Libre
Why's our monitor labelling this an incident or hazard?
An AI system was used to generate a manipulated image containing false and defamatory content targeting political figures. AI-generated misinformation deployed in a public and influential context constitutes harm to communities through the spread of false information and political manipulation. Because that harm is occurring through the dissemination of this AI-generated content, the event qualifies as an AI Incident under the definition of harm to communities caused by AI-generated misinformation actively spreading false narratives.

Donald Trump posted an AI image showing Hunter Biden using drugs

2026-05-07
LaPatilla.com
Why's our monitor labelling this an incident or hazard?
The AI system was used to generate a false image, a misuse of AI technology for political purposes. This poses a potential risk of harm to communities through misinformation and reputational damage. However, because the article does not document actual harm or legal violations, and its main focus is the publication and political use of the AI-generated image, the event is best classified as Complementary Information: it provides context on AI misuse in political communication without describing a specific AI Incident or an imminent AI Hazard.

Trump posts an AI-created image showing Joe Biden asleep and his son Hunter using drugs

2026-05-07
La Opinión Digital
Why's our monitor labelling this an incident or hazard?
An AI system was explicitly used to generate a manipulated image with false and defamatory content targeting public figures. The resulting harm is reputational and social, affecting communities and potentially violating rights. Because the AI-generated content was actively disseminated and caused harm, this qualifies as an AI Incident under the framework: the use of the AI system directly led to harm to communities and to individuals' rights.

Trump criticizes Biden with a controversial AI image of Hunter Biden in the Oval Office

2026-05-07
El Nacional
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI to generate a controversial image used for political criticism. While the image is misleading and could contribute to misinformation, the article documents no realized harm such as injury, rights violations, or operational disruption. The event concerns the dissemination of AI-generated content and its political impact, a societal and governance issue. With no specific harm or credible imminent harm described, it does not meet the threshold for an AI Incident or an AI Hazard; instead, it enriches understanding of AI's role in political discourse, fitting the definition of Complementary Information.