AI-Generated Fake Image of Kamala Harris and Donald Trump Sparks Disinformation Concerns

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Grok, an AI system owned by Elon Musk, generated a fake image of Kamala Harris and Donald Trump kissing, which went viral on social media. The incident highlights AI's potential to spread disinformation, undermining democratic processes and violating human rights by misleading the public.[AI generated]

Why's our monitor labelling this an incident or hazard?

Grok’s misuse has directly led to the creation and viral spread of misleading deepfake content designed to influence public opinion and interfere with democratic processes. This is a realized harm: deceptive election-related disinformation stemming from the use of an AI system.[AI generated]
AI principles
Transparency & explainability; Accountability; Respect of human rights; Safety; Robustness & digital security; Democracy & human autonomy

Industries
Media, social platforms, and marketing; Government, security, and defence; Digital security

Affected stakeholders
General public

Harm types
Public interest; Reputational; Human or fundamental rights

Severity
AI incident

AI system task
Content generation; Interaction support/chatbots


Articles about this incident or hazard

The fake kiss between Kamala Harris and Donald Trump is a dangerous invention of Elon Musk's AI

2024-08-15
La Repubblica.it
Why's our monitor labelling this an incident or hazard?
Grok’s misuse has directly led to the creation and viral spread of misleading deepfake content designed to influence public opinion and interfere with democratic processes. This is a realized harm: deceptive election-related disinformation stemming from the use of an AI system.
Grok, Elon Musk's AI, is completely unrestrained

2024-08-16
tecnologia.libero.it
Why's our monitor labelling this an incident or hazard?
Grok is an AI system used to generate realistic fake images. The article explicitly states that these images are being widely created and shared, including politically sensitive and misleading content. This harms communities by spreading misinformation and potentially influencing elections. Because the AI system's use has directly led to the dissemination of harmful content, the event meets the criteria for an AI incident rather than a hazard or complementary information.
Five reasons why generative AI is more dangerous than we think (and see)

2024-08-17
La Repubblica.it
Why's our monitor labelling this an incident or hazard?
The event involves generative AI models used to create deepfakes and manipulated content. Their use has directly harmed communities by spreading false and misleading information that can disrupt democratic processes and public trust. This fits the definition of an AI incident because the AI's use has directly caused harm through misinformation and political manipulation; the article does not merely warn of potential harm but documents ongoing misuse and its societal impact.
The excesses of artificial intelligence and the fake kiss between Donald Trump and Kamala Harris

2024-08-15
Gazzetta del Sud
Why's our monitor labelling this an incident or hazard?
The AI system Grok is explicitly identified as generating manipulated images that are viral and misleading. The harm is realized: the disinformation is already circulating and influencing the electorate, which fits the definition of an AI incident through harm to communities. The event involves the use of AI to create and disseminate false content with potential political impact, directly causing harm through misinformation and manipulation.
Grok-2 and AI images with no "ethical" limits

2024-08-19
Giornalettismo
Why's our monitor labelling this an incident or hazard?
Grok-2 is an AI system generating images that infringe copyright by using protected characters and producing deepfakes that can distort public debate. The absence of watermarking or labelling breaches legal requirements, leading to violations of intellectual property rights and to risks of misinformation harming communities. These harms are occurring or have already occurred, making this an AI incident rather than a mere hazard or complementary information; the article explicitly describes the harms and the regulatory responses, confirming direct or indirect harm linked to the AI system's use.
Musk's AI on X is unscrupulous thanks to the copyright wild west

2024-08-17
Il Foglio
Why's our monitor labelling this an incident or hazard?
Grok is explicitly described as a generative AI system creating images from text prompts. Its use has directly led to harms including copyright infringement, dissemination of offensive and misleading content harming communities, and generation of instructions for dangerous acts that pose risks to public safety. These constitute realized harms under the AI incident definition, including harm to communities and violation of intellectual property rights. The article also mentions ongoing investigations and legal scrutiny, but its primary focus is the harms already occurring due to Grok's outputs; the event therefore qualifies as an AI incident.
Elon Musk's Grok: absurd and controversial images without filters - Benzinga Italia

2024-08-18
Benzinga Italia
Why's our monitor labelling this an incident or hazard?
The AI system Grok is explicitly identified as generating images that violate rules on disinformation and abuse, including inappropriate and offensive depictions of public figures. The harm is realized as these images circulate on social media, contributing to misinformation and reputational damage, which falls under harm to communities and violations of rights. The event therefore qualifies as an AI incident given the direct link between the AI system's outputs and the resulting harms.
Grok 2, Musk's artificial intelligence has no limits: controversy over violent or copyright-infringing images

2024-08-17
Corriere della Sera
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Grok 2, an AI image generator) whose use has directly led to the creation and spread of harmful content, including violent and sexually explicit images and misinformation. These outputs can harm communities and potentially violate legal frameworks such as the Digital Services Act, fulfilling the criteria for an AI incident. The harms are realized and ongoing, not merely potential, and the AI system's role in generating the problematic content is pivotal; the classification as an AI incident is therefore appropriate.