Elon Musk's Grok AI Generates Controversial Images on X


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Elon Musk's AI chatbot, Grok, has sparked controversy by allowing users to create and share potentially defamatory images of public figures on the social media platform X. These images, depicting figures like Barack Obama and Donald Trump in inappropriate scenarios, raise concerns about misinformation and reputational harm, particularly ahead of the upcoming US presidential election.[AI generated]

Why's our monitor labelling this an incident or hazard?

The AI system (Grok) is explicitly mentioned as generating images that are offensive and misleading, depicting public figures in inappropriate ways. This constitutes a violation of rights (such as reputational harm and potential privacy violations) and harm to communities through the spread of harmful misinformation and offensive content. Since the AI system's use has directly led to these harms, this qualifies as an AI Incident under the framework.[AI generated]
AI principles
Accountability; Safety; Transparency & explainability; Respect of human rights; Democracy & human autonomy

Industries
Media, social platforms, and marketing; Government, security, and defence

Affected stakeholders
General public; Other

Harm types
Reputational; Public interest

Severity
AI incident

Business function:
Other

AI system task:
Content generation; Interaction support/chatbots

In other databases

Articles about this incident or hazard


Elon Musk's Bot Unleashes a Flood... Nude Female Politicians and Offensive Images

2024-08-15
Why's our monitor labelling this an incident or hazard?
The AI system (Grok) is explicitly mentioned as generating images that are offensive and misleading, depicting public figures in inappropriate ways. This constitutes a violation of rights (such as reputational harm and potential privacy violations) and harm to communities through the spread of harmful misinformation and offensive content. Since the AI system's use has directly led to these harms, this qualifies as an AI Incident under the framework.

Nude Female Politicians and Offensive Images... Elon Musk's Bot Unleashes a Flood

2024-08-15
albiladpress.com
Why's our monitor labelling this an incident or hazard?
The AI system (Grok) is explicitly mentioned as generating harmful and offensive images, including false and misleading depictions of public figures, which can cause reputational harm and spread misinformation. The system also disregards copyright rules, violating intellectual property rights. These harms fall under violations of rights and harm to communities. Since the harm is occurring and directly linked to the AI system's outputs, this qualifies as an AI Incident rather than a hazard or complementary information.

Grok Bot on X Causes an Uproar: An Intimate Kiss Between Taylor Swift and Donald Trump | Al Bawaba

2024-08-15
Al Bawaba
Why's our monitor labelling this an incident or hazard?
The Grok robot is an AI system capable of generating images, including manipulated and offensive content. The event reports that these AI-generated images have been widely disseminated, causing public outrage and potential harm to the reputations and rights of the individuals depicted. This constitutes harm to communities and individuals, fulfilling the criteria for an AI Incident. The harm is realized, not just potential, as the offensive images have already spread and caused disruption and distress.

Trump Flying a Plane and Nude Female Politicians... Elon Musk Stirs Controversy After Grok Updates

2024-08-15
aleqaria.com.eg
Why's our monitor labelling this an incident or hazard?
The AI system Grok is explicitly mentioned as generating harmful images that have been widely disseminated, causing reputational harm and spreading misinformation. The images include politically sensitive and sexually explicit content, which can harm individuals and communities. Additionally, the system does not block requests that violate copyright, indicating a breach of intellectual property rights. These harms have materialized, making this an AI Incident rather than a hazard or complementary information. The involvement of the AI system in producing and disseminating these harmful images directly leads to the harms described.

Elon Musk's Grok AI Generates 'Uncensored' Images, Obama Among the Victims

2024-08-16
CNNindonesia
Why's our monitor labelling this an incident or hazard?
Grok is an AI system whose use (and misuse) has directly led to reputational and misinformation harms by generating false or sexualized images of real individuals. This constitutes realized harm through its outputs, so it qualifies as an AI Incident.

Elon Musk's Grok Chatbot Floods X with Controversial AI Images

2024-08-17
TEMPO.CO
Why's our monitor labelling this an incident or hazard?
Grok, an AI system for generating images from text, is actively being used to create and share defamatory, extremist, pornographic, and violent images of real individuals and cultural landmarks. These outputs represent realized harms (reputational injury, misinformation/deepfake risk, potential incitement or community harm) caused by misuse of the AI and circumvention of its policies. Regulators are investigating the platform for AI safety violations. Therefore, this event is classified as an AI Incident.

Beware: Grok Chatbot Is a Source of Misinformation

2024-08-15
SINDOnews Tekno
Why's our monitor labelling this an incident or hazard?
Grok is an AI system generating images from text prompts, explicitly described as producing misleading and false images of public figures, which constitutes misinformation. The misinformation is actively occurring and has attracted regulatory attention, indicating harm to communities and possibly violating rights related to reputation and truthful information. The AI system's use has directly led to these harms, meeting the definition of an AI Incident rather than a hazard or complementary information.

Photo of 5 Israelis Dancing in Front of the 9/11 Attacks Turns Out to Be AI-Fabricated

2024-08-17
SINDOnews Tekno
Why's our monitor labelling this an incident or hazard?
The event involves an AI system generating fabricated images that misrepresent a historical event, which could potentially cause misinformation-related harm. However, the article does not document any realized harm or incidents resulting from these images, only their creation and viral spread. Therefore, it does not meet the threshold for an AI Incident. It also does not primarily focus on potential future harm or warnings about such harm, so it is not an AI Hazard. The article mainly provides information about the AI-generated images and the launch of new AI models, which fits the definition of Complementary Information as it enhances understanding of AI capabilities and their societal implications without reporting a specific harm event.

Shocking Images on X: New Rules Intended to Rein In Musk's AI Chatbot

2024-08-16
N-tv
Why's our monitor labelling this an incident or hazard?
This piece centers on the introduction of restrictions and safeguards (a governance response) following prior misuse of the AI system to produce violent or misleading images. It provides contextual updates on measures taken to mitigate potential harms rather than describing a new incident or highlighting a plausible future hazard. Therefore, it is Complementary Information.

The Day: Elon Musk's AI Bot Gets Limits After All

2024-08-16
N-tv
Why's our monitor labelling this an incident or hazard?
The article discusses the introduction of usage limits on an AI system (Grok) to prevent generation of harmful or inappropriate content. There is no indication that harm has already occurred or that the AI system malfunctioned causing harm. Instead, the focus is on the mitigation of potential harms through content moderation. This fits the definition of Complementary Information, as it provides an update on societal and governance responses to AI-related risks rather than reporting a new AI Incident or AI Hazard.

Musk's AI Chatbot Gets Reined In

2024-08-16
GMX News
Why's our monitor labelling this an incident or hazard?
The AI system (Grok chatbot with FLUX.1 model) is clearly involved in generating images, including potentially harmful or misleading content. However, the article focuses on the imposition of restrictions to prevent harmful outputs and the broader concerns about AI-generated fakes influencing public opinion, without reporting any actual harm or incident caused by the AI outputs. The mention of legal actions and advertiser reactions further supports this as a governance and societal response to AI risks. Thus, it fits the definition of Complementary Information, not an AI Incident or AI Hazard.

Elon Musk's AI Chatbot Grok Gets Reined In

2024-08-16
Die Presse
Why's our monitor labelling this an incident or hazard?
The AI system (FLUX.1) was used to generate images that included shocking and potentially defamatory depictions of real individuals, which constitutes harm to communities and individuals by spreading misinformation and damaging reputations. The article states that such images were generated before restrictions were applied, indicating realized harm rather than just potential harm. Therefore, this event meets the criteria for an AI Incident because the AI system's use directly led to harm. The subsequent imposition of content restrictions is a response to this harm but does not negate the incident itself.

Elon Musk: AI Chatbot Grok Gets Restrictions

2024-08-16
Nau
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Grok) generating images that could cause harm by spreading misinformation or defamatory content, which is a recognized form of harm to communities and individuals. However, the article focuses on the imposition of restrictions to prevent such harms and the potential risks rather than describing realized harm. Therefore, it fits the definition of an AI Hazard, as the AI system's use could plausibly lead to harm, but no direct or indirect harm has been reported as having occurred yet. The lawsuits mentioned are related to business conflicts and do not constitute AI incidents themselves.

Musk's AI Chatbot Gets Reined In

2024-08-16
wallstreet:online
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system used for image generation and discusses societal and legal issues arising from its use, including potential misinformation and copyright concerns. However, it does not report any actual harm or incident caused by the AI system. The legal actions and advertiser withdrawals are responses to perceived risks rather than evidence of realized harm. Thus, the event does not meet the criteria for an AI Incident or AI Hazard but fits the definition of Complementary Information, as it provides updates and context on AI system deployment and related governance issues.

Musk's AI Chatbot Gets Reined In

2024-08-16
finanzen.ch
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Grok) used to generate images, including problematic content that could influence public opinion and cause reputational harm. The AI system's outputs have led to direct concerns about misinformation and manipulation, which are harms to communities. The restrictions imposed indicate recognition of these harms. Therefore, this qualifies as an AI Incident due to realized harm linked to the AI system's use.

Elon Musk's AI Chatbot Grok Gets Reined In

2024-08-16
finanzen.at
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system used for image generation, indicating AI system involvement. However, no direct or indirect harm resulting from the AI system's use is described; the concerns about misinformation and copyright infringement are potential risks rather than realized harms. Musk's lawsuits relate to advertising withdrawals, which are a business dispute rather than an AI-related harm. The main focus is on societal and governance responses and the broader context of AI-generated content risks. Hence, the event fits the definition of Complementary Information rather than an Incident or Hazard.

X (Twitter): Musk's AI Chatbot Gets Reined In

2024-08-16
Teltarif
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Grok chatbot using AI image generation) and discusses the potential for harm through misleading or harmful images, which could influence public opinion or violate rights. However, the article does not report any realized harm or incidents caused by the AI system, only the potential for such harm and the platform's mitigation measures. Therefore, this is best classified as Complementary Information, as it provides updates on governance and risk management related to AI-generated content, rather than reporting an AI Incident or AI Hazard.

Musk's AI Chatbot Gets Reined In

2024-08-16
Südtirol News
Why's our monitor labelling this an incident or hazard?
The AI system (Grok's image generation model) is explicitly involved, and its use initially allowed generation of harmful content (shocking images of politicians and celebrities). However, the article states that restrictions were implemented to prevent such outputs, indicating mitigation rather than ongoing harm. There is no report of actual harm occurring, only concerns about potential influence on public opinion and legal issues. The article also discusses broader societal and legal responses, including lawsuits and advertiser reactions. Since no direct or indirect harm has materialized as per the article, and the focus is on mitigation and governance, the classification is Complementary Information.

Taylor Swift in Lingerie, Obama Using Drugs? Musk's Image AI Gets Reined In

2024-08-16
Donaukurier
Why's our monitor labelling this an incident or hazard?
The AI system (Grok) is explicitly mentioned as generating images of real people in harmful or misleading contexts, which can influence public opinion and cause reputational harm. The article states that such images were generated without restrictions initially, indicating the AI's use directly led to these harms. The concerns about election interference and misinformation are consistent with harm to communities. The subsequent imposition of restrictions is a response to these harms but does not negate the fact that the AI system's use has already caused an incident. Hence, this is an AI Incident rather than a hazard or complementary information.

Elon Musk Backs Down: AI Chatbot Grok Gets Reined In

2024-08-16
Der Landbote
Why's our monitor labelling this an incident or hazard?
The AI system (FLUX.1) was used to generate images of real people in harmful contexts (drug use, weapons), which can mislead and manipulate public opinion, especially in a sensitive political period. This is a direct link to harm to communities and potential violation of rights to truthful information. The event reports that such images were generated before restrictions were applied, indicating realized harm rather than just potential. Therefore, this is an AI Incident. The involvement of the AI system in producing harmful content and the resulting societal risks meet the criteria for an AI Incident. The later restrictions are a response and do not change the classification.

Drug Use and Weapons: Musk's AI Chatbot Gets Restricted

2024-08-16
RGA - Remscheider General Anzeiger
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Grok chatbot using FLUX.1) that initially allowed generation of potentially harmful images but was then restricted to prevent such outputs. While there is concern about misinformation and influence on public opinion (harm to communities), the article does not describe any realized harm or incidents caused by the AI outputs. The focus is on the potential for harm and the implementation of content restrictions to mitigate risks. Therefore, this qualifies as an AI Hazard, as the AI system's use could plausibly lead to harm (e.g., misinformation, reputational damage) but no incident has yet occurred according to the article.

Musk's AI Chatbot Gets Reined In - m&k

2024-08-17
m&k
Why's our monitor labelling this an incident or hazard?
The AI system (Grok) is explicitly involved as it generates images from text prompts, which is an AI application. The initial generation of shocking images involving real persons could plausibly lead to reputational harm and misinformation, but the article does not state that such harm has materialized beyond concerns and legal disputes. The main focus is on the response to these issues, including the imposition of content restrictions and legal actions by Musk against companies allegedly boycotting advertising. Since no direct or indirect harm from the AI system's outputs is clearly reported as having occurred, and the article mainly discusses mitigation and legal responses, this event is best classified as Complementary Information.