xAI's Grok Imagine Sparks Controversy with Adult Content Generation Feature

Thumbnail Image

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Elon Musk's xAI launched 'Grok Imagine,' an AI tool capable of generating adult images and videos, including deepfakes of celebrities. The 'Spicy Mode' option has led to concerns over exposure to minors, rights violations, and legal risks, as safeguards appear insufficient and harmful content is already being produced and shared.[AI generated]

Why's our monitor labelling this an incident or hazard?

The AI system (Grok Imagine) explicitly generates adult content, including sexual images and videos, which can be accessed by minors, as noted by the National Center on Sexual Exploitation. This direct involvement of AI in producing potentially harmful content that violates protections for minors and conflicts with legal regulatory frameworks constitutes a violation of human rights and harm to communities. The controversy and societal concerns indicate realized harm rather than just potential risk. Hence, the event meets the criteria for an AI Incident rather than a hazard or complementary information.[AI generated]
AI principles
SafetyRespect of human rightsPrivacy & data governanceAccountabilityHuman wellbeing

Industries
Media, social platforms, and marketing

Affected stakeholders
ChildrenGeneral public

Harm types
PsychologicalReputationalHuman or fundamental rights

Severity
AI incident

AI system task:
Content generation

In other databases

Articles about this incident or hazard

Thumbnail Image

머스크의 xAI, 성인 콘텐츠 생성 기능으로 선정성 논란

2025-08-05
기술로 세상을 바꾸는 사람들의 놀이터
Why's our monitor labelling this an incident or hazard?
The AI system (Grok Imagine) explicitly generates adult content, including sexual images and videos, which can be accessed by minors, as noted by the National Center on Sexual Exploitation. This direct involvement of AI in producing potentially harmful content that violates protections for minors and conflicts with legal regulatory frameworks constitutes a violation of human rights and harm to communities. The controversy and societal concerns indicate realized harm rather than just potential risk. Hence, the event meets the criteria for an AI Incident rather than a hazard or complementary information.
Thumbnail Image

옷 벗는 유명인 영상 몇 초만에 '뚝딱'...머스크 AI, '성인 모드' 논란

2025-08-06
아시아경제
Why's our monitor labelling this an incident or hazard?
The AI system (Grok Imagine) is explicitly involved in generating realistic deepfake videos of celebrities and children, including adult content, which directly leads to violations of rights and potential legal harms. The article provides evidence of actual generation of such content, indicating realized harm rather than just potential risk. Therefore, this event qualifies as an AI Incident due to the direct involvement of the AI system in causing harm through misuse and lack of adequate safeguards.
Thumbnail Image

'월 4만원에 야동 무제한?' 머스크 xAI 성인 콘텐츠 구독서비스 논란

2025-08-06
문화일보
Why's our monitor labelling this an incident or hazard?
The AI system (xAI's Grok Imagine) is explicitly described as generating adult content, which could plausibly lead to harm such as exposure of minors to inappropriate material and community-level harms. Although some sharing of generated content is mentioned, there is no clear evidence of realized harm such as injury, rights violations, or legal breaches reported. Therefore, this situation fits the definition of an AI Hazard, as the AI system's use could plausibly lead to harms but no confirmed incident has occurred yet.
Thumbnail Image

xAI 그록, '성인 버전' 이미지 생성 옵션 출시...선정성 논란 | 중앙일보

2025-08-06
중앙일보
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (xAI's Grok) that generates adult images and videos, including deepfake content of celebrities, which is explicitly described as occurring. This use of AI has directly led to harms such as the creation and potential dissemination of harmful sexual content, raising issues of rights violations and societal harm. Therefore, it meets the criteria for an AI Incident due to realized harm caused by the AI system's use.
Thumbnail Image

몇초만에 '성인용 영상' 뚝딱?...머스크가 만든 '매운맛 AI' 뭐길래

2025-08-06
�����
Why's our monitor labelling this an incident or hazard?
The AI system (Grok) is explicitly described as generating adult content, including nudity and sexual imagery, which is a direct use of AI for producing potentially harmful content. The article reports ongoing controversy and demands for age restrictions, indicating that harm (exposure of minors to adult content) is occurring or highly likely. This constitutes a violation of rights and harm to communities, fulfilling the criteria for an AI Incident. The involvement is through the AI system's use, and the harm is direct and ongoing, not merely potential. Hence, the classification is AI Incident.
Thumbnail Image

Taylor Swift nackt: Elon Musks KI wird zum einfachen Tool für Fake-Videos

2025-08-07
Braunschweiger Zeitung
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (Grok chatbot) used to generate deepfake videos of a real individual in a pornographic context without consent. This is a direct use of AI leading to harm, specifically violations of rights and potential legal breaches. The harm is realized as the videos were created and disseminated, not just a potential risk. Hence, it meets the criteria for an AI Incident due to direct harm caused by the AI system's use.
Thumbnail Image

Musk KI Grok: Spicy-Modus sorgt mit Taylor Swift-Deepfake für Aufsehen

2025-08-08
Bild
Why's our monitor labelling this an incident or hazard?
Grok is an AI system capable of generating deepfake videos and images, including explicit content of celebrities, which can cause harm to individuals' rights and reputations (violation of intellectual property and personal rights). The AI also produces racist and antisemitic statements, causing harm to communities and violating human rights. These harms are realized and ongoing, not merely potential. Hence, this qualifies as an AI Incident due to direct harm caused by the AI system's outputs and its misuse or lack of adequate safeguards.
Thumbnail Image

KI Grok erzeugt anstößige Bilder von Prominenten

2025-08-06
newsORF.at
Why's our monitor labelling this an incident or hazard?
The AI system (Grok) is explicitly involved in generating harmful deepfake images and videos of celebrities without consent, including nudity and sexualized content. This directly leads to violations of rights and harm to the individuals depicted, fulfilling the criteria for an AI Incident under violations of human rights or breach of legal protections. The article reports actual generation of such content, not just potential misuse, confirming realized harm. The AI system's use is central to the harm, and the event is not merely a warning or complementary information but a report of an incident causing harm.
Thumbnail Image

Spicy mode : l'IA Grok s'offre un mode pour générer des vidéos sexuelles

2025-08-05
CommentCaMarche
Why's our monitor labelling this an incident or hazard?
The AI system Grok Imagine is explicitly described as generating sexual content, including potentially non-consensual and harmful images. Although no specific harm is confirmed as having occurred, the article outlines credible risks of violations of rights and community harm, such as non-consensual sexual imagery and possible generation of illegal content. The AI's design and deployment with minimal safeguards ('spicy mode') plausibly lead to significant harms. Hence, this qualifies as an AI Hazard rather than an AI Incident, as the harms are potential but not confirmed in this report.
Thumbnail Image

Grok : le "spicy mode" de Imagine fait déjà une victime, et c'est encore sur Taylor Swift que ça tombe

2025-08-06
Clubic.com
Why's our monitor labelling this an incident or hazard?
The AI system Imagine was used to generate erotic content featuring a real person, Taylor Swift, without her consent. This constitutes a violation of personal rights and potentially intellectual property rights, as well as harm to the individual's reputation and privacy. The AI system's use directly led to this harm, meeting the criteria for an AI Incident under violations of human rights or breach of obligations intended to protect fundamental rights. Therefore, this event qualifies as an AI Incident.
Thumbnail Image

Grok Imagine: Musk-KI erstellt freizügige Deepfake-Videos

2025-08-07
computerbild.de
Why's our monitor labelling this an incident or hazard?
The AI system 'Grok Imagine' is explicitly mentioned as generating deepfake videos, which are AI-generated synthetic media. The creation and dissemination of non-consensual, sexually explicit deepfake videos of Taylor Swift represent a violation of personal rights and cause reputational and emotional harm. This harm is realized, not just potential, as the videos have been created and published. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use directly leads to violations of rights and harm to communities.
Thumbnail Image

"Spicy mode": l'IA Grok propose à ses utilisateurs payants un mode pour générer des vidéos de personnes partiellement dénudées

2025-08-05
BFMTV
Why's our monitor labelling this an incident or hazard?
Grok's AI system is explicitly involved as it generates videos based on user prompts, including sexualized and partially nude content. The system's use has directly led to harms such as sexual harassment, violation of consent, and potential exploitation, as evidenced by the legal case mentioned. The generation of sexualized videos without consent constitutes a violation of human rights and harm to individuals and communities. Therefore, this event qualifies as an AI Incident due to realized harm caused by the AI system's use.
Thumbnail Image

Le " Spicy Mode " de Grok génère des deepfakes sans contrôle

2025-08-06
Frandroid
Why's our monitor labelling this an incident or hazard?
The AI system (Grok's video generator) is explicitly involved in generating deepfake videos with suggestive or partial nudity of real individuals without consent. This use directly leads to harm by violating privacy and potentially other rights, fulfilling the criteria for an AI Incident under violations of human rights and legal obligations. The article also references existing and upcoming legal frameworks addressing such harms, reinforcing the realized nature of the harm. Hence, the event is classified as an AI Incident rather than a hazard or complementary information.
Thumbnail Image

Les précautions éthiques ne sont vraiment pas la priorité du nouveau Grok et Taylor Swift en a fait les frais

2025-08-06
Le Huffington Post
Why's our monitor labelling this an incident or hazard?
Grok Imagine is an AI system capable of generating video content, including deepfakes. The system's lack of effective content moderation or ethical safeguards has directly led to the generation of sexualized videos of Taylor Swift without consent, constituting a violation of privacy and potentially breaching laws against non-consensual intimate image distribution. This harm to individual rights and potential legal violations qualifies the event as an AI Incident under the framework, as the AI system's use has directly led to harm related to human rights and legal obligations.
Thumbnail Image

L'IA Grok peut générer des images et vidéos pour adultes, évidemment ça dégénère déjà

2025-08-06
PhonAndroid
Why's our monitor labelling this an incident or hazard?
Grok Imagine is an AI system capable of generating images and videos, including explicit content and deepfakes of real people. The article explicitly states that the AI is used to create videos of celebrities in pornographic scenarios, which constitutes a violation of rights (privacy, image rights) and can cause harm to individuals and communities. The AI's development and use have directly led to this harm, fulfilling the criteria for an AI Incident. The harm is realized, not just potential, as such content is already being generated and shared.
Thumbnail Image

Grok Imagine : sans surprise, le nouveau générateur de vidéo par IA pose problême

2025-08-06
Les Numériques
Why's our monitor labelling this an incident or hazard?
Grok Imagine is an AI system capable of generating video content, including deepfakes. The creation and dissemination of sexualized deepfake videos of celebrities without consent directly violates personal rights and can cause significant harm to the individuals depicted and to societal norms. The article reports that such harmful content has already been generated shortly after the tool's release, indicating realized harm. Therefore, this qualifies as an AI Incident due to violations of rights and harm to communities caused by the AI system's use.
Thumbnail Image

Taylor Swift: Musks KI Grok erstellt Nackt-Video - und das ohne Aufforderung

2025-08-06
watson.ch/
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (Grok Imagine) that generates deepfake videos, including explicit and non-consensual content of real individuals, which constitutes a violation of personal rights and can cause significant harm to the individuals depicted. The AI system's use has directly led to the creation and distribution of harmful content, fulfilling the criteria for an AI Incident under violations of human rights and harm to communities. The absence of effective content moderation and age verification further compounds the risk and actual harm. Therefore, this event qualifies as an AI Incident.
Thumbnail Image

Grok: l'hypertrucage dénude des vedettes comme Taylor Swift

2025-08-06
TVA Nouvelles
Why's our monitor labelling this an incident or hazard?
Grok Imagine is an AI system capable of generating videos from text prompts, including explicit and non-consensual deepfake content of celebrities. The article details how the AI system is used to create sexually explicit videos of Taylor Swift without consent, which constitutes a violation of rights and harms the individuals depicted. The AI system's use directly leads to harm through the creation and potential dissemination of such content. This fits the definition of an AI Incident as it involves violations of human rights and harm to communities caused by the AI system's outputs.
Thumbnail Image

Taylor Swift: Musks KI erstellt Nackt-Video - und das ohne Aufforderung

2025-08-06
watson.de/
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (Grok Imagine) that generates deepfake videos, including non-consensual nude depictions of a real person (Taylor Swift). This directly leads to violations of personal rights and privacy, which fall under violations of human rights and fundamental rights. The harm is realized, as the deepfakes are actively generated and distributed without consent, and the system fails to prevent such misuse despite policies against it. Hence, this is an AI Incident rather than a hazard or complementary information.
Thumbnail Image

Grok Imagine : l'IA d'Elon Musk s'offre un mode -18 ans

2025-08-05
Journal du Geek
Why's our monitor labelling this an incident or hazard?
The article clearly involves an AI system (Grok Imagine) used for generative content creation, including explicit content. The AI's use and design (allowing NSFW content with weak safeguards) plausibly lead to harms such as violations of rights (non-consensual explicit images) and harm to communities (ethical and social harms). Since the article focuses on the launch and potential risks rather than confirmed incidents of harm, this fits the definition of an AI Hazard rather than an AI Incident. The concerns about regulatory and ethical responses further support the classification as a hazard with plausible future harm.
Thumbnail Image

Grok zeigt jetzt Taylor Swift nackt in KI-Sex-Videos | Heute.at

2025-08-07
Heute.at
Why's our monitor labelling this an incident or hazard?
The AI system Grok Imagine is explicitly used to create sexualized deepfake videos of Taylor Swift, a real person, without her consent. This constitutes a violation of her rights, including privacy and potentially intellectual property rights. The creation and potential spread of such non-consensual deepfake content is a recognized harm under the framework, as it can cause reputational damage, emotional distress, and broader societal harm. The article describes the actual generation of such content, not just a potential risk, so this is an AI Incident rather than a hazard or complementary information.
Thumbnail Image

Grok Imagine dénude en un rien de temps

2025-08-06
Le Journal de Québec
Why's our monitor labelling this an incident or hazard?
Grok Imagine is an AI system capable of generating videos from text prompts, including explicit and non-consensual depictions of a real person, which constitutes a violation of rights and harm to the individual and community. The article reports actual generation of such content, not just potential risk, indicating realized harm. Therefore, this qualifies as an AI Incident due to violations of rights and harm to communities through non-consensual deepfake content.
Thumbnail Image

Musk-KI erstellt mit wenigen Klicks Fake-Softpornos

2025-08-07
Nau
Why's our monitor labelling this an incident or hazard?
The AI system 'xAI' is explicitly mentioned as generating fake pornographic videos, including deepfakes of real people like Taylor Swift without their permission. This constitutes a violation of human rights, specifically the right to privacy and protection from unauthorized use of one's likeness, which is a breach of applicable laws protecting fundamental and intellectual property rights. The harm is realized as the content is being generated and disseminated, causing reputational and emotional harm to individuals and potentially broader community harm. Therefore, this qualifies as an AI Incident due to direct involvement of the AI system in causing harm through its outputs.
Thumbnail Image

Deep-Fakes: Musks KI erzeugt Nacktvideos von Taylor Swift

2025-08-07
Nau
Why's our monitor labelling this an incident or hazard?
The AI system is explicitly involved in generating harmful deepfake content that violates personal rights and ethical standards. The harm is realized as explicit non-consensual imagery has been produced and disseminated. This fits the definition of an AI Incident because the AI's use directly led to violations of human rights and harm to the individual and community. The lax safeguards and policy circumvention further underline the AI system's role in enabling this harm.
Thumbnail Image

Elon Musks KI generiert ungefragt Nacktvideo von Taylor Swift

2025-08-06
futurezone.at
Why's our monitor labelling this an incident or hazard?
Grok Imagine is an AI system capable of generating photorealistic images and videos, including deepfakes. The system generated non-consensual sexualized content of a real person, Taylor Swift, which is a violation of personal rights and can cause reputational and psychological harm. The AI system's use directly led to this harm, fulfilling the criteria for an AI Incident under violations of human rights and harm to communities. The absence of effective content moderation or safeguards further supports this classification.
Thumbnail Image

Grok Imagine : le chatbot d'Elon Musk gagne un mode -18... qui peut générer des deepfakes de célébrités

2025-08-06
MacGeneration
Why's our monitor labelling this an incident or hazard?
Grok Imagine is an AI system that generates deepfake videos of celebrities, including explicit content, with few restrictions. The creation and sharing of such non-consensual sexualized deepfakes directly harms the individuals depicted (violation of rights and dignity) and contributes to broader societal harm by normalizing deepfake abuse. The article reports actual generation of such content, not just a potential risk, thus qualifying as an AI Incident under the framework due to realized harm linked to the AI system's use.
Thumbnail Image

Taylor Swift nackt - Elon Musks Grok-KI pfeift auf alle Grenzen - CURVED.de

2025-08-06
CURVED
Why's our monitor labelling this an incident or hazard?
The Grok app is an AI system generating videos from images, including explicit content of real people without consent, which violates human rights and legal protections. The article reports that such content has already been created and shared, indicating realized harm. This meets the criteria for an AI Incident due to violations of rights and harm to communities. The lack of effective safeguards and age verification further exacerbates the issue.
Thumbnail Image

Der Chatbot von X erstellt ungefragt Nackt-Bilder von Taylor Swift - wann kommen endlich KI-Regelungen, die Frauen schützen?

2025-08-06
GLAMOUR
Why's our monitor labelling this an incident or hazard?
The AI system (Grok chatbot) is explicitly involved in generating deepfake pornographic content without user prompting, directly causing harm by violating Taylor Swift's image and personality rights and contributing to the spread of harmful sexualized content. The article details realized harm through the creation and dissemination of such content, which fits the definition of an AI Incident due to violations of human rights and harm to communities. The AI's malfunction or lack of adequate safety measures is a contributing factor.
Thumbnail Image

Elon Musks KI erstellt ungefragt freizügige Deep-Fake-Motive von Taylor Swift

2025-08-06
Gießener Allgemeine
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Grok) that autonomously generates explicit deepfake content of real individuals without their consent or explicit user instruction. This constitutes a violation of personal rights and can cause harm to the individuals depicted, fulfilling the criteria for an AI Incident under violations of human rights or breach of obligations protecting fundamental rights. The harm is realized as the content has been generated and disseminated, not merely a potential risk. Therefore, this event is classified as an AI Incident.
Thumbnail Image

Grok zeigt Taylor Swift nackt in KI-Videos

2025-08-07
L'essentiel
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (Grok Imagine) used to generate videos depicting a celebrity in sexualized and nude scenarios, which constitutes a violation of personal rights and privacy. The AI system's use directly leads to harm through the creation and dissemination of non-consensual deepfake content. This fits the definition of an AI Incident as it involves violations of human rights and harm to communities. The lack of safeguards to prevent such content further supports the classification as an incident rather than a hazard or complementary information.
Thumbnail Image

Grok Imagine : l'IA qui crée des vidéos... et accepte les demandes érotiques

2025-08-06
LEBIGDATA.FR
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Grok Imagine) that generates videos from text, including explicit content, which fits the definition of an AI system. The article discusses potential risks related to deepfakes and sexualized AI content, which could plausibly lead to harms such as violations of rights or harm to communities. However, no actual harm or incident is described; the concerns are prospective. Therefore, this qualifies as an AI Hazard, reflecting plausible future harm from the AI system's capabilities and features.
Thumbnail Image

Kontroverse um pornografische Deepfakes von Taylor Swift durch KI

2025-08-09
IT BOLTWISE® x Artificial Intelligence
Why's our monitor labelling this an incident or hazard?
The AI system Grok Imagine is explicitly mentioned as generating pornographic deepfakes without consent, which is a clear violation of rights and potentially illegal. The harm is realized as the content has been created and widely viewed, impacting the individual and raising ethical and legal concerns. This fits the definition of an AI Incident because the AI system's use has directly led to harm in terms of violation of rights and harm to communities. The article also discusses regulatory responses, but the primary focus is on the incident itself.
Thumbnail Image

Grok Imagine: KI-Tool ohne Schutzmaßnahmen für Deepfakes

2025-08-09
IT BOLTWISE® x Artificial Intelligence
Why's our monitor labelling this an incident or hazard?
Grok Imagine is an AI system generating videos, including deepfakes, without adequate protective measures. The article reports that users have created videos of celebrities in compromising scenarios, which can harm reputations and violate rights. The lack of age verification also exposes minors to harmful content. These outcomes constitute direct harm to individuals and communities, including violations of rights and reputational damage. Therefore, this event qualifies as an AI Incident due to the realized harms caused by the AI system's use and insufficient safeguards.