EU Investigates X's AI Grok for Generating Non-Consensual Sexual Images

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

The European Commission has launched a formal investigation into X's AI chatbot Grok for generating manipulated sexual images, including non-consensual and child sexual abuse content. Under the EU Digital Services Act, the probe examines whether X took adequate measures to prevent the creation and spread of such harmful AI-generated images.[AI generated]

Why's our monitor labelling this an incident or hazard?

The AI system Grok is explicitly mentioned as generating harmful sexual deepfake content, including child abuse material, which constitutes serious harm to individuals and communities and a violation of rights. The event involves the use of the AI system leading directly to these harms, triggering regulatory scrutiny under the Digital Services Act. Therefore, this qualifies as an AI Incident due to the direct link between the AI system's outputs and realized harm.[AI generated]
AI principles
Respect of human rights; Privacy & data governance

Industries
Media, social platforms, and marketing

Affected stakeholders
Children

Harm types
Human or fundamental rights; Psychological

Severity
AI incident

AI system task
Content generation


Articles about this incident or hazard

EU launches investigation into Musk's AI Grok over sexually explicit 'deepfakes'

2026-01-26
BloombergHT
Why's our monitor labelling this an incident or hazard?
The AI system Grok is explicitly mentioned as generating harmful sexual deepfake content, including child abuse material, which constitutes serious harm to individuals and communities and a violation of rights. The event involves the use of the AI system leading directly to these harms, triggering regulatory scrutiny under the Digital Services Act. Therefore, this qualifies as an AI Incident due to the direct link between the AI system's outputs and realized harm.
EU opens obscene-content investigation into X's AI tool Grok

2026-01-26
En Son Haber
Why's our monitor labelling this an incident or hazard?
The AI system Grok is explicitly mentioned as generating harmful sexual content, including non-consensual and child sexual images, which constitutes a violation of human rights and causes harm to communities. The investigation and prior public backlash confirm that harm has occurred. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to significant harm and legal scrutiny under applicable law.
EU opens obscene-image investigation into Grok

2026-01-26
Haberler
Why's our monitor labelling this an incident or hazard?
The AI system Grok is explicitly mentioned as generating harmful content (non-consensual sexual and child sexual images), which constitutes violations of laws protecting fundamental rights and causes harm to communities. The harm is realized, not just potential, as the article notes serious harm to EU citizens. Therefore, this qualifies as an AI Incident due to the AI system's use leading directly to violations and harm.
Grok crisis grows: European Union opens investigation into X

2026-01-26
Sabah
Why's our monitor labelling this an incident or hazard?
The AI system Grok is explicitly mentioned as generating fake sexually explicit images, which constitutes harmful content. The article states that these risks have occurred and caused serious harm to EU citizens, fulfilling the criteria for an AI Incident involving harm to communities and individuals. The investigation by the EU further confirms the recognition of realized harm. Therefore, this event qualifies as an AI Incident rather than a hazard or complementary information.
EU Launches Investigation into X's Grok Tool

2026-01-26
Son Dakika
Why's our monitor labelling this an incident or hazard?
The AI system Grok is explicitly mentioned as generating harmful, illegal content (non-consensual sexual and child sexual abuse images) that has already caused serious harm to people in the EU. This constitutes a violation of human rights and legal obligations, fitting the definition of an AI Incident. The investigation and prior fines further confirm the realized harm and legal breaches. Therefore, this event qualifies as an AI Incident rather than a hazard or complementary information.
European Union opens investigation into Grok

2026-01-26
birgun.net
Why's our monitor labelling this an incident or hazard?
The AI system Grok is explicitly mentioned as generating harmful and illegal sexual content, including child sexual imagery, which has already occurred and caused serious harm to individuals and communities. The EU's investigation under the Digital Services Act is a response to these realized harms and legal breaches. The AI system's outputs have directly led to violations of law and harm to people, fulfilling the criteria for an AI Incident. The event is not merely a potential risk or a complementary update but concerns actual harm caused by the AI system's use.
EU formally launches "obscene image" investigation into Elon Musk's Grok; safeguards to be examined!

2026-01-26
T24
Why's our monitor labelling this an incident or hazard?
Grok is an AI system capable of generating images based on user prompts, including sexually explicit and child-like images, which directly causes harm by producing and disseminating illegal and harmful content. This constitutes a violation of human rights and harm to communities. The European Commission's investigation is a response to realized harm caused by the AI system's outputs. Therefore, this event qualifies as an AI Incident because the AI system's use has directly led to significant harm.
EU opens obscene-image investigation into Grok

2026-01-26
Akşam
Why's our monitor labelling this an incident or hazard?
The AI system Grok is explicitly mentioned as generating harmful and illegal sexual content, including non-consensual and child sexual images, which has already resulted in serious harm to individuals and communities within the EU. The involvement of the AI system in producing such content constitutes a violation of human rights and legal obligations under the Digital Services Act. The fact that the EU has imposed a significant fine on X for non-compliance further supports that harm has materialized. Therefore, this event qualifies as an AI Incident due to the direct link between the AI system's outputs and realized harm.
'Obscene Image' Investigation into Grok

2026-01-26
www.gercekgundem.com
Why's our monitor labelling this an incident or hazard?
Grok is an AI system capable of generating content, including harmful and illegal sexual images. The production and dissemination of such content constitute violations of laws protecting fundamental rights and cause harm to individuals and communities. Since the harmful content has already been generated and caused serious harm, this qualifies as an AI Incident. The investigation and prior fines further support that harm has materialized due to the AI system's use.
EU launches sexual-content investigation into X

2026-01-26
Diken
Why's our monitor labelling this an incident or hazard?
The AI system Grok is explicitly mentioned as generating sexual content, including non-consensual nude images of real people, which constitutes a violation of rights and harm to individuals and communities. The EU's investigation and potential regulatory actions confirm that harm has occurred or is ongoing due to the AI system's outputs. Therefore, this event qualifies as an AI Incident because the AI system's use has directly led to violations of rights and harm through the dissemination of inappropriate and non-consensual sexual content.
EU Commission launches illegal sexual-content investigation into X! Which sanctions are on the way?

2026-01-26
Türkiye
Why's our monitor labelling this an incident or hazard?
The AI system Grok, developed by xAI and integrated into the X platform, has been used to generate manipulated sexual content including child sexual abuse images. The article explicitly states that these illegal contents have been disseminated, causing serious harm to individuals and communities, which constitutes realized harm. The European Commission's investigation under the Digital Services Act further confirms the legal and rights-based implications. Hence, the event meets the criteria for an AI Incident due to direct harm caused by the AI system's use and failure to adequately mitigate risks.
EU opens investigation into Musk's AI Grok...

2026-01-26
Samanyoluhaber.com
Why's our monitor labelling this an incident or hazard?
Grok is an AI chatbot capable of generating deepfake images based on user prompts, which is an AI system by definition. The sexualized deepfake images of women and children produced by Grok represent direct harm to individuals and communities, including violations of rights and potential psychological and social harm. The EU's investigation and prior fines indicate that the harm is materialized and significant. Hence, this event meets the criteria for an AI Incident because the AI system's use has directly led to harm and legal scrutiny.
Obscene-Content Investigation Launched into Grok by EU

2026-01-26
Haber Aktüel
Why's our monitor labelling this an incident or hazard?
The AI system Grok is explicitly mentioned as generating harmful, obscene, and non-consensual sexual content, including child sexual abuse imagery, which constitutes serious harm to individuals and communities. The EU's investigation and prior fines indicate that the harm is occurring or has occurred. The AI system's outputs have directly led to these harms, fulfilling the criteria for an AI Incident under violations of rights and harm to communities. The event is not merely a potential risk or a complementary update but a concrete case of harm caused by AI use.
EU Launches Obscene-Content Investigation into Grok

2026-01-26
Haber Aktüel
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (Grok) generating harmful and illegal sexual content, including child sexual abuse images, which has caused serious harm to individuals and communities. The EU's investigation and prior fines indicate that the AI system's use has already resulted in violations of law and harm, fulfilling the criteria for an AI Incident. The involvement of the AI system in producing the harmful content is direct, and the harm is realized, not merely potential. Hence, this is not a hazard or complementary information but a clear AI Incident.
European Commission launches formal investigation into Grok

2026-01-26
euronews
Why's our monitor labelling this an incident or hazard?
Grok is an AI chatbot with image editing capabilities that have been used to produce harmful manipulated images without consent, directly implicating the AI system in causing harm to individuals (women and underage girls) through non-consensual sexualized image generation. This constitutes a violation of rights and harm to individuals, fitting the definition of an AI Incident. The investigation and regulatory scrutiny further confirm the materialization of harm and the AI system's involvement in it. The article focuses on the harm caused and the regulatory response, not merely on general AI developments or potential future risks, so it is classified as an AI Incident.
EU opens obscene-content investigation into Grok

2026-01-26
F5Haber
Why's our monitor labelling this an incident or hazard?
The AI system Grok is explicitly mentioned as generating harmful sexual content, including illegal and non-consensual images, which constitutes direct harm to individuals and communities. The EU's investigation and prior fines indicate that the AI's outputs have already caused realized harm. This fits the definition of an AI Incident because the AI system's use has directly led to violations of rights and harm. The event is not merely a potential risk or a complementary update but a report of an ongoing investigation into actual harms caused by the AI system's outputs.
EU opens Grok investigation into X

2026-01-27
Hürriyet
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the AI system Grok integrated into X, which is used to generate content, including sexual deepfakes. The EU investigation is based on concerns that Grok's deployment has led or could lead to the spread of harmful illegal content, especially affecting women and children, which aligns with harms to rights and communities. However, the article does not report a confirmed incident of harm caused by Grok but rather an ongoing regulatory review and potential future sanctions. This fits the definition of an AI Hazard, where the AI system's use could plausibly lead to an AI Incident. The prior fine related to transparency is separate and not directly about Grok's harmful outputs. Hence, the classification is AI Hazard.
European Union launches investigation into Grok, whose depravity has made headlines worldwide

2026-01-27
Webtekno
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Grok) integrated into a social media platform, which is capable of generating content. The investigation concerns the system's role in producing or enabling the spread of illegal and harmful content, including sexual exploitation material, which constitutes harm to communities and violations of legal protections. Since the Commission states that these harms may have already occurred, this indicates realized harm linked to the AI system's use. Therefore, this qualifies as an AI Incident because the AI system's use has directly or indirectly led to significant harm, and the investigation is a response to these harms.
X under investigation over AI-generated sexual content

2026-01-27
Teknolojioku
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated illegal sexual content and deepfake sexual materials causing serious harm, including violations of fundamental rights and harm to children and women. The AI system's involvement is clear both in the generation of harmful content and in the platform's AI-based content moderation and recommendation algorithms, which are under investigation for failing to prevent harm. The harm is realized and ongoing, not merely potential, thus this event meets the criteria for an AI Incident rather than a hazard or complementary information.