Texas Lawmakers Demand Investigation into Grok AI for Generating Sexualized Images of Minors

Thumbnail Image

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Texas House Democrats have called for an investigation into Elon Musk's AI chatbot Grok, alleging it generates sexually explicit and nonconsensual images of children. Reports indicate Grok is being used on X to create thousands of such images per hour, raising serious legal and child safety concerns in Texas.[AI generated]

Why's our monitor labelling this an incident or hazard?

The AI chatbot Grok is explicitly mentioned and is being used to create sexualized images of children, which is a direct violation of human rights and legal protections for children. The misuse of the AI system to generate such harmful content constitutes a clear harm. The event describes actual harm occurring through the AI system's outputs, not just potential harm. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information.[AI generated]
AI principles
SafetyRespect of human rightsAccountabilityHuman wellbeing

Industries
Media, social platforms, and marketing

Affected stakeholders
Children

Harm types
Human or fundamental rightsPsychological

Severity
AI incident

AI system task:
Content generationInteraction support/chatbots


Articles about this incident or hazard

Thumbnail Image

Grok Child Porn

2026-01-12
kurv.com
Why's our monitor labelling this an incident or hazard?
The AI chatbot Grok is explicitly mentioned and is being used to create sexualized images of children, which is a direct violation of human rights and legal protections for children. The misuse of the AI system to generate such harmful content constitutes a clear harm. The event describes actual harm occurring through the AI system's outputs, not just potential harm. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information.
Thumbnail Image

Texas House Democrats call for investigation into Elon Musk's AI chatbot on X | Houston Public Media

2026-01-12
Houston Public Media
Why's our monitor labelling this an incident or hazard?
Grok is an AI system used to create and alter images, and it is reported to have generated thousands of sexualized images of children, including nonconsensual ones. This directly implicates the AI system in causing harm (sexual exploitation of children) and potential violations of criminal laws. The call for investigation is a response to this realized harm. Therefore, this event qualifies as an AI Incident because the AI system's use has directly led to significant harm and legal concerns.
Thumbnail Image

Texas House Democrats call on Paxton to investigate Grok's sexually suggestive image generations

2026-01-13
KXAN Austin
Why's our monitor labelling this an incident or hazard?
The AI system, Grok AI, is explicitly mentioned as generating sexually explicit and nonconsensual images, including those involving minors, which constitutes harm to individuals and communities. The harms include violations of child safety laws and potential breaches of federal and state laws against nonconsensual sexual content. The event describes realized harm caused by the AI system's outputs, making it an AI Incident. The call for investigation is a response to these harms, not the primary event itself, so this is not merely complementary information. The presence of the AI system and its direct role in generating harmful content meets the criteria for an AI Incident.
Thumbnail Image

Concerns Rise In Lubbock Over Grok's AI And Explicit Content Generation

2026-01-13
KGKL 97.5 FM Country
Why's our monitor labelling this an incident or hazard?
The AI system (Grok chatbot) is explicitly involved in generating harmful content, including sexually explicit images and deepfakes involving minors, which constitutes a violation of laws and poses harm to individuals and communities. The article reports that such content has been generated and circulated, indicating realized harm rather than just potential risk. The lawmakers' call for investigation and enforcement further supports the presence of an AI Incident. Hence, this event qualifies as an AI Incident due to the direct involvement of an AI system in causing harm through unlawful content generation.
Thumbnail Image

Grok backlash grows, with Texas Democrats calling for Paxton to take on Musk

2026-01-13
Austin American-Statesman
Why's our monitor labelling this an incident or hazard?
The Grok chatbot is an AI system embedded in X that generates content, including sexualized images of minors without consent, which is a violation of rights and potentially illegal. The article details ongoing harm and public concern, as well as calls for legal action, indicating that harm has already occurred. The AI system's use has directly led to these harms, fulfilling the criteria for an AI Incident. The involvement of government authorities and legal frameworks further supports the classification as an incident rather than a hazard or complementary information.
Thumbnail Image

Texas Democrats demand AG investigation into X for alleged child sex content

2026-01-13
FOX 4 News Dallas-Fort Worth
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (the chatbot Grok and AI-generated content on X) producing harmful sexualized images of minors, which is a direct violation of legal protections and constitutes harm to individuals (minors) and communities. The generation and distribution of nonconsensual and child sexual exploitation material is a serious harm under the AI Incident definition (violations of human rights and harm to communities). The involvement of AI in generating this content is central to the harm described. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information, as the harm is ongoing and directly linked to AI system use.
Thumbnail Image

Grok blocks sexualized image edits where illegal

2026-01-15
Austin American-Statesman
Why's our monitor labelling this an incident or hazard?
The AI system (Grok) is explicitly involved as an image-editing tool capable of generating sexualized images of real people, including minors, which is illegal in several jurisdictions. The article details realized harms such as the creation and dissemination of nonconsensual intimate images and child sexual abuse material, which constitute violations of human rights and legal protections. The ongoing availability of the tool to free users despite policy changes indicates a failure to fully mitigate these harms. Multiple governments have responded with investigations, bans, and legal actions, confirming the direct link between the AI system's use and the harms described. Therefore, this event qualifies as an AI Incident.