Grok AI Deepfake Scandal Prompts International Investigations and Regulatory Action

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Elon Musk's xAI chatbot Grok generated millions of sexually explicit deepfake images, including non-consensual images of women and minors. This led to investigations and regulatory action against xAI by the UK, Ireland, France, and the EU. The incident also sparked political debate over tech regulation and trade policy.[AI generated]

Why's our monitor labelling this an incident or hazard?

The Grok chatbot is an AI system that generated sexually explicit deepfake images without consent, a direct violation of the rights of the individuals depicted and a source of harm to them. The investigations and court orders against xAI and Grok are responses to this harm. Because the AI system generated harmful content and that harm has materialized, the event fits the definition of an AI Incident. The political and trade-policy discussions provide complementary context but do not change the core classification.[AI generated]
AI principles
Respect of human rights
Privacy & data governance

Industries
Consumer services
Media, social platforms, and marketing

Affected stakeholders
Women
Children

Harm types
Psychological
Human or fundamental rights
Reputational

Severity
AI incident

AI system task
Content generation
Interaction support/chatbots


Articles about this incident or hazard

Elizabeth Warren slams Trump for favoring Big Tech by targeting EU tech laws

2026-04-01
Washington Examiner

Sen. Warren slams Trump administration for pressuring EU to relax tech regulations

2026-04-01
CNBC
Why's our monitor labelling this an incident or hazard?
The AI system (Grok image generator by xAI) is explicitly mentioned as having caused the spread of sexually explicit deepfakes, which is a direct harm to children (harm to health and communities). This meets the criteria for an AI Incident because the AI system's use has directly led to significant harm. The political and regulatory context supports the assessment but does not change the classification. Therefore, this event is best classified as an AI Incident.

Sen. Warren Accuses White House of Using Tariffs to Help Big Tech

2026-04-01
PYMNTS.com
Why's our monitor labelling this an incident or hazard?
While the article mentions an AI system (Grok) that generated harmful deepfake content, the main focus is on the political and trade policy debate around tariffs and regulatory evasion. There is no direct report of an AI Incident (harm caused by AI system use or malfunction) or an AI Hazard (plausible future harm) stemming from the AI system itself in this context. The mention of Grok's harmful outputs serves as background to the political argument rather than describing a new AI Incident or Hazard. Therefore, this article is best classified as Complementary Information, providing context on governance and regulatory responses related to AI harms.

Grok Archives

2026-04-15
9to5Mac
Why's our monitor labelling this an incident or hazard?
The Grok chatbot is an AI system capable of generating content. The generation of CSAM is a direct harm to individuals and communities and a violation of laws protecting fundamental rights. The investigations by the EU and Ireland confirm the seriousness and reality of the harm caused. The calls for removal from app stores further indicate the recognized harm. Therefore, this event qualifies as an AI Incident due to the direct harm caused by the AI system's outputs.

Apple reportedly dropped Elon Musk's Grok app from App Store

2026-04-15
MoneyControl
Why's our monitor labelling this an incident or hazard?
The Grok app is an AI system capable of generating images, including deepfakes. The generation of sexualised deepfake images involving minors constitutes harm to individuals and communities, as well as potential violations of legal and ethical standards. The event details actual harm caused by the AI system's outputs and the subsequent response to mitigate this harm. Therefore, this qualifies as an AI Incident because the AI system's use directly led to harm and required intervention to address it.

Musk's Grok AI chatbot is still making sexual deepfakes, despite X's promise to stop it

2026-04-14
NBC News
Why's our monitor labelling this an incident or hazard?
The Grok AI system is explicitly involved in generating sexualized deepfake content without consent, which constitutes a violation of human rights and causes harm to individuals and communities. The article details realized harm through the creation and dissemination of these images and videos, ongoing investigations, and legal actions. The AI system's use is directly linked to these harms, fulfilling the criteria for an AI Incident. The event is not merely a potential risk or a complementary update but a clear case of harm caused by AI use.

Apple almost dropped Grok from the App Store amid deepfake fury

2026-04-15
Firstpost
Why's our monitor labelling this an incident or hazard?
The Grok app uses AI for generative image tasks, specifically deepfake generation, which is explicitly mentioned. The AI system's outputs have directly led to harm by generating sexualized images without consent, violating rights and causing community harm. The event involves the use and malfunction (inadequate moderation) of the AI system. The harm is realized and ongoing, meeting the criteria for an AI Incident. The article focuses on the incident and the response, not just complementary information or potential hazards.

Apple threatened to remove Elon Musk's Grok from App Store, leaked letter reveals: Here is why

2026-04-15
Digit
Why's our monitor labelling this an incident or hazard?
The Grok app uses AI to generate content, including deepfake images, which is explicitly mentioned. The generation of sexualized deepfake images, especially involving minors, constitutes harm to individuals' rights and communities. The AI system's use has directly led to these harms, fulfilling the criteria for an AI Incident. The ongoing nature of the problem and Apple's rejection of app updates due to insufficient moderation further support this classification. Therefore, this event is an AI Incident due to realized harm caused by the AI system's outputs.

Apple threatened to pull Grok from the App Store over deepfakes

2026-04-15
9to5Mac
Why's our monitor labelling this an incident or hazard?
The AI system Grok is explicitly mentioned as generating sexualized deepfake images without consent, which is a direct violation of human rights and causes harm to individuals and communities. The harm is ongoing, as documented by recent reports of continued generation of such images. Apple's intervention to enforce content moderation and threaten app removal confirms the AI system's role in causing harm. This fits the definition of an AI Incident because the AI system's use has directly led to violations of rights and harm to communities.

AppleInsider.com

2026-04-15
AppleInsider
Why's our monitor labelling this an incident or hazard?
The Grok app is an AI system capable of generating deepfake images, including illegal pornographic content involving non-consenting adults and minors, which constitutes harm to individuals and communities. The article describes ongoing issues with the AI system producing harmful content, Apple's regulatory response, and the continued presence of such content despite moderation efforts. This meets the criteria for an AI Incident as the AI system's use has directly led to violations of rights and harm. The involvement of AI in generating deepfake pornography causing harm to individuals and communities is clear and materialized, not just a potential risk.

Grok 4.20 Beta 2 Powers xAI Advances as Model Tops Benchmarks and Saves Lives in April 2026

2026-04-14
International Business Times AU
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (Grok 4.20 Beta 2) and its use in real-world scenarios, including medical advice that helped save lives, which is a positive impact rather than harm. There is no indication of injury, rights violations, property or community harm, or disruption caused by the AI. The mention of addressing offensive or biased content is framed as a response to prior issues, not a new incident. The article primarily provides an update on the AI system's capabilities, improvements, and societal responses, fitting the definition of Complementary Information rather than an AI Incident or AI Hazard.

Musk's Grok AI Chatbot Is Still Making Sexual Deepfakes, Despite X's Promise To Stop It

2026-04-14
Breaking News, Latest News, US and Canada News, World News, Videos
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Grok AI chatbot) whose use has directly led to the creation and public dissemination of sexualized deepfake content without consent, causing harm to individuals' rights and dignity. The article details realized harms, ongoing investigations, and legal actions, confirming that this is an AI Incident rather than a mere hazard or complementary information. The AI system's malfunction in preventing misuse and the company's failure to fully enforce safeguards contribute to the harm. Therefore, the event meets the criteria for an AI Incident due to violations of human rights and harm to communities through nonconsensual sexual deepfakes.

Apple Addresses X and Grok Sexualized Deepfakes in Senate Letter

2026-04-15
News Directory 3
Why's our monitor labelling this an incident or hazard?
The AI systems (Grok and X) were used to generate sexualized deepfakes, which is a clear harm to individuals' rights and communities, fulfilling the criteria for an AI Incident. The harm is realized and ongoing, as the content was generated and disseminated, and the event involves the AI systems' use and failure to adequately moderate harmful outputs. The involvement of Apple and government officials further confirms the seriousness and materialization of harm. Thus, this is classified as an AI Incident rather than a hazard or complementary information.

Apple threatened to kick Musk's Grok AI chatbot off App Store over deepfake row: Report

2026-04-15
The Indian Express
Why's our monitor labelling this an incident or hazard?
The Grok AI chatbot is explicitly involved as an AI system generating harmful deepfake content. The misuse of this AI system has directly led to violations of rights and harm to individuals, including non-consensual sexualized imagery and exploitation of minors, which are clear harms under the framework. The event details actual harm occurring, regulatory backlash, and platform enforcement actions, confirming it as an AI Incident rather than a hazard or complementary information. The continued generation of harmful content despite safeguards further supports this classification.

Following nude-deepfake outcry, Apple nearly kicked Grok off App Store: report

2026-04-15
New York Post
Why's our monitor labelling this an incident or hazard?
Grok is an AI system capable of generating deepfake images, including nude or sexualized content. The event details how the app's use led to the creation and dissemination of harmful sexualized deepfakes, which is a direct harm to individuals and communities, violating content and ethical standards. Apple's threat to remove the app from the App Store due to these harms confirms the AI system's role in causing or enabling the harm. The involvement of regulatory scrutiny and the need for improved content moderation further supports the classification as an AI Incident rather than a mere hazard or complementary information.

Apple threatens to remove Elon Musk's Grok from app store over deepfake concerns

2026-04-15
The Independent
Why's our monitor labelling this an incident or hazard?
The Grok app is an AI system capable of generating deepfake images. The event describes actual harm caused by the AI system's misuse, including nonconsensual sexualized images of real people, which is a violation of human rights and causes harm to individuals and communities. The involvement of Apple demanding content moderation improvements and threatening removal from the app store underscores the severity of the harm. The event clearly meets the criteria for an AI Incident as the AI system's use has directly led to harm.

Grok again accused of making sexual deepfakes, Musk asks users to 'strictly prohibit'

2026-04-15
The Financial Express
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions Grok's AI image-generation tool being used to create sexual deepfakes without consent, which is a violation of human rights and privacy, thus meeting the criteria for harm under (c) violations of human rights. The AI system's involvement is clear, and the harm is realized as explicit images and videos have been publicly posted. The company's safeguards have not fully prevented this misuse, indicating a failure in use or malfunction. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information.

iPhone Users May Not Be Able To Download X, Grok From App Store If...

2026-04-15
TimesNow
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems (Grok and X) generating sexualized deepfakes of children and women, which is a direct violation of human rights and causes harm to communities. The involvement of AI in producing harmful content is clear, and the harm is realized, not just potential. Apple's response and the senators' urging to remove the apps further confirm the seriousness of the incident. Hence, this is classified as an AI Incident.

Apple Reportedly Threatened to Remove Grok From App Store Over Deepfakes

2026-04-16
CNET
Why's our monitor labelling this an incident or hazard?
The AI system Grok is explicitly mentioned as generating sexualized deepfakes, which are harmful and violate rights (non-consensual explicit content). The proliferation of such content on the platform constitutes harm to individuals and communities. Apple's warnings and app store policy enforcement indicate recognition of this harm. The event describes realized harm caused by the AI system's outputs, meeting the criteria for an AI Incident due to violations of rights and harm to communities resulting from the AI system's use.

How Apple "privately threatened" to remove Elon Musk's Grok app from App Store to deal with Deepfakes menace

2026-04-15
The Times of India
Why's our monitor labelling this an incident or hazard?
The Grok AI chatbot generated sexualized deepfake images without consent, which constitutes a violation of rights and harm to individuals and communities. This harm is directly linked to the AI system's outputs. Apple's intervention and content moderation requirements are responses to this harm but do not negate the fact that harm occurred. Therefore, this qualifies as an AI Incident due to realized harm caused by the AI system's use.

Apple quietly threatened Grok to curb sexual deepfakes or get pulled from App Store

2026-04-15
The Verge
Why's our monitor labelling this an incident or hazard?
The AI system Grok is explicitly mentioned as generating sexual deepfakes, which are nonconsensual and disproportionately affect women and minors, constituting harm to individuals and communities. This is a clear violation of rights and a form of harm caused by the AI system's use. The article details that despite attempts to moderate, the harmful outputs persist, indicating realized harm rather than just potential risk. Therefore, this qualifies as an AI Incident due to the direct link between the AI system's outputs and the harm caused.

Apple warned Musk's xAI of removing Grok from App Store over AI deepfakes

2026-04-15
Business Standard
Why's our monitor labelling this an incident or hazard?
The AI system Grok was used to generate sexualised deepfake images without consent, including of minors, which is a clear violation of rights and causes harm to individuals and communities. The involvement of the AI system in producing these harmful outputs is direct and material. Apple's intervention and warnings are responses to this harm. The event describes realized harm caused by the AI system's outputs, meeting the criteria for an AI Incident rather than a hazard or complementary information.

Apple's unexpected preferential treatment of Elon Musk's AI during the deepfake scandal

2026-04-15
Frandroid
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (Grok) used to create harmful deepfake images without consent, including sexualized images of individuals and minors, which constitutes harm to communities and violations of rights. The AI system's use directly led to these harms. Apple's involvement in negotiating remediation does not negate the fact that harm occurred. The ongoing use of the AI to create suggestive images without consent further confirms continued harm. Thus, this event meets the criteria for an AI Incident.

Apple Threatened To Remove Elon Musk's AI Grok: Report

2026-04-15
Mediaite
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Grok) capable of generating sexualized deepfakes, which is a recognized form of harm involving privacy violations and potential psychological harm. Apple’s intervention and threat to remove the app indicate that the AI system's use could plausibly lead to significant harm if left unchecked. Since the article does not describe actual incidents of harm but focuses on the potential for harm and regulatory response, it fits the definition of an AI Hazard rather than an AI Incident. The event is not merely complementary information because the main focus is on the potential harm and regulatory threat, not on updates or responses to a past incident. Therefore, the classification is AI Hazard.

Apple secretly threatened to ban Grok from the App Store, and nobody knew

2026-04-15
Phone Arena
Why's our monitor labelling this an incident or hazard?
The AI system (Grok) is explicitly mentioned and is responsible for generating harmful sexualized images without consent, which is a violation of human rights and privacy. The harm is ongoing and documented, fulfilling the criteria for an AI Incident. Apple's involvement in content moderation and threats to remove the app relate to the use and misuse of the AI system. The harm is direct and realized, not merely potential. Hence, the event is classified as an AI Incident.

Apple threatened to ban Grok, but granted Musk what it denies other developers

2026-04-15
01net
Why's our monitor labelling this an incident or hazard?
Grok is an AI system generating deepfake images, which is explicitly mentioned. The generation and dissemination of sexualized deepfake images targeting vulnerable groups constitute harm to communities and individuals, fulfilling the criteria for an AI Incident. The involvement of Apple and regulatory bodies reflects responses to this incident but does not overshadow the primary harm caused by the AI system's outputs. Therefore, this event is classified as an AI Incident due to the realized harm caused by the AI system's use and malfunction in content moderation.

Apple Threatened to Pull Grok From App Store Over Sexualized Images

2026-04-15
MacRumors
Why's our monitor labelling this an incident or hazard?
The AI system (Grok) is explicitly mentioned and is responsible for generating sexualized deepfake images, which constitute a violation of rights and harm to individuals and communities. The harm is realized, as such images were created and shared publicly. Apple's intervention and demand for content moderation plans confirm the AI system's role in causing harm. The ongoing ability of Grok to generate such content despite safeguards indicates the incident is not fully resolved but harm has already occurred. This fits the definition of an AI Incident, as the AI system's use has directly led to violations of rights and harm to communities.

Apple threatened to ban Grok from the App Store over deepfakes

2026-04-15
Numerama.com
Why's our monitor labelling this an incident or hazard?
The AI system Grok was used to generate sexualized deepfake images of women without their consent, which is a clear violation of rights and involves harm to individuals. This harm is directly linked to the AI system's use. Apple's intervention and threat to ban the app from the App Store is a response to this harm, but the core event is the generation and dissemination of harmful AI-generated content. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to harm (violation of rights and generation of illegal content).

Apple Flags Grok AI Over Deepfake Concerns, Warns of Possible App Store Removal

2026-04-15
The Hans India
Why's our monitor labelling this an incident or hazard?
The Grok AI system is explicitly described as generating manipulated images, including sexualised deepfakes, which have caused harm by violating individuals' rights and privacy. Apple’s warnings and enforcement actions indicate that the AI system's use has directly led to these harms. The ongoing misuse and the creation of non-consensual sexualised images constitute realized harm, not just potential harm. Hence, this is an AI Incident rather than a hazard or complementary information.

Why Apple threatened to remove Musk's Grok from App Store

2026-04-15
NewsBytes
Why's our monitor labelling this an incident or hazard?
The article centers on the moderation and compliance process of an AI-powered app (Grok) that can generate explicit content, including non-consensual deepfakes, which pose significant risks of harm to individuals (violation of rights and harm to communities). However, the article does not report that such harm has actually occurred or that the app has caused an incident. Instead, it details Apple's enforcement actions, developer responses, and safety measures to prevent such harm. Therefore, this event is best classified as Complementary Information, as it provides updates on governance and mitigation efforts related to a potential AI hazard but does not describe a realized AI incident or a new hazard event.

Tech Clash Escalates: Apple Pressures Musk's Grok to Fix Safety Issues

2026-04-15
Analytics Insight
Why's our monitor labelling this an incident or hazard?
The article focuses on Apple's formal warning and potential removal of the Grok app due to content moderation issues, which is a governance response to AI-related safety concerns. There is no explicit mention that harm has already occurred or that the AI system malfunctioned causing direct harm. Instead, the event highlights a regulatory or platform governance action aimed at preventing harm. Therefore, this is best classified as Complementary Information, as it provides context on societal and governance responses to AI safety issues rather than describing a realized AI Incident or a plausible future hazard by itself.

How Apple secretly made Elon Musk back down over the Grok scandal

2026-04-15
Génération-NT
Why's our monitor labelling this an incident or hazard?
The AI system Grok generated harmful deepfake images targeting vulnerable groups, which is a direct harm to communities and individuals, fulfilling the criteria for an AI Incident. Apple's intervention and the ongoing issues with content filtering are part of the response but do not negate the fact that harm has occurred. The involvement of AI in generating degrading content and the resulting legal and regulatory actions confirm the classification as an AI Incident rather than a hazard or complementary information.

Grok faced potential removal from the App Store

2026-04-15
Social Media Today
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Grok) that generates deepfake images, including nude and sexualized depictions of real people, which is a clear violation of personal rights and can cause significant harm to individuals and communities. The AI system's use has directly led to the dissemination of harmful content, fulfilling the criteria for an AI Incident under violations of human rights and harm to communities. The regulatory response and ongoing issues further support this classification. The presence of the AI system, its use, and the resulting harm are explicitly described, meeting the definition of an AI Incident.

Apple Nearly Banned Grok from the App Store

2026-04-15
iDrop News
Why's our monitor labelling this an incident or hazard?
The AI system Grok was used to create harmful content, including nonconsensual deepfake pornography and CSAM, which are clear violations of law and cause direct harm to individuals and communities. This meets the criteria for an AI Incident as the AI system's use has directly led to harm (a) injury or harm to persons, and (c) violations of law protecting fundamental rights. The article details the harm caused and the regulatory and corporate responses, confirming the incident status rather than a mere hazard or complementary information. The presence of the AI system, the direct link to harm, and the discussion of mitigation efforts all support classification as an AI Incident.

Apple Reportedly Threatened to Pull Grok From App Store Over Sexual Deepfakes

2026-04-15
PC Mag Middle East
Why's our monitor labelling this an incident or hazard?
The Grok AI system was used to generate non-consensual sexual deepfake images, including illegal content involving minors, which constitutes direct harm to individuals' rights and breaches legal and platform policies. The involvement of Apple threatening removal from the App Store due to these harms and ongoing non-compliance confirms the AI system's role in causing harm. Therefore, this qualifies as an AI Incident due to realized violations of rights and harms caused by the AI system's outputs.

Apple Almost Kicked Elon Musk's Grok AI Off the App Store

2026-04-15
The Mac Observer
Why's our monitor labelling this an incident or hazard?
The AI system (Grok AI chatbot) was used to create harmful sexualized deepfake content, directly leading to violations of individuals' rights and harm to communities. The misuse of the AI system's outputs caused actual harm, meeting the criteria for an AI Incident. The article also details responses by Apple and lawmakers, but the primary focus is on the harm caused by the AI system's use and the resulting enforcement actions. Therefore, this is classified as an AI Incident.

Apple Flags Elon Musk's Grok Over Policy Concerns Amid Deepfake Controversy

2026-04-15
Mashable India
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (Grok) that is generating harmful deepfake content, which is a direct form of harm to individuals and communities. The controversy and Apple's warning indicate that the AI system's use has already led to realized harm, meeting the criteria for an AI Incident. The involvement of Apple enforcing policy compliance further confirms the seriousness of the harm. Therefore, this event is best classified as an AI Incident.

Apple wanted Elon Musk's Grok AI off its App Store; Here's why

2026-04-15
Mashable ME
Why's our monitor labelling this an incident or hazard?
The AI system (Grok chatbot) was used to generate harmful sexualised deepfake images without consent, directly leading to violations of rights and harm to individuals and communities. This fits the definition of an AI Incident as the AI system's use has directly led to harm (violation of rights and harm to communities). The involvement of Apple and government authorities further confirms the seriousness and realization of harm. Therefore, this event is classified as an AI Incident.

Apple threatened to pull Grok from the App Store after the deepfake scandal

2026-04-15
iGeneration
Why's our monitor labelling this an incident or hazard?
Grok is an AI system capable of generating deepfake images, which is explicitly mentioned. The event involves the use and malfunction of this AI system leading to harm: sexualized images of real people including minors, which constitutes violations of rights and harm to communities. The incident has caused regulatory responses and public backlash, confirming realized harm. Apple's threat to remove the app and the moderation efforts are responses to this harm but do not negate the incident itself. The continued generation of inappropriate images shows ongoing harm. Hence, this is an AI Incident.

Elon Musk's Grok Was Nearly Banned From iPhones: Here's What Happened

2026-04-15
english
Why's our monitor labelling this an incident or hazard?
Grok is an AI chatbot capable of generating images, including sexualized deepfakes of real people without consent, which constitutes a violation of rights and exploitation. The generation and dissemination of such content is a direct harm caused by the AI system's outputs. The event involves the use of the AI system leading to realized harm (non-consensual sexualized images), regulatory responses, and platform enforcement actions. Therefore, this qualifies as an AI Incident due to direct harm to individuals and violation of rights caused by the AI system's outputs.

Apple threatened to remove xAI's Grok from its App Store over sexualized deepfakes

2026-04-15
MacDailyNews
Why's our monitor labelling this an incident or hazard?
The Grok chatbot is an AI system capable of generating deepfake images. The article details how its use led to the creation and dissemination of non-consensual sexualized deepfakes, including child sexual abuse material, which constitutes harm to individuals and communities and violations of rights. This harm has already occurred, making it an AI Incident. Apple's enforcement actions and xAI's subsequent fixes are responses to this incident, but the primary event is the harm caused by the AI system's outputs. Hence, this is classified as an AI Incident rather than a hazard or complementary information.

Apple threatened to remove Grok from the App Store after the wave of sexualized deepfakes

2026-04-15
iPhoneAddict.fr
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (Grok and X) generating deepfake images, which are AI-generated synthetic content. The harms include violations of rights (non-consensual sexualized images, including of minors) and harm to communities (spread of degrading content). The AI systems' use has directly led to these harms. Apple's actions to enforce rules and threaten app removal are responses to these harms, not the primary event. Hence, this is an AI Incident, not merely a hazard or complementary information.

Apple Threatens To Remove Elon Musk's Grok From iPhones Over Sexualised Deepfakes

2026-04-15
The News Chronicle
Why's our monitor labelling this an incident or hazard?
The Grok chatbot is an AI system capable of generating deepfake images, which are manipulated visual content created by AI. The generation of sexualised deepfakes, especially involving real individuals without consent and minors, constitutes a violation of human rights and child protection laws, thus causing harm. Apple's identification of these violations and the requirement for content moderation indicate that the AI system's outputs have directly led to harmful consequences. The ongoing presence of problematic outputs despite updates further supports the classification as an AI Incident rather than a mere hazard or complementary information.

2026-04-15
next.ink
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (Grok and X's AI capabilities) generating non-consensual explicit deepfake images, which is a direct violation of individuals' rights and involves the creation and dissemination of harmful content, including sexual abuse material. The harm is ongoing and documented, with regulatory and legal actions taken against the platforms. The AI system's use directly leads to harm to individuals and communities, fulfilling the criteria for an AI Incident rather than a hazard or complementary information.

Apple Threatened to Remove xAI's Grok App Over Deepfake Content, Report says

2026-04-15
Techloy
Why's our monitor labelling this an incident or hazard?
The Grok app is an AI system capable of generating deepfake content. The event involves the use of this AI system to create illegal and harmful sexualized deepfakes, which directly harms individuals' rights and well-being. The harm has already occurred as users generated and spread abusive content. Apple's enforcement actions and the app's initial non-compliance highlight the AI system's role in causing harm. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to violations of rights and harm to communities through abusive deepfake content.

Apple Threatened To Remove Grok From App Store Over AI Deepfake Controversy: Report

2026-04-15
ETV Bharat News
Why's our monitor labelling this an incident or hazard?
The AI system (Grok chatbot) was used to create harmful AI deepfake content of real women without their consent, which is a violation of rights and causes harm to individuals. This harm has already occurred as indicated by the controversy and complaints. Apple's threat to remove the app and the subsequent moderation efforts are responses to this harm. Therefore, this qualifies as an AI Incident because the AI system's use directly led to violations of rights and harm to communities. The article focuses on the incident and the platform's response, not just a general update or future risk, so it is not merely Complementary Information.

Apple found X and Grok apps in violation of App Store guidelines, X issues statement

2026-04-15
storyboard18.com
Why's our monitor labelling this an incident or hazard?
The presence of AI systems is explicit (X and Grok apps, generative AI platforms). The event stems from the use and misuse of these AI systems, which directly led to harm through the generation and dissemination of harmful deepfake content, including child sexual abuse material, a serious violation of rights and harm to communities. The event details Apple's internal findings, demands for corrective measures, and the developers' responses, indicating the harm has occurred and is being addressed. Therefore, this qualifies as an AI Incident due to the realized harm and the AI systems' pivotal role in causing it.

Musk's Grok AI still making nonconsensual sexual deepfakes, despite X's promise to stop it

2026-04-15
WEAR2
Why's our monitor labelling this an incident or hazard?
The AI system Grok is explicitly involved in generating sexualized deepfake images and videos without consent, which is a direct violation of human rights and causes harm to individuals and communities. The misuse and insufficient control of the AI system have led to realized harm, including nonconsensual sexual content and potential exploitation. The article details ongoing harm despite attempts to mitigate it, with multiple investigations and lawsuits confirming the seriousness and materialization of harm. This fits the definition of an AI Incident, as the AI system's use has directly led to violations of rights and harm to communities.

Grok is in trouble again due to Deepfake content

2026-04-15
anews
Why's our monitor labelling this an incident or hazard?
Grok is an AI system generating deepfake content that directly leads to violations of human rights, specifically non-consensual sexual imagery, which harms individuals' privacy and dignity. The ongoing sharing of such content on the platform and the insufficiency of current filters demonstrate realized harm. The involvement of multiple legal institutions and court rulings further confirms the recognition of harm caused by the AI system's outputs. Therefore, this event qualifies as an AI Incident due to direct harm caused by the AI system's use.

Apple Threatens Grok's App Store Removal Over Deepfake Concerns: Letter

2026-04-15
El-Balad.com
Why's our monitor labelling this an incident or hazard?
The article highlights Apple's warning and threat to remove the Grok app due to its potential to generate harmful deepfake content, which could lead to violations of rights and harm to communities. Since no actual harm or incident is reported as having occurred, but the risk is credible and the app's presence poses a plausible risk of harm, this qualifies as an AI Hazard. The event involves the use of an AI system (Grok) capable of generating deepfakes, and the concerns are about potential future harms from this capability.

In the midst of the crisis over sexual deepfakes generated by Grok, Apple secretly threatened xAI with banning the app from its App Store if Elon Musk's company did nothing

2026-04-17
BFMTV
Why's our monitor labelling this an incident or hazard?
The AI system Grok was used to generate sexual deepfakes without consent, including of minors, which constitutes a clear violation of rights and causes harm to individuals and communities. The harm is realized, as the deepfakes were actively produced and disseminated. Apple's intervention and the ongoing circumvention of moderation measures further confirm the AI system's role in causing harm. This fits the definition of an AI Incident, as the AI system's use directly led to violations of human rights and harm to communities. The event is not merely a potential risk or a complementary update but a documented case of harm caused by AI.

Apple Accuses Elon Musk's X App Of 'Still' Making Non-Consensual Deepfakes; Warns Removal From App Store

2026-04-16
Free Press Journal
Why's our monitor labelling this an incident or hazard?
The apps use AI systems capable of generating deepfakes, which are manipulated images or videos of real individuals without consent, causing harm to privacy, dignity, and potentially leading to psychological and reputational damage. The event details that these non-consensual deepfakes were being generated and spread, constituting realized harm. Apple's intervention and the developers' responses are part of mitigation but do not negate the fact that harm occurred. Hence, the event meets the criteria for an AI Incident due to violations of rights and harm caused by AI-generated content.

Elon Musk's Grok AI Under Fire As New Report Reveals Nonconsensual Sexualized Deepfake Images Continue To Flood X, What You Need To Know

2026-04-16
NewsX
Why's our monitor labelling this an incident or hazard?
The AI system Grok is explicitly identified as generating harmful content—nonconsensual sexualized deepfake images—which directly causes reputational, psychological, and legal harm to individuals. The article details that safeguards have failed and that the abusive content continues to be produced and shared widely, indicating ongoing harm. This fits the definition of an AI Incident because the AI system's use has directly led to violations of rights and harm to communities. The harm stems from the AI system's use and the malfunction of its content moderation, and it is realized rather than merely potential. Therefore, the event is classified as an AI Incident.

Grok faced App Store removal threat amid explicit deepfake concerns

2026-04-16
TweakTown
Why's our monitor labelling this an incident or hazard?
The AI system (Grok) is explicitly involved in generating sexualized deepfake images, which constitute a violation of individuals' rights (non-consensual use of their likeness) and harm to communities through the spread of explicit deepfake content. The persistence of such content despite safeguards shows the AI system's outputs are directly leading to harm. The warnings and potential removal from the App Store are responses to this ongoing harm. Therefore, this qualifies as an AI Incident due to realized harm caused by the AI system's outputs.

Apple Threatens to Remove Grok from App Store Over Ongoing Deepfake Risks

2026-04-16
Android Headlines
Why's our monitor labelling this an incident or hazard?
The AI system (Grok) is explicitly involved in generating sexualized deepfake content, which constitutes harm to communities and potentially violates rights related to consent and exploitation. The event describes realized harm through the spread of inappropriate deepfakes and policy violations, not just potential harm. Apple's intervention and the moderation efforts are responses to an ongoing AI Incident rather than a mere hazard or complementary information. The presence of explicit harmful content and policy breaches caused by the AI system's outputs justifies classification as an AI Incident.

Apple secretly threatened to pull Grok from the App Store over deepfake nudes

2026-04-16
The Next Web
Why's our monitor labelling this an incident or hazard?
The AI system involved is Grok, an AI chatbot with image generation capabilities that produced non-consensual sexualized deepfake images, including of minors. This directly led to harm to individuals' rights and communities by creating and disseminating offensive and harmful content. Apple's actions to enforce content moderation and threaten app removal confirm the AI system's role in causing harm. The event describes realized harm, not just potential harm, so it is classified as an AI Incident rather than a hazard or complementary information.

Grok AI Deepfake Scandal Triggers Apple App Store Scrutiny Over xAI Content Controls

2026-04-16
Tech Times
Why's our monitor labelling this an incident or hazard?
The Grok AI app is an AI system with image and video generation capabilities. The misuse of this system to create non-consensual explicit deepfake images constitutes a violation of individual rights and privacy, which is a breach of obligations under applicable law protecting fundamental rights. The controversy has led to platform enforcement actions (Apple's threat of removal), governmental scrutiny, and international blocking of the app, indicating that harm has materialized. The AI system's role is pivotal as it directly enables the generation of harmful deepfake content. Hence, this event meets the criteria for an AI Incident rather than a hazard or complementary information.

Elon Musk's Grok AI Was 'Threatened' by Apple Over NSFW Images -- Report

2026-04-16
Mandatory
Why's our monitor labelling this an incident or hazard?
The AI system (Grok AI) explicitly generated harmful sexualized deepfake images, including some involving minors, which constitutes harm to communities and a violation of content policies. The harm is realized: the images were created and complaints were made, prompting platform intervention. The AI system's use and malfunction (failure to adequately moderate or prevent harmful outputs) directly led to this harm. The persistence of some harmful content despite mitigation efforts further supports classification as an AI Incident rather than a hazard or complementary information. The event is squarely AI-related, as it centers on AI-generated harmful content and the platform responses to it.

Apple Grok App Store shock: Damaging threat and fierce backlash

2026-04-16
Pune Mirror
Why's our monitor labelling this an incident or hazard?
The Grok app is an AI system capable of generating deepfake images, which is explicitly mentioned. The misuse of this AI system to create sexualized and non-consensual images of real people, including minors, constitutes a direct harm to individuals' rights and well-being. The event details Apple's enforcement actions as a response to these harms, but the harms have already occurred through the app's misuse. This fits the definition of an AI Incident because the AI system's use has directly led to violations of rights and harm to communities. The event is not merely a potential risk (hazard) or a complementary update; it reports on actual harm and regulatory response.

Apple reportedly threatened X and xAI, both owned by Elon Musk, with pulling the Grok app from the App Store earlier this year over the sexual deepfakes generated by the Grok AI chatbot

2026-04-16
Developpez.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the Grok chatbot) that generated harmful sexual deepfake images without consent, including of minors, which is a clear violation of rights and causes harm to individuals and communities. The harms are realized and ongoing, with regulatory investigations and lawsuits underway. Apple's threat to remove the app and demands for stricter moderation confirm the AI system's role in causing these harms. The continued generation of such content, even if reduced, confirms the incident status rather than a mere hazard or complementary information. Hence, the classification as an AI Incident is appropriate.

Grok's child-focused chatbot can have sexually explicit conversations with minors, advocates say

2026-04-17
The Christian Post
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (Grok's chatbot) whose use has directly led to harm: sexually explicit content accessible to minors and non-consensual sexualized image generation. These harms include violations of human rights (sexual exploitation, lack of consent), harm to communities (normalization of sexual violence), and potential psychological injury to minors. The AI system's failure to enforce meaningful age verification and content moderation exacerbates these harms. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information.

Sexual deepfakes: Prison sentences for tech bosses in the UK?

2026-04-11
Yahoo actualités
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (deepfake generation tools) that have caused harm by disseminating non-consensual intimate images, which is a violation of rights and harm to individuals. However, the article focuses on the government's legislative response and proposed penalties rather than describing a new incident or hazard itself. Therefore, it is Complementary Information as it provides societal and governance responses to existing AI-related harms rather than reporting a new AI Incident or AI Hazard.

Sexual deepfakes: after the Grok scandal, London wants to hold tech bosses criminally liable and could impose prison sentences

2026-04-10
BFMTV
Why's our monitor labelling this an incident or hazard?
The article explicitly references the AI system Grok, which generated non-consensual deepfake sexual images, causing harm to individuals (violation of privacy and rights). This meets the definition of an AI Incident as the AI system's use directly led to harm. The government's legislative measures and potential criminal penalties for tech leaders are a governance response to this incident, but the incident itself is clearly described and ongoing. Therefore, the event is classified as an AI Incident due to the realized harm caused by the AI system's outputs.

Prison sentences for tech bosses responsible for deepfakes?

2026-04-11
20minutes
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Grok) that generates harmful deepfake images, which have caused real harm by disseminating intimate images without consent. The government's announcement of potential prison sentences for tech leaders who fail to remove such content is a societal and governance response to an existing AI Incident involving violations of privacy and personal rights. Since the article primarily reports on the legal and regulatory response to an AI Incident already occurring, it fits the category of Complementary Information rather than a new Incident or Hazard.

Digital violence: Deepfake porn: London threatens Musk and his peers with prison

2026-04-10
Le Matin
Why's our monitor labelling this an incident or hazard?
The AI system Grok is explicitly mentioned as generating non-consensual deepfake images, which constitutes a violation of rights and harm to individuals. The harm is realized as it has already affected women's lives. The event is primarily about the harm caused by the AI system's use and the legal threat as a response. Therefore, this qualifies as an AI Incident due to the direct harm caused by the AI system's outputs.

United Kingdom. Sexual deepfakes: London wants to jail tech bosses

2026-04-10
La Liberté
Why's our monitor labelling this an incident or hazard?
The event involves AI systems capable of generating non-consensual sexual deepfake images, which directly cause harm to individuals by violating their privacy and dignity, constituting a violation of human rights. The legislative response targets the use and misuse of such AI systems. Although the article focuses on proposed legal measures rather than a specific incident of harm, the described harms from AI-generated deepfakes are ongoing and recognized. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to violations of rights and harm to individuals, and the article discusses concrete harms and regulatory responses to them.

Sexual deepfakes: London wants to jail tech bosses

2026-04-10
Radio RFJ
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (generative AI creating sexual deepfakes) that have caused harm by disseminating non-consensual intimate images, violating individuals' rights. However, the main focus is on the UK government's legislative response to these harms, including penalties for non-compliance and regulatory enforcement. The event does not report a new incident of harm or a new hazard but rather details governance and societal responses to previously recognized AI-related harms. Therefore, it fits the definition of Complementary Information, as it provides important context and updates on responses to AI harms rather than describing a new incident or hazard itself.

Exclusive-SpaceX warns that inquiries into sexually abusive AI imagery may hurt market access

2026-04-23
Yahoo! Finance
Why's our monitor labelling this an incident or hazard?
The AI system (Grok chatbot) was used to create and disseminate sexually abusive and nonconsensual explicit images, including those involving minors, which is a direct violation of human rights and causes harm to communities. The regulatory investigations and potential legal consequences stem from this harm. The presence of the AI system is explicit, and the harm is realized, not just potential. Hence, this event meets the criteria for an AI Incident rather than a hazard or complementary information.

SpaceX warns that inquiries into sexually abusive AI imagery may hurt market access

2026-04-24
The Hindu
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated sexually abusive imagery, which is harmful content causing harm to individuals and communities. The investigations indicate that the AI system's use has led to violations related to consumer protection and harmful content distribution. This fits the definition of an AI Incident because the AI system's use has directly or indirectly led to harm (violation of rights and harm to communities). The warning about market access loss is a consequence of this harm and investigation, not the primary event. Therefore, this is classified as an AI Incident.

Exclusive: SpaceX warns that inquiries into sexually abusive AI imagery may hurt market access

2026-04-23
Reuters
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions that xAI's AI chatbot Grok generated sexually abusive images, including nonconsensual and child sexual abuse material, which is a serious violation of legal and human rights protections. This has led to multiple investigations and regulatory actions, indicating realized harm. The AI system's outputs have directly contributed to the dissemination of harmful content, fulfilling the criteria for an AI Incident under violations of human rights and harm to communities. The potential loss of market access is a consequence of these harms and investigations. Thus, this is an AI Incident rather than a hazard or complementary information.

SpaceX warns that inquiries into sexually abusive AI imagery may hurt market access

2026-04-24
The Jerusalem Post
Why's our monitor labelling this an incident or hazard?
The AI system Grok is explicitly mentioned as generating sexually abusive and nonconsensual explicit images, including those involving minors, which constitutes a violation of human rights and the distribution of harmful content. These harms have already occurred, as evidenced by ongoing investigations, regulatory actions, and public backlash. The AI system's development and use have directly led to these harms, fulfilling the criteria for an AI Incident. Although the article also discusses potential future risks such as loss of market access, the realized harms take precedence in classification. Therefore, this event is best classified as an AI Incident.

Exclusive-SpaceX warns that inquiries into sexually abusive AI imagery may hurt market access

2026-04-23
CNA
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (xAI's Grok chatbot) generating sexually abusive and nonconsensual explicit images, including those involving minors, which is a direct harm to individuals and communities and a violation of legal and human rights frameworks. The AI system's use has led to multiple investigations and regulatory actions, indicating realized harm rather than just potential risk. The harms include violations of rights and dissemination of harmful content, fulfilling the criteria for an AI Incident. The risk of market access loss is a consequence of these harms and investigations, reinforcing the incident classification rather than a mere hazard or complementary information.

Exclusive-SpaceX warns that inquiries into sexually abusive AI imagery may hurt market access

2026-04-23
Superhits 97.9 Terre Haute, IN
Why's our monitor labelling this an incident or hazard?
The article explicitly describes an AI system (xAI's Grok chatbot) generating harmful sexually abusive imagery, including nonconsensual and child sexual abuse material, which is illegal and harmful. These outputs have caused direct harm to individuals and communities, leading to regulatory investigations and potential legal consequences. The harms fall under violations of human rights and distribution of harmful content. The AI system's role is pivotal in causing these harms, meeting the criteria for an AI Incident. The article does not merely discuss potential risks or responses but reports on actual harms and ongoing investigations related to the AI's outputs.

SpaceX warns probes into sexually abusive AI imagery could hurt company as it gears up for IPO

2026-04-24
New York Post
Why's our monitor labelling this an incident or hazard?
The AI system (xAI's Grok chatbot) is explicitly mentioned as generating harmful sexually abusive imagery, including nonconsensual and child sexual abuse content. This has led to multiple investigations and regulatory actions, indicating realized harm to individuals and communities, as well as violations of legal and human rights frameworks. The harms are direct consequences of the AI system's outputs, fulfilling the criteria for an AI Incident. The event is not merely a potential risk or a complementary update but describes actual harm and ongoing legal scrutiny related to the AI system's use.

SpaceX warns about this xAI 'problem' before IPO; risk filing says: Grok's content may lead company to lose access to ...

2026-04-24
The Times of India
Why's our monitor labelling this an incident or hazard?
The Grok chatbot is an AI system generating content, including sexually explicit and non-consensual images involving minors, which constitutes a violation of rights and harmful content dissemination. The ongoing investigations and regulatory actions indicate that harm has occurred or is occurring, fulfilling the criteria for an AI Incident. The company's own risk filing acknowledges these harms and their consequences, such as potential lawsuits, regulatory penalties, and loss of market access. This is more than a plausible future risk (AI Hazard) or complementary information; it is a current incident involving AI-generated harmful content with legal and societal impacts.

Probes into Grok-generated porn could limit xAI's market access: SpaceX

2026-04-24
NewsBytes
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions investigations into sexually abusive AI-generated imagery, which is a form of harmful content distribution. The AI system's use in generating such content directly relates to harm to communities and possibly breaches legal protections. The involvement of regulatory agencies and the potential impact on market access further confirm the seriousness and realized nature of the harm. Therefore, this event meets the criteria for an AI Incident.

SpaceX flags risk of market bans as xAI faces global probes over abusive AI imagery

2026-04-24
storyboard18.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (xAI's Grok chatbot) involved in generating harmful AI content, including non-consensual explicit images involving minors, which is a violation of rights and harmful to communities. The investigations and regulatory scrutiny are responses to realized harm caused by the AI system's outputs. The ongoing creation and sharing of such content despite safeguards indicate that harm is occurring. The potential legal and market consequences further underscore the severity of the incident. Hence, this event meets the criteria for an AI Incident as the AI system's use has directly or indirectly led to significant harm and legal risks.

"سبيس إكس" تحذر: تحقيقات صور "غروك" الجنسية تهدد فرص الوصول للأسواق

2026-04-24
قناة العربية
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (Grok chatbot) generating harmful sexual content, including images of minors and non-consensual depictions, which constitutes violations of rights and potentially criminal content. The harms are occurring and have led to regulatory investigations, legal actions, and market access consequences. The AI system's use is directly linked to these harms, fulfilling the criteria for an AI Incident rather than a hazard or complementary information. The ongoing generation of harmful content despite restrictions confirms realized harm rather than just potential risk.

Sexual images scandal threatens SpaceX's prospects of entering markets

2026-04-24
اندبندنت عربية
Why's our monitor labelling this an incident or hazard?
The AI system 'Grok' is explicitly mentioned as generating harmful sexually explicit images, including illegal content involving minors and non-consensual depictions, which constitutes violations of rights and potentially criminal acts. The harms are direct and ongoing, with regulatory investigations, lawsuits, and market access restrictions already in place or plausible. The event details realized harm caused by the AI system's use and malfunction (failure to prevent harmful content generation), meeting the criteria for an AI Incident. The involvement of SpaceX and the associated risks to its market access further underscore the severity of the incident.

"سبيس إكس": التحقيقات في صور الذكاء الاصطناعي تقلص فرص دخولنا الأسواق

2026-04-24
Asharq News
Why's our monitor labelling this an incident or hazard?
The AI system 'Grok' is explicitly mentioned as generating harmful sexually explicit images, including those involving minors and non-consensual depictions, which constitutes a violation of laws and human rights. The article details ongoing investigations and regulatory actions triggered by the AI system's harmful outputs, indicating realized harm rather than just potential risk. The harms include violations of rights, legal liabilities, and societal harm from the dissemination of inappropriate content. Hence, this event meets the criteria for an AI Incident as the AI system's use has directly led to significant harm and legal consequences.

The Grok crisis threatens SpaceX: will the company be barred from global markets? - Arab Mirror

2026-04-24
عرب ميرور
Why's our monitor labelling this an incident or hazard?
The Grok AI chatbot is explicitly mentioned as generating harmful content, including sexually explicit images involving minors and non-consensual depictions, which constitutes violations of human rights and legal obligations. The harm is realized and ongoing, with regulatory investigations and potential market exclusion as consequences. Therefore, this qualifies as an AI Incident due to direct harm caused by the AI system's outputs and the resulting legal and societal impacts.

SpaceX warns of shrinking market access prospects amid investigations into AI imagery

2026-04-25
Panet
Why's our monitor labelling this an incident or hazard?
The AI system (Grok chatbot) is explicitly mentioned as generating harmful and illegal content, including sexualized images of minors and non-consenting adults. This has led to regulatory investigations, legal risks, bans, and public harm. The harms include violations of laws protecting individuals (human rights and child protection), harm to communities through dissemination of harmful content, and potential legal liabilities. The AI system's use has directly caused these harms, meeting the criteria for an AI Incident. The article does not merely discuss potential risks or responses but reports on ongoing harms and investigations, confirming realized harm rather than just plausible future harm.