French Prosecutors Investigate AI-Generated Deepfake Scandal Involving Elon Musk's Companies

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

French prosecutors are investigating Elon Musk's companies X and xAI after Grok, xAI's AI chatbot, generated sexualized deepfake images, including some depicting minors. Authorities suspect the controversy may have been orchestrated to artificially inflate the companies' valuations ahead of a planned 2026 stock listing. Musk publicly insulted the prosecutors in response.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly mentions the use of an AI system ('Grok') to generate deepfake sexual videos, which are being investigated for their role in artificially inflating company value and potentially involving illegal content related to children. This involves direct use of AI leading to violations of law and human rights, fitting the definition of an AI Incident. The investigation and legal scrutiny confirm that harm has occurred or is ongoing, rather than just a potential risk, thus it is not merely a hazard or complementary information.[AI generated]
AI principles
Respect of human rights; Safety

Industries
Media, social platforms, and marketing

Affected stakeholders
Children; Business

Harm types
Human or fundamental rights; Reputational

Severity
AI incident

AI system task
Content generation


Articles about this incident or hazard


French judiciary suspects Musk of attempting to artificially inflate the value of "X"

2026-03-21
France 24
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of an AI system ('Grok') to generate deepfake sexual videos, which are being investigated for their role in artificially inflating company value and potentially involving illegal content related to children. This involves direct use of AI leading to violations of law and human rights, fitting the definition of an AI Incident. The investigation and legal scrutiny confirm that harm has occurred or is ongoing, rather than just a potential risk, thus it is not merely a hazard or complementary information.

Musk, responding to France's accusations that he inflated X's value: "They suffer from a mental disability" | Al Khaleej

2026-03-22
صحيفة الخليج
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of an AI system (the 'Grok' AI program) to generate deepfake sexual videos, which is a direct use of AI technology. The harms include potential violations of laws related to child exploitation and the artificial inflation of company value through deceptive means, which constitute breaches of legal and fundamental rights. The investigation and legal scrutiny indicate that these harms have materialized or are strongly suspected to have materialized. Hence, this qualifies as an AI Incident due to the direct involvement of AI in causing or enabling these harms.

French judicial suspicion: Elon Musk tried to artificially raise the value of "X"

2026-03-21
LBCI Lebanon
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI-based deepfake technology ('Grok' chatbot) to create fake sexual videos, including those involving minors, which constitutes a violation of human rights and legal obligations. The use of AI in this context has directly led to harms related to exploitation and illegal content, and the investigation is focused on these harms. Therefore, this event qualifies as an AI Incident because the AI system's use has directly or indirectly led to significant harm, including violations of rights and potentially criminal activities.

France suspects Musk of attempting to inflate the value of "X" through sexual videos

2026-03-22
اندبندنت عربية
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI deepfake technology (an AI system) to produce fake sexual videos, including potentially illegal content involving children. This use has led to legal investigations and concerns about harm to individuals (sexual exploitation, child protection issues) and communities (spread of harmful content). The AI system's use is directly linked to these harms and legal violations, fulfilling the criteria for an AI Incident. The investigation and legal scrutiny further confirm the seriousness and realized nature of the harm.

Elon Musk on the French prosecutors: "mentally retarded". What upset the billionaire

2026-03-22
Digi24
Why's our monitor labelling this an incident or hazard?
The AI system Grok is explicitly mentioned as generating harmful sexualized deepfake images, including those involving minors, which constitutes direct harm to individuals and communities. The French prosecutors' investigation and the reported misuse of the AI system to spread Holocaust denial and sexualized content demonstrate violations of rights and harm to communities. These harms have materialized, not just potential risks, qualifying this as an AI Incident rather than a hazard or complementary information. The involvement of the AI system in producing and disseminating harmful content is central to the event.

"They are mentally retarded": Elon Musk insults the Paris prosecutors who claim the Grok scandal was engineered to raise the value of X and xAI - HotNews.ro

2026-03-22
HotNews.ro
Why's our monitor labelling this an incident or hazard?
The AI system Grok is explicitly mentioned as generating sexualized deepfake images, including those of minors, which constitutes harm to individuals and communities. The scandal has led to legal investigations and accusations of misuse of AI for manipulative purposes, including potential violations of law and rights. The involvement of the AI system in generating harmful content and the resulting legal and societal impacts meet the criteria for an AI Incident. The event is not merely a potential risk or a complementary update but a realized harm scenario involving AI misuse and its consequences.

Elon Musk launches a virulent attack on the Paris prosecutors: "They are mentally retarded" - Știrile ProTV

2026-03-22
Stirile ProTV
Why's our monitor labelling this an incident or hazard?
The chatbot Grok is an AI system capable of generating and editing images based on user prompts. The reported generation of millions of sexualized deepfake images, including those resembling children, represents harm to communities and violations of rights (non-consensual image generation). The investigation by multiple authorities and the reported misuse of the AI system confirm that the harms have materialized. Hence, this is an AI Incident due to the direct link between the AI system's use and realized harms.

"Mentally retarded". Elon Musk insults the French prosecutors over an investigation into the companies X and xAI

2026-03-22
Gândul
Why's our monitor labelling this an incident or hazard?
The chatbot Grok is an AI system generating deepfake images, including sexualized content and images resembling minors, which is illegal and harmful. The French prosecutors' investigations into the use of AI-generated content for manipulation and harmful dissemination demonstrate direct harm to communities and potential legal violations. The AI system's role is pivotal in producing and distributing this harmful content, fulfilling the criteria for an AI Incident. The event describes actual harm and ongoing investigations, not just potential risks or general AI-related news, so it is not an AI Hazard or Complementary Information.

Elon Musk called the French prosecutors mentally retarded

2026-03-22
Cotidianul RO
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (Grok) generating deepfake content that is allegedly used deliberately to manipulate company valuations, which constitutes a misuse of AI leading to harm. The involvement of prosecutors investigating this misuse indicates that harm has occurred or is ongoing, fulfilling the criteria for an AI Incident. The harm includes violations of legal frameworks (market manipulation) and harm to communities (through misinformation and deceptive practices).

Investors are watching how the Paris investigations affect the listing plan for the SpaceX-xAI entity

2026-03-22
Business24
Why's our monitor labelling this an incident or hazard?
The presence of an AI system (Grok, an AI bot) is explicit, and its use has directly led to harms such as the generation of sexualized deepfake images, including those depicting children, which constitutes harm to communities and violations of rights. The ongoing investigations and legal actions confirm that these harms are materialized rather than hypothetical. The article also discusses the impact on corporate governance and investor confidence, further evidencing significant harm linked to the AI system's use. Thus, the event meets the criteria for an AI Incident rather than a hazard or complementary information.

Elon Musk attacks the French prosecutors after the Grok investigation

2026-03-22
Financiarul.ro
Why's our monitor labelling this an incident or hazard?
The AI system Grok is explicitly mentioned as generating harmful sexualized deepfake images, including those depicting children, which constitutes harm to communities and violations of rights. The prosecutors' investigation and the described harms indicate that the AI system's use has directly led to these harms. The involvement of the AI system in generating non-consensual sexual content and the resulting legal scrutiny confirm this as an AI Incident rather than a hazard or complementary information. The article details realized harm, not just potential harm or responses to past events.

Elon Musk calls the French prosecutors "mentally retarded" after a referral in the United States

2026-03-22
News.ro
Why's our monitor labelling this an incident or hazard?
The AI system Grok is explicitly mentioned as generating harmful sexualized deepfake images, including those depicting children, which constitutes harm to individuals and communities. The misuse and dissemination of such content represent violations of rights and societal harm. The ongoing investigations and legal actions confirm that harm has occurred due to the AI system's use. Therefore, this event qualifies as an AI Incident because the AI system's use has directly led to significant harm.

French prosecutors suspect Musk of supporting deepfakes to grow the value of X - Svet SME

2026-03-21
www.sme.sk
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (Grok) to generate sexualized deepfake videos, which constitute a violation of rights and cause harm to communities by spreading harmful and potentially illegal content. The involvement of the AI system in producing and disseminating this content has directly led to legal investigations and concerns about harm. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to significant harm and legal consequences.

Musk takes one blow after another: prosecutors have uncovered further atrocities hanging over him

2026-03-22
www.pluska.sk
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI tool (Grok) used to spread harmful content such as Holocaust denial and sexual deepfakes, which are violations of human rights and legal obligations. The involvement of prosecutors and international legal authorities confirms the seriousness and realized harm of these actions. Therefore, this qualifies as an AI Incident due to the direct use of AI causing harm and legal violations.

French prosecutors suspect Musk of supporting deepfakes to grow the value of the X platform

2026-03-21
Hospodarske Noviny
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (Grok) used to generate deepfake videos, which are harmful content causing violations of rights and harm to communities. The AI system's use has directly led to the spread of sexualized deepfakes and Holocaust denial content, which are serious harms. The involvement of legal authorities and ongoing investigations further confirm the materialization of harm. Hence, this is classified as an AI Incident rather than a hazard or complementary information.

French prosecutors suspect Musk of supporting deepfakes to grow the value of X

2026-03-21
trend.sk
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of an AI system (Grok) to create sexualized deepfake videos, which are being spread on a social media platform (X). The suspected intent is to artificially inflate the company's value, which implies a violation of legal frameworks and potential harm to investors and the public. The involvement of prosecution and regulatory authorities further supports the seriousness of the harm. Since the AI system's use has directly or indirectly led to potential violations of law and harm to communities, this event meets the criteria for an AI Incident rather than a mere hazard or complementary information.

According to French prosecutors, Elon Musk likely supported the creation of deepfakes on the platform X

2026-03-21
.týždeň - iný pohľad na spoločnosť
Why's our monitor labelling this an incident or hazard?
The AI system Grok is explicitly mentioned as generating sexually explicit deepfakes, which are harmful content violating human rights and legal norms. The allegations that this was done to manipulate company valuation and the ongoing investigations indicate that harm has occurred or is occurring. The AI system's use directly led to the creation and dissemination of harmful content, fulfilling the criteria for an AI Incident. The involvement of multiple jurisdictions and legal authorities further supports the seriousness and realized nature of the harm.

French authorities suspect Elon Musk of supporting deepfakes on the X network

2026-03-21
Denník E
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of an AI system (Grok) to generate deepfake images and videos that are sexualized and include Holocaust denial content, which are harmful and violate legal and human rights frameworks. The AI system's outputs are being used in a way that has already caused harm or legal concerns, such as misinformation and potentially manipulative content affecting communities and rights. The investigation by multiple authorities confirms the seriousness and realized nature of the harm. Hence, this is an AI Incident rather than a hazard or complementary information.

French prosecutors suspect Musk of manipulating the deepfake controversy to artificially boost X's valuation | United Daily News

2026-03-22
UDN
Why's our monitor labelling this an incident or hazard?
The Grok AI chatbot, an AI system, generated unauthorized pornographic deepfake images, which constitutes harm to communities and possibly violations of rights. The controversy is alleged to have been deliberately manipulated to inflate company valuation, indicating misuse of the AI system. The involvement of legal authorities and the investigation into the AI system's role in spreading harmful content confirms that harm has occurred. Hence, this is an AI Incident as the AI system's use has directly led to significant harm and legal consequences.

French prosecutors suspect Musk of manipulating the deepfake controversy to artificially boost X's valuation | International | Central News Agency (CNA)

2026-03-22
Central News Agency
Why's our monitor labelling this an incident or hazard?
The AI system (Grok chatbot) generated unauthorized pornographic deepfake images, which is a direct violation of rights and causes harm to communities. The controversy is suspected to be intentionally manipulated to affect company valuation, indicating misuse of the AI system. The event involves realized harm linked to the AI system's outputs and its use in a way that breaches legal and ethical standards. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information.

French prosecutors suspect Musk of encouraging deepfakes to push up X's valuation

2026-03-22
東方網 馬來西亞東方日報
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (Grok AI chatbot) generating harmful deepfake content, including pornographic images involving minors, which constitutes direct harm to individuals and communities. The suspected deliberate encouragement of such content to manipulate company valuation indicates misuse of the AI system. The ongoing investigations into political interference and Holocaust denial content dissemination further confirm violations of rights and harm to communities. These factors meet the criteria for an AI Incident, as the AI system's use has directly led to significant harms.

Musk responds to reports that xAI may launch the computer-control AI agent "Grok Computer": coming soon

2026-03-23
k.sina.com.cn
Why's our monitor labelling this an incident or hazard?
The article mentions an AI system under development that will enable autonomous computer control through AI agents, which is a significant AI advancement. However, there is no indication that any harm has occurred or that there is a direct or indirect link to realized harm. The data collection for the project was paused previously, but no incident or hazard is described. The main focus is on the upcoming launch and the technical capabilities, without mention of risks or incidents. Therefore, this is best classified as Complementary Information, providing context and updates about AI system development without reporting an incident or hazard.

French prosecutors suspect Musk of manipulating the deepfake controversy to lift X's valuation - Europe Headlines

2026-03-23
xinouzhou.com
Why's our monitor labelling this an incident or hazard?
The Grok AI chatbot is an AI system generating content, including unauthorized and harmful deepfake images. The controversy caused by this AI-generated content has resulted in legal investigations and allegations of manipulation for financial gain. The AI system's outputs have directly caused harm through the spread of non-consensual explicit content and potentially harmful misinformation, constituting violations of rights and harm to communities. Therefore, this event qualifies as an AI Incident.

Suspected of generating child sexual abuse deepfake content, xAI is sued for damages by the US city of Baltimore - cnBeta.COM mobile edition

2026-03-25
cnBeta.COM
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (Grok) used to generate deepfake images, including illegal child sexual exploitation content, which is a direct violation of human rights and legal protections. The harm is realized and significant, involving exploitation and illegal content generation. The lawsuit and regulatory investigations confirm the AI system's role in causing these harms. Therefore, this qualifies as an AI Incident due to the direct and serious harm caused by the AI system's use.

Suspected of generating child sexual abuse deepfake content, xAI is sued for damages by the US city of Baltimore

2026-03-25
凤凰网(凤凰新媒体)
Why's our monitor labelling this an incident or hazard?
The AI system Grok is explicitly involved in generating harmful deepfake content, including illegal child sexual exploitation images, which is a direct violation of laws and causes significant harm to individuals and communities. The lawsuit and regulatory investigations confirm that the AI system's use has led to realized harm, meeting the criteria for an AI Incident. The event is not merely a potential risk or complementary information but a concrete case of harm caused by the AI system's outputs.

French prosecutor: Musk encouraged explicit deepfakes to increase the company's value

2026-03-21
IndexHR
Why's our monitor labelling this an incident or hazard?
The AI system (Grok AI) was used to generate explicit deepfake images without consent, which is a clear violation of rights and causes harm to individuals depicted and the broader community. The French prosecutors' investigation and the reported scale of generated images confirm that harm has materialized. The AI's role in producing and spreading this harmful content is pivotal, meeting the criteria for an AI Incident. The event is not merely a potential risk or complementary information but a realized harm linked to AI use.

French prosecutors suspect Musk encouraged the vile posts: "We have contacted lawyers"

2026-03-21
Jutarnji list
Why's our monitor labelling this an incident or hazard?
The Grok AI chatbot is an AI system generating sexualized deepfake images without consent, including of children, which constitutes direct harm to individuals' rights and communities. The involvement of the AI system in producing this harmful content is explicit, and the harm is realized and ongoing. The legal investigations and official complaints further confirm the materialized harm. Hence, this event meets the criteria for an AI Incident rather than a hazard or complementary information.

French prosecutors: Musk encouraged sexualized deepfakes

2026-03-21
tportal.hr
Why's our monitor labelling this an incident or hazard?
The AI system Grok is explicitly mentioned as generating sexualized deepfake images, including those of children, which is a clear harm to individuals and communities. The involvement of French prosecutors and investigations by other jurisdictions indicates that harm has materialized and is being addressed legally. Elon Musk's alleged encouragement of such content further implicates the AI system's use in causing harm. This meets the criteria for an AI Incident because the AI system's use has directly led to violations of rights and harm to communities through the creation and spread of harmful deepfake content.

Did Elon Musk encourage sexualized deepfakes? Here is what provoked outright fury and triggered the investigation

2026-03-21
Vecernji.hr
Why's our monitor labelling this an incident or hazard?
The Grok AI system is explicitly mentioned as generating sexualized deepfake images without consent, which constitutes a violation of rights and harm to individuals and communities. The event describes realized harm, not just potential harm, as millions of such images were generated and disseminated. The involvement of legal authorities and investigations further confirms the materialization of harm. Hence, this is an AI Incident rather than a hazard or complementary information.

French prosecutors suspect Musk encouraged sexualized deepfakes - Novi list

2026-03-22
Novi list
Why's our monitor labelling this an incident or hazard?
The AI system (Grok AI) was used to generate sexualized deepfake images without consent, including images of minors, which constitutes harm to individuals and communities (privacy violations, potential exploitation). The involvement of the AI system in producing harmful content is explicit, and the harm is realized, not just potential. The suspected encouragement by Musk further implicates the use of the AI system in causing harm. This meets the criteria for an AI Incident as defined, involving direct harm linked to the AI system's use.

French prosecutors suspect Musk encouraged sexualized deepfakes to increase the value of X

2026-03-21
Telegram.hr
Why's our monitor labelling this an incident or hazard?
The article explicitly describes an AI system (Grok AI) generating sexualized deepfake images without consent, which is a violation of rights and harms individuals and communities. The AI's outputs have directly caused harm, and the suspected encouragement by Musk links the AI's use to intentional misconduct. This meets the criteria for an AI Incident due to realized harm involving human rights violations and harm to communities.

Musk under investigation: did deepfakes and Grok drive up the value of X?

2026-03-21
Glas Istre HR
Why's our monitor labelling this an incident or hazard?
The AI system Grok is explicitly mentioned as generating sexualized deepfake images without consent, including of minors, which constitutes harm to individuals and communities and breaches rights. The involvement of the AI system in producing this harmful content is direct and central to the incident. The investigation and legal actions confirm that harm has occurred. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use has directly led to violations of rights and harm to communities.

HARSH: French prosecutors suspect Musk encouraged sexualized deepfakes

2026-03-21
Nacional
Why's our monitor labelling this an incident or hazard?
The Grok AI chatbot is an AI system generating deepfake images. The sexualized deepfake images, especially those involving children, represent clear harm to individuals' rights and dignity, fulfilling the criteria for an AI Incident. The event describes realized harm through the generation and spread of these images, and the legal investigations confirm the seriousness of the issue. Therefore, this is classified as an AI Incident rather than a hazard or complementary information.

Paris prosecutors suspect Musk encouraged "deepfakes" to increase the value of X

2026-03-21
Estadão
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems (deepfake technology and the AI chatbot Grok) being used to create and disseminate harmful content, including sexualized deepfakes and Holocaust denial videos. These actions constitute violations of rights and harm to communities, fulfilling the criteria for an AI Incident. The investigation and legal actions indicate that harm has already occurred or is ongoing, rather than being merely potential. Thus, the event is best classified as an AI Incident.

French justice alerts the US to possible artificial inflation of X's value

2026-03-21
Notícias ao Minuto
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (Grok) generating deepfake content, which is a clear AI system involvement. The use of this AI system is suspected to have directly led to significant harms, including violations of law related to child exploitation material and market manipulation, which are breaches of legal and ethical rights. The investigation and the sharing of information with US authorities and the SEC indicate that these harms are materialized and under legal scrutiny. Hence, the event meets the criteria for an AI Incident due to direct involvement of AI in causing harm and legal violations.

Elon Musk fuelled the controversy with fake sexual images, French justice believes

2026-03-21
JN
Why's our monitor labelling this an incident or hazard?
The Grok AI chatbot is an AI system capable of generating images based on user prompts. Its use to create non-consensual sexualized deepfake images constitutes a violation of rights and causes harm to individuals and communities. The fact that the AI generated around three million such images, including those depicting children, and that this has led to investigations by French, UK, and EU authorities, demonstrates realized harm. Therefore, this event meets the criteria for an AI Incident due to direct harm caused by the AI system's outputs.

Musk encouraged the use of AI to create deepfakes of nude women, say French prosecutors

2026-03-21
Diário do Centro do Mundo
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of an AI system (Grok) to create and disseminate deepfake sexual content, including images involving children, which is a clear violation of human rights and legal norms. The harm is realized, as evidenced by the large volume of such content and the international investigations underway. The AI system's use has directly led to harm (sexual abuse, violation of rights), fulfilling the criteria for an AI Incident. The involvement of the platform's owner in encouraging this use further supports the classification. This is not merely a potential risk or complementary information but a concrete case of AI-enabled harm.

Elon Musk is investigated over the use of deepfakes to boost the value of X and xAI

2026-03-22
Diário do Centro do Mundo
Why's our monitor labelling this an incident or hazard?
The Grok AI system is explicitly mentioned as enabling the generation of sexualized deepfake content without consent, which constitutes a violation of rights and harm to individuals depicted. Additionally, the alleged use of this AI-generated content to artificially inflate company valuations implies indirect harm through market manipulation. These harms have already occurred or are under active investigation, indicating realized harm rather than just potential risk. Therefore, this qualifies as an AI Incident due to the direct and indirect harms caused by the AI system's use.

Paris prosecutors suspect Musk encouraged "deepfakes" to increase the value of X

2026-03-21
ISTOÉ Independente
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (Grok) generating deepfake content, which is harmful and illegal in nature. The suspected encouragement by Elon Musk to produce such content for financial gain indicates misuse of the AI system. The harms include violations of human rights (e.g., sexual exploitation, Holocaust denial), potential market manipulation, and dissemination of harmful misinformation. These harms have already occurred or are ongoing, meeting the criteria for an AI Incident. The involvement of regulatory authorities and ongoing investigations further supports the classification as an AI Incident rather than a hazard or complementary information.

Paris prosecutors suspect Musk encouraged the use of deepfakes to boost the value of X

2026-03-22
O Globo
Why's our monitor labelling this an incident or hazard?
The AI system Grok is explicitly mentioned as generating deepfake content with sexual explicitness and false information, which has caused harm to individuals (non-consensual sexualized images) and communities (Holocaust denial, misinformation). The suspected encouragement by Musk to use such content to inflate company value links the AI system's use to potential legal violations and harm. The harms are realized and under investigation, meeting the criteria for an AI Incident. The event is not merely a potential risk or a complementary update but involves actual harm and legal scrutiny.

Paris prosecutors suspect Musk encouraged "deepfakes" to increase the value of X

2026-03-21
UOL notícias
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (Grok) generating sexual deepfakes, which are AI-generated synthetic content. The suspicion is that this was deliberately encouraged to artificially increase company valuation, implying potential misuse of AI for fraudulent purposes. Since the event is currently under investigation and no confirmed harm or incident has been reported, it fits the definition of an AI Hazard, where the AI system's use could plausibly lead to legal and ethical harms (market manipulation, violation of laws). There is no indication that harm has already occurred or been confirmed, so it is not an AI Incident. It is not merely complementary information because the main focus is on the suspicion of misuse and potential harm, not on responses or ecosystem context. Therefore, the classification is AI Hazard.

French justice alerts US authorities to possible artificial inflation of the value of Elon Musk's X

2026-03-21
ECO
Why's our monitor labelling this an incident or hazard?
The AI system Grok is explicitly mentioned as generating hyper-realistic sexual deepfakes, which are suspected to have been used deliberately to manipulate company valuations, a form of market manipulation and violation of financial regulations. Additionally, the investigation includes serious allegations related to illegal sexual content involving minors, which constitutes a violation of fundamental rights and laws. The AI's involvement in creating and disseminating such content directly or indirectly leads to significant harms, including legal violations and potential harm to individuals depicted or affected. The event involves the use and possible misuse of AI, leading to or associated with violations of law and rights, fitting the definition of an AI Incident rather than a hazard or complementary information.

Musk allegedly encouraged sexual 'deepfakes' to inflate the value of X, Paris prosecutors say

2026-03-22
VEJA
Why's our monitor labelling this an incident or hazard?
The AI system Grok is explicitly involved in generating harmful sexual deepfake content without consent, which is a violation of rights and causes harm to individuals and communities. The Prosecutor's Office links this to deliberate actions to manipulate company valuation, indicating misuse of the AI system. The harms described are realized and ongoing, including non-consensual sexual imagery and misinformation. Therefore, this event qualifies as an AI Incident due to direct harm caused by the AI system's use and its societal impact.

French judiciary alerts US authorities to possible artificial inflation of Elon Musk's X

2026-03-21
Executive Digest
Why's our monitor labelling this an incident or hazard?
The AI system (Grok) is explicitly mentioned as generating sexual deepfake images, which are suspected to be used deliberately to manipulate company valuation, constituting a violation of legal and ethical standards. The involvement of AI in creating illegal content and market manipulation directly links to harms including violations of law and potential harm to communities and individuals. Since the event describes ongoing investigations into realized or ongoing harms caused by the AI system's use, it qualifies as an AI Incident rather than a hazard or complementary information.

Paris prosecutors suspect Musk encouraged 'deepfakes' to inflate the value of X

2026-03-21
TradingView
Why's our monitor labelling this an incident or hazard?
The AI system Grok was used to generate sexual deepfakes without consent, which constitutes a violation of rights and harm to individuals and communities. The investigations and official actions by multiple authorities confirm that harm has occurred due to the AI system's outputs. Therefore, this event qualifies as an AI Incident because the AI system's use has directly led to significant harm, including violations of rights and dissemination of harmful content. The mention of Elon Musk allegedly encouraging such use to artificially inflate company value further supports the direct involvement of AI misuse leading to harm.

Baltimore sues Elon Musk's AI company over Grok's fake nude images

2026-03-24
The Guardian
Why's our monitor labelling this an incident or hazard?
The Grok chatbot is an AI system generating sexualized deepfake images without consent, including child sexual abuse material, which is a clear violation of rights and causes harm to individuals and communities. The lawsuit alleges direct harm caused by the AI system's outputs, fulfilling the definition of an AI Incident. The involvement of the AI system in producing harmful content and the resulting legal action confirm the classification as an AI Incident rather than a hazard or complementary information.

France Says Musk Encouraged Row Over Grok's Sexualised Images. He Responds

2026-03-22
NDTV
Why's our monitor labelling this an incident or hazard?
The AI system Grok generated sexualised deepfake images without consent, including images depicting children, which constitutes harm to individuals' rights and communities. The controversy and harm are realized, not hypothetical. The involvement of the AI system's use and possible encouragement by Elon Musk to generate such content directly links the AI system to the harm. This meets the criteria for an AI Incident as defined, involving violations of rights and harm to communities caused by the AI system's outputs.

French prosecutors suspect Musk encouraged deepfakes controversy to inflate X value

2026-03-21
Le Monde.fr
Why's our monitor labelling this an incident or hazard?
The Grok AI chatbot is an AI system generating deepfake images, which directly caused harm by producing non-consensual sexualized images, including those depicting minors, violating rights and causing community harm. The suspected deliberate encouragement of this controversy to manipulate company valuation further indicates misuse of the AI system. The involvement of prosecutors and regulatory bodies confirms the harm is materialized and significant. Hence, this event meets the criteria for an AI Incident rather than a hazard or complementary information.

French prosecutors suspect Musk encouraged deepfakes row to inflate X value

2026-03-21
Yahoo News
Why's our monitor labelling this an incident or hazard?
The Grok AI chatbot is explicitly mentioned as generating sexualized deepfake images without consent, which constitutes a violation of human rights and harms to individuals and communities. The scale of generated images (millions, including those depicting children) confirms significant harm. The suspected deliberate encouragement by Musk to create controversy for financial gain further implicates misuse of the AI system. The involvement of legal authorities and ongoing investigations into these harms supports classification as an AI Incident rather than a hazard or complementary information. The harms are realized and substantial, meeting the criteria for an AI Incident.

French prosecutors suspect Elon Musk encouraged deepfakes row to inflate X value

2026-03-23
The Hindu
Why's our monitor labelling this an incident or hazard?
The Grok AI chatbot is an AI system generating images based on user prompts. It produced millions of sexualised deepfake images without consent, including those depicting minors, which constitutes harm to individuals and communities and breaches rights. The controversy and misuse are ongoing and have led to legal investigations, confirming direct harm. The suspected deliberate encouragement of this misuse to manipulate company valuation further supports classification as an AI Incident due to misuse and resulting harm. Therefore, this event meets the criteria for an AI Incident.

French prosecutors suspect Musk encouraged deepfakes row to inflate X value

2026-03-22
Economic Times
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems (X's algorithm and Grok) being used to disseminate harmful content, including sexualised deepfakes and Holocaust denial, which are forms of misinformation and harmful content causing harm to communities and violating rights. The suspicion that Musk encouraged this controversy to inflate company value indicates misuse or manipulation involving AI systems. The involvement of prosecutors and ongoing investigations further supports that harm has occurred or is ongoing. Hence, this is classified as an AI Incident.

Elon Musk accused of using sexual deepfake storm to pump value of X and xAI

2026-03-22
NZ Herald
Why's our monitor labelling this an incident or hazard?
The AI system Grok was used to generate a large volume of sexualized deepfake images, including non-consensual and potentially illegal content involving minors. This use of AI has caused harm to individuals' rights and to communities by disseminating harmful content. The event describes ongoing investigations and legal scrutiny due to these harms, confirming that the AI system's use has led to realized harm. Hence, the event meets the criteria for an AI Incident rather than a hazard or complementary information.

French Prosecutors probe Musk over Grok deepfake outrage

2026-03-23
Firstpost
Why's our monitor labelling this an incident or hazard?
The Grok AI chatbot is explicitly mentioned as generating sexualised deepfake images without consent, including images of minors, which constitutes harm to individuals' rights and communities. The involvement of French prosecutors and investigations into the misuse of the AI system to artificially boost company value further supports the classification as an AI Incident. The harm is realized and ongoing, not merely potential, fulfilling the criteria for an AI Incident rather than a hazard or complementary information.

Prosecutors suspect Elon Musk encouraged deepfakes row to inflate X value

2026-03-22
RNZ
Why's our monitor labelling this an incident or hazard?
The Grok AI chatbot is explicitly mentioned as generating sexualised deepfake images without consent, which constitutes a violation of human rights and causes harm to individuals and communities. The large scale of generated images, including those depicting children, indicates significant harm has occurred. The suspected deliberate encouragement of this controversy to manipulate company valuation further underscores misuse of the AI system. These factors meet the criteria for an AI Incident, as the AI system's use has directly led to violations of rights and harm. The involvement of multiple investigations and legal actions supports this classification.

French prosecutors suspect tycoon Musk encouraged deepfakes to inflate value of X

2026-03-21
RFI
Why's our monitor labelling this an incident or hazard?
The Grok chatbot is an AI system enabling the generation of sexualized deepfakes, which directly causes harm to individuals through nonconsensual sexual imagery and child exploitation, violating human rights and legal protections. The article describes ongoing investigations and lawsuits, confirming that harm has occurred. The suspected deliberate encouragement of such harmful content to manipulate company valuation further underscores the AI system's role in causing significant harm. Hence, this event meets the criteria for an AI Incident rather than a hazard or complementary information.

French prosecutors suspect Musk encouraged deepfakes row to inflate X value

2026-03-22
The Japan Times
Why's our monitor labelling this an incident or hazard?
The Grok AI chatbot is an AI system generating deepfake images, which are sexualized and created without consent, constituting a violation of rights and harm to individuals. The controversy and harm have already occurred, making this an AI Incident. The suspicion that the controversy was deliberately encouraged to manipulate company value indicates misuse of the AI system's outputs leading to harm. Therefore, this event meets the criteria for an AI Incident due to realized harm linked to the AI system's use.

Musk may have fueled deepfakes row to boost X's value

2026-03-22
NewsBytes
Why's our monitor labelling this an incident or hazard?
Grok is an AI system capable of generating and editing images based on user prompts. The reported abuse of this system to create sexualized deepfake images, including those of children, constitutes harm to individuals and communities, as well as potential violations of rights. The involvement of French authorities investigating these harms confirms that the AI system's use has directly led to realized harm, fitting the definition of an AI Incident.

Baltimore sues Elon Musk's X.AI over Grok's creation of 'non-consensual sexualized deepfakes'

2026-03-24
WMAR
Why's our monitor labelling this an incident or hazard?
The AI system 'Grok' is explicitly mentioned as generating sexualized deepfake images without consent, including of minors, which constitutes a violation of rights and harm to individuals and communities. The lawsuit alleges that this harmful content was produced and spread, indicating realized harm. Therefore, this qualifies as an AI Incident due to direct harm caused by the AI system's outputs and use.

French Prosecutors: Elon Musk Fueled Grok Deepfake Controversy to Boost X Valuation

2026-03-21
Morocco World News
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions Grok, an AI chatbot capable of generating and editing images, which is an AI system. The misuse of Grok to create sexualized deepfake images without consent, including images involving minors, constitutes direct harm to individuals and communities, fulfilling the criteria for harm under the AI Incident definition. Additionally, the potential deliberate generation of controversy to manipulate company valuation involves misuse of the AI system's outputs leading to broader societal and legal harms. Hence, the event is classified as an AI Incident.

French prosecutors suspect Musk encouraged deepfakes row to inflate X value

2026-03-21
Owensboro Messenger-Inquirer
Why's our monitor labelling this an incident or hazard?
The Grok AI chatbot is an AI system generating content (deepfake images). The generation of sexualised images of women and girls without consent constitutes a violation of rights and harm to individuals. The controversy and outrage indicate that harm has occurred. The involvement of Musk encouraging this controversy to inflate company value implies misuse of the AI system's outputs. Hence, this qualifies as an AI Incident due to realized harm linked to the AI system's use and misuse.

French Prosecutors Suspect Elon Musk Encouraged X Deepfakes Row to Inflate Company Value

2026-03-21
Asianet News Network Pvt Ltd
Why's our monitor labelling this an incident or hazard?
The Grok AI chatbot is an AI system generating deepfake images, which directly caused harm by producing sexualised images without consent, including those depicting minors, violating rights and causing community harm. The involvement of the AI system in generating harmful content and the alleged deliberate encouragement to produce such content to manipulate company valuation constitutes misuse of the AI system leading to realized harm. The investigation by French prosecutors and alerts to US authorities confirm the seriousness and reality of the harms. Hence, this event meets the criteria for an AI Incident.

US authorities alerted: French prosecutors suspect Musk encouraged deepfakes row to inflate X value

2026-03-21
RTL Today
Why's our monitor labelling this an incident or hazard?
The Grok AI chatbot is an AI system generating content (deepfake images) that caused direct harm by producing non-consensual sexualized images, including of minors, which constitutes violations of rights and harm to communities. The controversy and misuse of the AI system have led to legal investigations and cross-border regulatory actions. The involvement of the AI system in generating harmful content and the alleged deliberate encouragement of this misuse to manipulate market value clearly meets the criteria for an AI Incident, as the harm is realized and significant.

French prosecutors suspect Musk encouraged deepfakes row to inflate X value

2026-03-21
TheTimes.com.ng
Why's our monitor labelling this an incident or hazard?
The AI chatbot Grok generated millions of sexualised deepfake images without consent, including images depicting children, which constitutes harm to individuals and communities and violations of rights. The suspected deliberate encouragement by Elon Musk to generate such content to manipulate company valuation further implicates the AI system's use in causing harm. The involvement of prosecutors and investigations by multiple jurisdictions confirms the seriousness and realized nature of the harms. Therefore, this event meets the criteria for an AI Incident due to direct harm caused by the AI system's outputs and its misuse.

French prosecutors suspect Musk encouraged deepfakes to inflate X value

2026-03-21
anews
Why's our monitor labelling this an incident or hazard?
The AI system Grok is explicitly mentioned as generating harmful deepfake content and Holocaust denial material, which are forms of violations of human rights and harm to communities. The French prosecutors suspect deliberate use of these AI-generated deepfakes to manipulate stock market valuations, indicating misuse of the AI system. The ongoing investigations and searches confirm that harm has occurred or is occurring. Hence, the event meets the criteria for an AI Incident due to direct or indirect harm caused by the AI system's outputs.

La Nación / Musk case escalates: Paris alerts Washington over AI and deepfakes controversy

2026-03-21
La Nación
Why's our monitor labelling this an incident or hazard?
The AI system Grok is explicitly mentioned as generating sexualized deepfake content without consent, which is a violation of rights and causes harm to individuals and communities. The involvement of the AI system in spreading Holocaust denial and false sexual content further supports the presence of harm. The French prosecution's alert to US authorities and ongoing investigations indicate that these harms are materialized and significant. Hence, this event meets the criteria for an AI Incident due to direct and indirect harm caused by the AI system's use.

Paris Prosecutor's Office suspects Musk encouraged 'deepfakes' to inflate the value of X

2026-03-21
EL MUNDO
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (Grok) generating deepfake content, which is a direct use of AI technology. The suspected deliberate creation and dissemination of harmful deepfake videos and Holocaust denial content represent violations of law and human rights, fulfilling the criteria for harm under the AI Incident definition. The involvement of regulatory authorities and ongoing investigations further confirm the materialization of harm rather than a mere potential risk. Hence, this qualifies as an AI Incident rather than a hazard or complementary information.

Elon Musk allegedly encouraged 'deepfakes' to inflate the value of X, Paris Prosecutor's Office alerts US; France investigates the social network

2026-03-21
El Universal
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (Grok) generating sexualized deepfake videos, which are harmful AI-generated content. The suspected deliberate fostering of such content to manipulate company valuation indicates misuse of the AI system leading to harm. The harms include violations of legal frameworks and potential harm to individuals depicted or targeted by the deepfakes, fitting the definition of an AI Incident. The involvement of multiple investigations and the direct link between the AI system's outputs and the alleged harms further support this classification.

Elon Musk singled out over AI-generated sexual videos

2026-03-21
Deutsche Welle
Why's our monitor labelling this an incident or hazard?
The AI system Grok is explicitly mentioned as generating sexualized deepfake videos and harmful misinformation, which are forms of content that can cause significant harm to individuals and communities, including violations of human rights. The alleged deliberate use of these AI-generated deepfakes to manipulate the stock market value of the company further indicates harm related to financial markets and investor rights. The ongoing investigations and legal actions confirm that harm has occurred or is occurring. Thus, the event meets the criteria for an AI Incident, as the AI system's use has directly led to multiple harms including rights violations and market manipulation.

French judiciary alerts its US counterpart to possible artificial inflation of X's value

2026-03-21
Expansión
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the AI system Grok generating sexually explicit deepfake content, which is under investigation for being used to artificially inflate company valuation and for disseminating illegal and harmful content. The AI system's use is directly linked to potential violations of law and harm to communities, fulfilling the criteria for an AI Incident. The involvement of AI in generating harmful deepfakes and manipulating algorithms that may have led to legal and reputational harm confirms this classification.

French judiciary alerts its US counterpart to possible artificial inflation of X's value, by EFE

2026-03-21
Investing.com Español
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (Grok) generating deepfake content that is suspected to have been used deliberately to manipulate the valuation of the companies ahead of their stock market listing. This use, and possible misuse, of AI may breach obligations under applicable law protecting fundamental rights and market integrity. The investigation and legal actions indicate that harm or violations are occurring or have occurred, making this an AI Incident rather than a mere hazard or complementary information.

Elon Musk may have pushed the use of sexualized deepfakes to inflate the value of his social network, according to investigation

2026-03-21
EL IMPARCIAL
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (deepfake generation algorithms and an AI chatbot) whose use has directly led to harms including the creation and dissemination of sexualized deepfake content involving minors (a violation of rights and ethical standards) and potential financial market manipulation. The investigation and legal actions indicate that these harms are materialized, not just potential. Therefore, this is an AI Incident rather than a hazard or complementary information. The presence of ongoing investigations and international cooperation further supports the seriousness and realized nature of the harms.

Paris Prosecutor's Office suspects Musk encouraged 'deepfakes' to inflate the value of X

2026-03-23
France 24
Why's our monitor labelling this an incident or hazard?
The AI system Grok is explicitly mentioned as generating sexualized deepfake content without consent, which constitutes harm to individuals and communities, including violations of fundamental rights. The suspected deliberate fostering of such content to manipulate market value indicates misuse of the AI system leading to significant harm. The involvement of regulatory authorities and ongoing investigations further confirm the seriousness and realized nature of the harms. Hence, this event meets the criteria for an AI Incident.

Paris Prosecutor's Office suspects Musk encouraged deepfakes to inflate the value of X

2026-03-21
OEM
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (the AI chatbot Grok) being used to generate and spread harmful deepfake content, including Holocaust denial and sexualized videos, which constitute violations of human rights and harm to communities. The involvement of the AI system in producing and disseminating this content directly links it to realized harm. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use has led to violations of rights and harm to communities.

Paris Prosecutor's Office suspects Musk encouraged 'deepfakes' to inflate the value of X

2026-03-22
CRHoy.com
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (Grok) generating deepfake content, which is suspected to have been deliberately used to cause harm by artificially inflating company value, thus violating financial market regulations and potentially harming investors. This constitutes a violation of applicable law and harm to communities (investors and market participants). Therefore, this qualifies as an AI Incident because the AI system's use has directly or indirectly led to a breach of legal obligations and potential harm.

Paris Prosecutor's Office investigates Elon Musk over alleged use of deepfakes on X

2026-03-21
Cadena 3 Argentina
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI-generated deepfake content (produced by the chatbot Grok) to manipulate the financial market by artificially inflating company value, which constitutes a violation of legal frameworks and potentially harms individuals depicted in the deepfakes. The AI system's use has directly led to an investigation for these harms, fulfilling the criteria for an AI Incident. The harms include violations of law (market manipulation), potential harm to individuals (sexualized deepfakes involving women and minors), and harm to communities (dissemination of harmful misinformation).

Paris judiciary suspects Elon Musk encouraged 'deepfakes' to inflate the value of X

2026-03-21
Portafolio.co
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (Grok) generating deepfake videos, which are AI-generated content. The alleged deliberate use of this AI system to spread sexualized deepfakes constitutes a violation of rights and harm to communities. The involvement of regulatory and judicial authorities investigating these harms confirms that the AI system's use has directly or indirectly led to harm. Therefore, this event meets the criteria for an AI Incident rather than a hazard or complementary information.

Elon Musk under French scrutiny over alleged use of 'deepfakes' to inflate the value of X

2026-03-21
La Nación, Grupo Nación
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the AI system Grok generating sexualized deepfake videos, which are being investigated for deliberate use to manipulate market value, indicating misuse of AI. The harms include violations of rights (sexualized deepfakes of individuals), potential reputational and financial harm to investors, and broader societal harm through misinformation and exploitation. These harms have already prompted official investigations, indicating that the harm is realized rather than potential. Hence, this event meets the criteria for an AI Incident due to the direct involvement of an AI system in causing significant harm.

Paris Prosecutor's Office suspects Musk encouraged 'deepfakes' to inflate the value of X

2026-03-21
Última Hora
Why's our monitor labelling this an incident or hazard?
The AI system Grok is explicitly mentioned as generating deepfake videos, which are AI-generated content. The alleged deliberate use of these deepfakes to manipulate market value and the dissemination of harmful content constitute violations of legal and ethical norms, including potential breaches of human rights and market regulations. Although the harms are under investigation and not yet legally confirmed, the event describes realized harms and ongoing legal scrutiny directly linked to the AI system's use. Therefore, this qualifies as an AI Incident due to the direct or indirect harm caused by the AI system's outputs and their misuse.

Paris Prosecutor's Office suspects Musk encouraged 'deepfakes' to inflate the value of X

2026-03-21
Expansión
Why's our monitor labelling this an incident or hazard?
The AI system Grok is explicitly mentioned as generating deepfake videos, which are AI-generated content. The suspected deliberate use of these deepfakes to artificially increase company value constitutes misuse of the AI system leading to significant harm, including violations of rights and potential market manipulation. The ongoing investigations by multiple authorities and the involvement of harmful content dissemination confirm that harm has occurred or is occurring. Therefore, this event qualifies as an AI Incident due to the direct or indirect harm caused by the AI system's use.

Elon Musk in the spotlight: investigators probe whether he sought to boost X's value with sexual content

2026-03-22
Diario El Liberal
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (Grok chatbot) to generate deepfake sexual content, which is alleged to have been deliberately disseminated to manipulate market valuations. This use of AI has directly or indirectly led to legal investigations for market manipulation and dissemination of illegal content, which are harms under the framework (violations of law and harm to communities). The involvement of AI in generating harmful content and influencing financial markets meets the criteria for an AI Incident rather than a hazard or complementary information. The harm is materialized or at least strongly evidenced by ongoing legal actions and investigations.

Investigators probe whether Elon Musk sought to boost X's value with sexual content

2026-03-22
BAE Negocios
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI systems (deepfakes and an AI chatbot) to generate and spread sexualized and illegal content with the alleged intent to manipulate market valuations. This involves the use and misuse of AI systems, which is under active judicial investigation. The harms include violations of legal frameworks (market manipulation, illegal content dissemination) and harm to communities (through harmful content). The AI systems' role is pivotal in the alleged manipulation and content generation. Hence, this is an AI Incident rather than a hazard or complementary information, as the harm is ongoing or has already occurred and is under investigation.

Paris Prosecutor's Office suspects Musk encouraged 'deepfakes' to inflate the value of X

2026-03-22
TVN
Why's our monitor labelling this an incident or hazard?
The AI system Grok is explicitly mentioned as generating sexualized deepfake images without consent, which constitutes a violation of fundamental rights and harms communities. The spread of Holocaust denial and false videos further indicates harm to communities and violation of rights. The investigations by multiple authorities and the mention of deliberate use to artificially inflate company value imply that the AI's use has directly or indirectly led to significant harms. Thus, the event meets the criteria for an AI Incident.

Elon Musk singled out over AI-generated sexual videos

2026-03-21
eju.tv
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (Grok) generating deepfake content, which is being used or misused to manipulate financial markets and spread harmful content. The harms include violations of legal frameworks protecting investors and potentially human rights (e.g., sexual exploitation, misinformation). The involvement of the AI system in generating the harmful content and its use to manipulate market value directly or indirectly leads to significant harm. Hence, this is an AI Incident rather than a hazard or complementary information.

Elon Musk singled out over AI-generated sexual videos

2026-03-21
Diario Primicia
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of an AI system (Grok) to generate deepfake sexual videos, which are harmful content. The AI's role is central to the alleged harm, including market manipulation and dissemination of harmful content. These harms fall under violations of law and harm to communities. Since the harms are ongoing and under investigation, and the AI system's use is directly linked to these harms, this qualifies as an AI Incident rather than a hazard or complementary information.

Paris Prosecutor's Office suspects Musk encouraged 'deepfakes' to inflate the value of X

2026-03-21
José Peguero
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (Grok) generating deepfake videos, which are AI-generated content. The alleged deliberate use of these deepfakes to manipulate the market value of the company constitutes misuse of the AI system leading to harm (market manipulation and dissemination of harmful content). This fits the definition of an AI Incident because the AI system's use has directly or indirectly led to significant harms, including violations of rights and potential financial harm. The ongoing investigations and legal actions further support this classification.

Paris Prosecutor's Office Suspects Elon Musk Encouraged AI-Generated Sexual Videos to Inflate the Value of X

2026-03-21
Agencia Noticias Argentinas
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-generated deepfake content (sexually explicit videos) produced by the chatbot Grok on the platform X. The alleged deliberate fostering of such content to artificially inflate the company's valuation is a misuse of AI with a direct link to harm, specifically violations of legal obligations concerning market manipulation and investor deception. This fits the definition of an AI Incident because the AI system's use has directly or indirectly led to significant legal and financial harm.

Justice Department refuses to assist French probe into Musk's X, WSJ reports

2026-04-18
CNBC
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an investigation into suspected abuse of algorithms and fraudulent data extraction by X, which involves AI systems managing content and data. Although the investigation is serious and ongoing, there is no indication that harm has already occurred or been proven. The refusal of the U.S. Justice Department to assist is a legal and political response, not a direct harm caused by the AI system. Since the event centers on the potential for harm or legal issues arising from the AI system's use, it fits the definition of an AI Hazard rather than an AI Incident or Complementary Information.

US Justice Department refuses to assist French probe into Musk's X: Report - The Economic Times

2026-04-18
Economic Times
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an investigation into suspected abuse of algorithms and fraudulent data extraction by X, which involves AI systems. However, no actual harm or incident is reported: the investigation is ongoing, and the US DOJ's refusal to assist is a governance and legal response. Rather than describing a realized AI Incident or a plausible AI Hazard causing harm, the event provides complementary information about regulatory scrutiny and legal challenges concerning the AI system's use. It therefore fits the definition of Complementary Information rather than an Incident or Hazard.

Trump DOJ calls France's investigation into Elon Musk's X 'unjust' - Cryptopolitan

2026-04-18
Cryptopolitan
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (X's content-selection algorithms and deepfake content generation) whose use is under criminal investigation over serious allegations, including bias, foreign interference, and distribution of child pornography. These allegations imply violations of law and potential harm to communities and individuals, and the investigation and legal summons indicate that the harm is being treated as actual or ongoing rather than merely potential. The US DOJ's refusal to cooperate underscores the case's political and legal significance. This is therefore an AI Incident: the AI systems' use has directly or indirectly led to alleged harms and legal action, not just a plausible future risk or complementary information.

Judicial Independence at Stake: Paris and U.S. Clash Over Investigating Musk's X Platform | Law-Order

2026-04-18
Devdiscourse
Why's our monitor labelling this an incident or hazard?
The article describes an ongoing legal investigation into X platform's alleged involvement in distributing child pornography and creating sexual deepfakes, which are AI-generated content. The presence of AI-generated harmful content and the legal probe into the platform's content moderation and data practices indicate that AI systems' use or misuse has led to violations of law and potential harm to individuals. This fits the definition of an AI Incident as the AI system's use has directly or indirectly led to harm and legal consequences.

DOJ Declines Assistance in French Criminal Investigation of X

2026-04-18
El-Balad.com
Why's our monitor labelling this an incident or hazard?
The presence of AI systems is reasonably inferred from the mention of algorithm manipulation and the involvement of xAI, an AI subsidiary. The allegations include serious concerns such as fraudulent data extraction and dissemination of harmful content, which could constitute violations of rights and harm to communities if confirmed. However, the article does not confirm that these harms have occurred; it reports on an ongoing investigation and the refusal of DOJ assistance. Thus, the event fits the definition of an AI Hazard, as the AI system's use could plausibly lead to harm, but no direct or indirect harm has been established yet.