French Prosecutors Investigate AI-Generated Deepfake Scandal Involving Elon Musk's Companies

French prosecutors are investigating Elon Musk's companies X and xAI after Grok, their AI system, generated sexualized deepfake images, including those depicting minors. Authorities suspect the controversy may have been orchestrated to artificially inflate company valuations ahead of a planned 2026 stock listing. Musk publicly insulted the prosecutors in response. [AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly mentions the use of an AI system ('Grok') to generate deepfake sexual videos, which are being investigated for their role in artificially inflating company value and for potentially involving illegal content related to children. This involves direct use of AI leading to violations of law and human rights, fitting the definition of an AI Incident. The investigation and legal scrutiny confirm that harm has occurred or is ongoing rather than being merely a potential risk; thus it is not a hazard or complementary information. [AI generated]
AI principles
Respect of human rights; Safety

Industries
Media, social platforms, and marketing

Affected stakeholders
Children; Business

Harm types
Human or fundamental rights; Reputational

Severity
AI incident

AI system task
Content generation


Articles about this incident or hazard

French judiciary suspects Musk of attempting to artificially inflate the value of "X"

2026-03-21
France 24
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of an AI system ('Grok') to generate deepfake sexual videos, which are being investigated for their role in artificially inflating company value and for potentially involving illegal content related to children. This involves direct use of AI leading to violations of law and human rights, fitting the definition of an AI Incident. The investigation and legal scrutiny confirm that harm has occurred or is ongoing rather than being merely a potential risk; thus it is not a hazard or complementary information.

Musk responds to French accusations of inflating X's value: "They suffer from a mental disability"

2026-03-22
صحيفة الخليج
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of an AI system (the 'Grok' program) to generate deepfake sexual videos. The harms include potential violations of laws related to child exploitation and the artificial inflation of company value through deceptive means, which constitute breaches of legal and fundamental rights. The investigation and legal scrutiny indicate that these harms have materialized or are strongly suspected to have done so. Hence, this qualifies as an AI Incident due to the direct involvement of AI in causing or enabling these harms.

French judicial suspicion: Elon Musk tried to artificially inflate the value of "X"

2026-03-21
LBCI Lebanon
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI-based deepfake technology (the 'Grok' chatbot) to create fake sexual videos, including those involving minors, which constitutes a violation of human rights and legal obligations. The use of AI in this context has directly led to harms related to exploitation and illegal content, and the investigation is focused on these harms. Therefore, this event qualifies as an AI Incident because the AI system's use has directly or indirectly led to significant harm, including violations of rights and potentially criminal activity.

France suspects Musk of attempting to inflate the value of "X" through sexual videos

2026-03-22
اندبندنت عربية
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of AI deepfake technology to produce fake sexual videos, including potentially illegal content involving children. This use has led to legal investigations and concerns about harm to individuals (sexual exploitation, child protection issues) and communities (the spread of harmful content). The AI system's use is directly linked to these harms and legal violations, fulfilling the criteria for an AI Incident. The investigation and legal scrutiny further confirm the seriousness and realized nature of the harm.

Elon Musk on the French prosecutors: "mentally retarded". What angered the billionaire

2026-03-22
Digi24
Why's our monitor labelling this an incident or hazard?
The AI system Grok is explicitly mentioned as generating harmful sexualized deepfake images, including those involving minors, which constitutes direct harm to individuals and communities. The French prosecutors' investigation and the reported misuse of the AI system to spread Holocaust denial and sexualized content demonstrate violations of rights and harm to communities. These harms have materialized rather than remaining potential risks, qualifying this as an AI Incident rather than a hazard or complementary information. The involvement of the AI system in producing and disseminating harmful content is central to the event.

"Sunt niște înapoiați mintal": Elon Musk îi insultă pe procurorii din Paris care susțin că scandalul Grok a fost provocat pentru a crește valoarea X și xAI - HotNews.ro

2026-03-22
HotNews.ro
Why's our monitor labelling this an incident or hazard?
The AI system Grok is explicitly mentioned as generating sexualized deepfake images, including those of minors, which constitutes harm to individuals and communities. The scandal has led to legal investigations and accusations of misuse of AI for manipulative purposes, including potential violations of law and rights. The involvement of the AI system in generating harmful content and the resulting legal and societal impacts meet the criteria for an AI Incident. The event is not merely a potential risk or a complementary update but a realized harm scenario involving AI misuse and its consequences.

Elon Musk launches a virulent attack on the Paris prosecutors: "They are mentally retarded"

2026-03-22
Stirile ProTV
Why's our monitor labelling this an incident or hazard?
The chatbot Grok is an AI system capable of generating and editing images based on user prompts. The reported generation of millions of sexualized deepfake images, including those resembling children, represents harm to communities and violations of rights (non-consensual image generation). The investigation by multiple authorities and the reported misuse of the AI system confirm that the harms have materialized. Hence, this is an AI Incident due to the direct link between the AI system's use and realized harms.

"Înapoiați mintal". Elon Musk îi jignește pe procurorii francezi după o anchetă legată de companiile X și xAI

2026-03-22
Gândul
Why's our monitor labelling this an incident or hazard?
The chatbot Grok is an AI system generating deepfake images, including sexualized content and images resembling minors, content that is illegal and harmful. The French prosecutors' investigations into the use of AI-generated content for manipulation and harmful dissemination demonstrate direct harm to communities and potential legal violations. The AI system's role is pivotal in producing and distributing this harmful content, fulfilling the criteria for an AI Incident. The event describes actual harm and ongoing investigations, not just potential risks or general AI-related news, so it is not an AI Hazard or Complementary Information.

Elon Musk called the French prosecutors mentally retarded

2026-03-22
Cotidianul RO
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (Grok) generating deepfake content that is allegedly used deliberately to manipulate company valuations, which constitutes a misuse of AI leading to harm. The involvement of prosecutors investigating this misuse indicates that harm has occurred or is ongoing, fulfilling the criteria for an AI Incident. The harm includes violations of legal frameworks (market manipulation) and harm to communities (through misinformation and deceptive practices).

Investors watch how the Paris investigations affect the SpaceX-xAI entity's listing plan

2026-03-22
Business24
Why's our monitor labelling this an incident or hazard?
The presence of an AI system (Grok, an AI bot) is explicit, and its use has directly led to harms such as the generation of sexualized deepfake images, including those depicting children, which constitutes harm to communities and violations of rights. The ongoing investigations and legal actions confirm that these harms are materialized rather than hypothetical. The article also discusses the impact on corporate governance and investor confidence, further evidencing significant harm linked to the AI system's use. Thus, the event meets the criteria for an AI Incident rather than a hazard or complementary information.

Elon Musk attacks the French prosecutors after the Grok investigation

2026-03-22
Financiarul.ro
Why's our monitor labelling this an incident or hazard?
The AI system Grok is explicitly mentioned as generating harmful sexualized deepfake images, including those depicting children, which constitutes harm to communities and violations of rights. The prosecutors' investigation and the described harms indicate that the AI system's use has directly led to these harms. The involvement of the AI system in generating non-consensual sexual content and the resulting legal scrutiny confirm this as an AI Incident rather than a hazard or complementary information. The article details realized harm, not just potential harm or responses to past events.

Elon Musk calls the French prosecutors "mentally retarded" after a complaint filed in the United States

2026-03-22
News.ro
Why's our monitor labelling this an incident or hazard?
The AI system Grok is explicitly mentioned as generating harmful sexualized deepfake images, including those depicting children, which constitutes harm to individuals and communities. The misuse and dissemination of such content represent violations of rights and societal harm. The ongoing investigations and legal actions confirm that harm has occurred due to the AI system's use. Therefore, this event qualifies as an AI Incident because the AI system's use has directly led to significant harm.

French prosecutors suspect Musk of supporting deepfakes to boost the value of X

2026-03-21
www.sme.sk
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (Grok) to generate sexualized deepfake videos, which constitute a violation of rights and cause harm to communities by spreading harmful and potentially illegal content. The involvement of the AI system in producing and disseminating this content has directly led to legal investigations and concerns about harm. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to significant harm and legal consequences.

Musk takes one blow after another: prosecutors have uncovered further atrocities he now has to answer for

2026-03-22
www.pluska.sk
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI tool (Grok) used to spread harmful content such as Holocaust denial and sexual deepfakes, which are violations of human rights and legal obligations. The involvement of prosecutors and international legal authorities confirms the seriousness and realized harm of these actions. Therefore, this qualifies as an AI Incident due to the direct use of AI causing harm and legal violations.

French prosecutors suspect Musk of supporting deepfakes to boost the value of the X platform

2026-03-21
Hospodarske Noviny
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (Grok) used to generate deepfake videos, which are harmful content causing violations of rights and harm to communities. The AI system's use has directly led to the spread of sexualized deepfakes and Holocaust denial content, which are serious harms. The involvement of legal authorities and ongoing investigations further confirm the materialization of harm. Hence, this is classified as an AI Incident rather than a hazard or complementary information.

French prosecutors suspect Musk of supporting deepfakes to boost the value of X

2026-03-21
trend.sk
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of an AI system (Grok) to create sexualized deepfake videos, which are being spread on a social media platform (X). The suspected intent is to artificially inflate the company's value, which implies a violation of legal frameworks and potential harm to investors and the public. The involvement of prosecutorial and regulatory authorities further underscores the seriousness of the harm. Since the AI system's use has directly or indirectly led to potential violations of law and harm to communities, this event meets the criteria for an AI Incident rather than a mere hazard or complementary information.

According to French prosecutors, Elon Musk likely supported the creation of deepfakes on the X platform

2026-03-21
.týždeň
Why's our monitor labelling this an incident or hazard?
The AI system Grok is explicitly mentioned as generating sexually explicit deepfakes, which are harmful content violating human rights and legal norms. The allegations that this was done to manipulate company valuation and the ongoing investigations indicate that harm has occurred or is occurring. The AI system's use directly led to the creation and dissemination of harmful content, fulfilling the criteria for an AI Incident. The involvement of multiple jurisdictions and legal authorities further supports the seriousness and realized nature of the harm.

French authorities suspect Elon Musk of supporting deepfakes on the X network

2026-03-21
Denník E
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of an AI system (Grok) to generate deepfake images and videos that are sexualized and include Holocaust denial content, which are harmful and violate legal and human rights frameworks. The AI system's outputs are being used in a way that has already caused harm or legal concerns, such as misinformation and potentially manipulative content affecting communities and rights. The investigation by multiple authorities confirms the seriousness and realized nature of the harm. Hence, this is an AI Incident rather than a hazard or complementary information.