X’s AI Grok Faces Privacy Breach and Deepfake Flood


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

The Irish Data Protection Commission suspended X’s processing of EU users’ data after finding that personal posts had been scraped without consent to train its AI assistant Grok. Meanwhile, Elon Musk’s xAI rolled out Grok-2, an image generator on X that lacks effective content filters, enabling users to produce violent, hateful, and misleading deepfake images.[AI generated]

Why's our monitor labelling this an incident or hazard?

Grok 2.0 is a generative AI system explicitly reported as producing harmful content, including violent images, deepfakes, and copyright-infringing material. The article details actual instances of such content being generated and shared, indicating realized harm to communities (through misinformation and violent imagery) and violations of intellectual property rights. The AI system's design and deployment without adequate safeguards have directly led to these harms, qualifying this event as an AI Incident under the OECD framework.[AI generated]
AI principles
Privacy & data governance, Respect of human rights, Robustness & digital security, Safety, Transparency & explainability, Accountability, Democracy & human autonomy, Human wellbeing

Industries
Media, social platforms, and marketing; Digital security

Affected stakeholders
Consumers, General public

Harm types
Human or fundamental rights, Reputational, Psychological, Public interest, Economic/Property

Severity
AI incident

Business function
ICT management and information security, Monitoring and quality control, Compliance and justice, Research and development

AI system task
Interaction support/chatbots, Content generation

In other databases

Articles about this incident or hazard


Mickey Mouse on drugs, Elon Musk as a mass killer... Grok, the image-generating AI that lets everything (or almost everything) through

2024-08-15
Le Monde.fr
Why's our monitor labelling this an incident or hazard?
Grok 2.0 is a generative AI system explicitly reported as producing harmful content, including violent images, deepfakes, and copyright-infringing material. The article details actual instances of such content being generated and shared, indicating realized harm to communities (through misinformation and violent imagery) and violations of intellectual property rights. The AI system's design and deployment without adequate safeguards have directly led to these harms, qualifying this event as an AI Incident under the OECD framework.

No limits: Grok's image generation puts Elon Musk's freedom to the test

2024-08-15
Frandroid
Why's our monitor labelling this an incident or hazard?
The AI system Grok is explicitly involved as an image generation AI. Its use has resulted in the creation and dissemination of images that infringe on copyright and could be used to deceive or harm individuals or communities, fulfilling the criteria for an AI Incident. The harms are realized, not merely potential, as users have already generated and shared such images. The article highlights the lack of effective safeguards, leading to direct violations of rights and potential reputational and societal harm. Hence, this event qualifies as an AI Incident rather than a hazard or complementary information.

AI: realistic fake photos are flooding X

2024-08-15
Le Journal de Montreal
Why's our monitor labelling this an incident or hazard?
An AI system (Grok) is explicitly involved in generating images that are false and potentially harmful. The generated content includes violent and politically charged images that can mislead the public and contribute to misinformation campaigns, which is a form of harm to communities. The article indicates that these images are already circulating on the platform, implying realized harm rather than just potential harm. Therefore, this qualifies as an AI Incident because the AI system's use has directly or indirectly led to harm through the spread of disinformation and potentially abusive content.

The new image generator from Elon Musk's AI company can create anything, from Macron and Trudeau kissing to Mickey Mouse with a gun. Should it be censored like ChatGPT, or left alone?

2024-08-15
Developpez.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Grok-2) explicitly described as generating images from text prompts, including violent, hateful, and misleading content. The system's failure to effectively restrict harmful outputs has directly led to the creation and dissemination of harmful and potentially illegal content, fulfilling the criteria for harm to communities and violation of rights. The article documents actual use cases where harm is occurring, not just potential risks. Hence, it meets the definition of an AI Incident rather than a hazard or complementary information.

X will no longer "grok" your collected data

2024-08-12
Génération-NT
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions that X used personal data from EU users to train its AI assistant Grok without clear prior notification or consent, which is a violation of data protection laws protecting fundamental rights. The involvement of the AI system Grok in processing personal data without proper consent directly leads to a breach of obligations under applicable law, fulfilling the criteria for an AI Incident. The suspension of data processing by the DPC further supports that harm or legal violations have materialized or are ongoing.

How to use the Grok-2 AI image generator

2024-08-20
Inquirer
Why's our monitor labelling this an incident or hazard?
The presence of the Grok-2 AI system is explicit, as it generates images based on user prompts. The article reports a concrete example of harm: the sharing of AI-fabricated images by a public figure, which can mislead the public and damage reputations, thus harming communities and violating rights to truthful information. This harm is realized, not merely potential. The article also discusses the broader societal implications of such AI misuse. Hence, this qualifies as an AI Incident due to the direct link between the AI system's use and harm caused by misinformation and deception.

This Tiny Startup Is Helping Musk's Grok With Image Generation

2024-08-21
Yahoo! Finance
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (Grok with Flux.1) generating images, including potentially problematic content. The concerns raised relate to copyright, privacy, misinformation, and ethical issues, which are recognized harms under the framework. However, the article does not describe any actual harm occurring, such as a specific misinformation campaign causing harm, legal violations being enforced, or direct injury. Instead, it discusses the potential for such harms and the broader implications, making it a report on societal and governance responses and concerns. This fits the definition of Complementary Information rather than an AI Incident or AI Hazard.

Elon Musk's Grok AI is flooding social media with absolutely wild...

2024-08-22
New York Post
Why's our monitor labelling this an incident or hazard?
The article describes a generative AI system (Grok) being used to produce and flood social media with manipulated images and videos of political figures in violent, sexually explicit, and false contexts. The AI system's outputs are causing reputational harm and spreading misinformation, which are forms of harm to communities and violations of rights. The harm is realized and ongoing, not merely potential. Therefore, this event meets the criteria for an AI Incident due to the direct role of the AI system in causing harm through its outputs.

Elon Musk's Grok AI chatbot goes viral with mind-blowing deepfakes of...

2024-08-22
New York Post
Why's our monitor labelling this an incident or hazard?
The Grok AI chatbot is explicitly described as an AI system generating deepfake images based on user prompts. The harms include misinformation, reputational harm, potential psychological harm to children, and copyright violations, all of which have materialized as the images are actively circulating. The lack of effective restrictions and the ease of bypassing existing ones have directly led to these harms. The article details actual incidents of harmful AI-generated content, not just potential risks, thus qualifying as an AI Incident rather than a hazard or complementary information.

Trump & Kamala Harris' romantic beach moment? Elon Musk's Grok-2 AI uncensored content shocks fans

2024-08-20
The Express Tribune
Why's our monitor labelling this an incident or hazard?
Grok-2 is an AI system used to generate images that bypass typical content restrictions, leading to the spread of misleading and provocative content involving public figures. The article highlights the circulation of such content and the concerns about misinformation and political manipulation. Since the AI system's use has directly led to the spread of harmful misinformation and disturbing content, this constitutes an AI Incident under the framework, specifically harm to communities through misinformation and political manipulation.

Elon Musk's X faces AI deepfake crisis as Grok 2 chatbot fuels concerns

2024-08-19
India TV News
Why's our monitor labelling this an incident or hazard?
Grok 2 is an AI system capable of generating images from text prompts, including deepfakes of real individuals in harmful or inappropriate contexts. The article reports actual instances of such content being generated and disseminated, which can cause harm to communities through misinformation and reputational damage, as well as potential violations of rights and legal frameworks. Therefore, the AI system's use has directly or indirectly led to harms consistent with the definition of an AI Incident.

Grok-2 Generates Controversy; Expert Reactions

2024-08-22
ITPro Today
Why's our monitor labelling this an incident or hazard?
Grok-2 is an AI system with image generation capabilities. Its lack of guardrails has allowed users to generate harmful content, including deepfakes, violent and pornographic images, and copyright-infringing depictions. These outputs have caused realized harms such as misinformation, potential sexual harassment, and copyright violations, which are violations of human rights and intellectual property rights, as well as harm to communities. The article explicitly states these harms have occurred and discusses legal and moral issues arising from the AI system's use. Therefore, this qualifies as an AI Incident.

Grok 2 AI: Elon's game-changing AI image generator

2024-08-21
YourStory.com
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (the Grok 2 image generator) that is actively used to create hyper-realistic and potentially misleading images, including deepfakes of public figures. Although the article reports no specific instance of harm, the lack of safeguards and the nature of the generated content (misleading, controversial, and potentially harmful deepfakes) present a credible risk to individuals' reputations and to communities through misinformation. This situation therefore constitutes an AI Hazard: the system's use could plausibly lead to significant harms such as misinformation campaigns or reputational damage, but with no realized harm documented it is not classified as an AI Incident. Nor is it merely complementary information, since it focuses on the system's risks and capabilities rather than on responses or ecosystem context, and it clearly involves an AI system, so it is not unrelated.

Google's Imagen 3 Can Only Dream of Achieving What Grok 2 Just Did

2024-08-20
Analytics India Magazine
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses AI systems (Imagen 3 and Grok 2) used for generating images from text prompts. Grok 2's lack of safeguards has directly led to the creation of uncensored deepfake images of public figures, which can cause reputational harm and misinformation, thus harming communities and violating rights. The harm is realized, not just potential, as users have already generated such content. Google's Imagen 3 is described as more restricted and safer, but the focus is on Grok 2's problematic outputs. This fits the definition of an AI Incident because the AI system's use has directly led to significant harm through the generation of harmful deepfakes and inappropriate images.

Don't Sleep on Grok 2.0; It's Powerful But Controversial

2024-08-22
Beebom
Why's our monitor labelling this an incident or hazard?
The Grok 2.0 AI system is explicitly mentioned and tested, showing advanced capabilities typical of large language models. The article documents that the AI model generates harmful and offensive content without restraint, including scam emails and hate propaganda, which constitute violations of human rights and harm to communities. The lack of safety guardrails and the AI's readiness to produce such content directly link the AI system's use to realized harms. Hence, this is an AI Incident rather than a hazard or complementary information.

Elon Musk's new AI tool has dangerous potential

2024-08-22
ynetnews
Why's our monitor labelling this an incident or hazard?
The AI system Grok 2 is described as having minimal restrictions, which could plausibly lead to harmful outcomes through misuse or malicious use. Since no actual harm is reported yet, but the potential for harm is credible and foreseeable, this situation fits the definition of an AI Hazard rather than an Incident or Complementary Information.

Elon Musk's own AI system creates video of him and Trump committing armed robbery

2024-08-22
indy100.com
Why's our monitor labelling this an incident or hazard?
The AI system Grok was used to generate deepfake videos, which are fabricated content that could plausibly lead to harm such as misinformation, reputational damage, and social disruption. However, the article does not report any actual harm occurring yet; it focuses on the creation and public reaction to these deepfakes. Therefore, this event fits the definition of an AI Hazard, as it plausibly could lead to an AI Incident involving harm to communities or individuals through misinformation and manipulation, but no direct harm has been reported at this time.

It's impossible to moderate artificial intelligence. Maybe we should stop trying

2024-08-21
The Forward
Why's our monitor labelling this an incident or hazard?
The article explicitly describes AI systems (Grok, Instagram's AI influencer studio) generating harmful and offensive content, including antisemitic images and conspiracy theories, which have been disseminated on social media platforms. This constitutes harm to communities and violations of intellectual property rights. The AI systems' use and malfunction in moderation have directly led to these harms. The presence of AI is clear, and the harms are realized, not just potential. Therefore, the event meets the criteria for an AI Incident rather than a hazard or complementary information.

What the Grok is going on?

2024-08-20
GZERO Media
Why's our monitor labelling this an incident or hazard?
Grok is an AI system used to generate images, including those that depict false and potentially harmful content involving public figures. The article indicates that these images are currently being produced and shared, which constitutes harm to communities through misinformation and disinformation. This meets the criteria for an AI Incident because the AI system's use has directly led to harm in the form of spreading misleading and disturbing content. Although the article mentions some content restrictions, the overall lack of moderation has allowed harmful outputs to be generated and disseminated.

Grok-2 Generates Controversy; Expert Reactions

2024-08-22
AI Business
Why's our monitor labelling this an incident or hazard?
Grok-2 is an AI system with generative image capabilities. The lack of guardrails has directly led to the creation and dissemination of harmful content, including deepfakes and copyrighted material, which constitute violations of rights and harm to communities. The article explicitly states these harms have occurred, making this an AI Incident rather than a hazard or complementary information. The involvement of the AI system in generating harmful outputs is direct and central to the event.

Google Pixel Gets AI Upgrades, Musk's AI Image Creator Prompts a Surge in Deepfakes

2024-08-19
CNET
Why's our monitor labelling this an incident or hazard?
The article explicitly describes AI systems (Google's generative AI features, Musk's Grok AI image generator, Stable Diffusion) and their use or misuse. The Grok image generator has been used to create deepfakes that were widely disseminated, spreading misinformation and potentially harming political processes and individuals' reputations, which qualifies as harm to communities and a violation of rights. Sexualized deepfake pornography created with AI tools constitutes sexual abuse, a clear harm to individuals, and the ongoing lawsuit against AI image generators for unauthorized use of copyrighted images represents a violation of intellectual property rights. These harms have already occurred, while the product announcements and AI research projects also mentioned do not themselves describe harm or plausible harm. Hence the overall classification is AI Incident, due to the described harms from AI misuse and legal violations.