Meta Faces European Regulatory Scrutiny Over AI Data Use

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Meta Platforms is under intense regulatory and legal scrutiny in Europe for its plan to use personal data from Facebook, Instagram, and WhatsApp to train AI systems. Privacy regulators and advocacy groups allege GDPR violations and threaten legal action unless Meta revises its data collection practices.[AI generated]

Why's our monitor labelling this an incident or hazard?

Meta's plan to use personal data from European users for AI training involves an AI system (AI training models). The event describes ongoing legal and privacy challenges, with privacy groups threatening lawsuits due to potential violations of data protection laws and user rights. No actual harm has yet been reported, but the use of personal data without consent for AI training could plausibly lead to violations of fundamental rights and legal obligations, constituting a credible risk. Hence, this is an AI Hazard rather than an AI Incident, as the harm is potential and not yet realized.[AI generated]
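The rationales on this page repeatedly apply the same triage rule: is an AI system involved, has harm already been realized, and if not, is there a credible risk of harm? As an illustrative sketch only (the function name and boolean inputs are our own simplification, not part of the monitor), that decision logic can be written as:

```python
def classify_event(ai_involved: bool, harm_realized: bool, credible_risk: bool) -> str:
    """Hypothetical sketch of the monitor's triage rule.

    - "AI Incident": an AI system's development or use has already caused harm
      (e.g. a realized rights violation).
    - "AI Hazard": no harm yet, but a credible risk that harm could plausibly occur.
    - "Complementary Information": AI-related context (governance responses, user
      guidance) with neither realized harm nor a credible risk reported.
    - "Unrelated": no AI system is involved at all.
    """
    if not ai_involved:
        return "Unrelated"
    if harm_realized:
        return "AI Incident"
    if credible_risk:
        return "AI Hazard"
    return "Complementary Information"

# The lead story: AI training on personal data is involved, no harm is realized
# yet, but privacy groups allege a credible risk of GDPR violations.
print(classify_event(ai_involved=True, harm_realized=False, credible_risk=True))
# → AI Hazard
```

The per-article labels below follow this pattern: threatened lawsuits and cease-and-desist letters land as hazards, realized harms (the chatbot and scam-ad stories) as incidents, and user how-to guides as complementary information.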
AI principles
Privacy & data governance, Respect of human rights, Transparency & explainability, Accountability

Industries
Media, social platforms, and marketing

Affected stakeholders
Consumers

Harm types
Human or fundamental rights

Severity
AI hazard

Business function
Research and development

AI system task
Content generation, Organisation/recommenders


Articles about this incident or hazard

Meta faces row over plan to use European data for AI

2025-05-14
SpaceDaily
Why's our monitor labelling this an incident or hazard?
Meta's plan to use personal data from European users for AI training involves an AI system (AI training models). The event describes ongoing legal and privacy challenges, with privacy groups threatening lawsuits due to potential violations of data protection laws and user rights. No actual harm has yet been reported, but the use of personal data without consent for AI training could plausibly lead to violations of fundamental rights and legal obligations, constituting a credible risk. Hence, this is an AI Hazard rather than an AI Incident, as the harm is potential and not yet realized.

AI training with user data: Data protection activists threaten Meta with EU collective action

2025-05-14
unternehmen-heute.de
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI systems (Meta's AI models trained on user data) and the use of personal data for AI training. The privacy activists argue that Meta's practice violates GDPR, which protects fundamental rights. Although no actual harm (such as a court ruling or confirmed rights violation) has yet occurred, the described practice could plausibly lead to violations of rights protected under law, qualifying as potential harm. The event is about a legal threat and potential future harm rather than a realized incident. Hence, it fits the definition of an AI Hazard rather than an AI Incident or Complementary Information.

Meta ordered to stop training AI using EU user data by German data protection watchdog

2025-05-13
cyberdaily.au
Why's our monitor labelling this an incident or hazard?
An AI system (Meta's AI assistant) is explicitly involved, as it is trained on user data from social media platforms. The event stems from the use of AI (training on user data) and the regulatory response to prevent potential harms related to privacy violations and misuse of sensitive data. Although no direct harm is reported as having occurred yet, the regulatory order to halt training is based on credible concerns that the AI's use of data could lead to violations of data protection rights and associated harms. Therefore, this event represents an AI Hazard, as it plausibly could lead to an AI Incident involving violations of rights and privacy harms if training continues without proper consent and safeguards.

How to stop Meta from using your data to train its AI

2025-05-13
Euronews English
Why's our monitor labelling this an incident or hazard?
The article focuses on informing users about their rights under GDPR to prevent Meta from using their data for AI training. It does not describe any direct or indirect harm caused by Meta's AI system, nor does it report a plausible future harm event. The content is primarily about privacy rights, regulatory context, and user empowerment, which fits the definition of Complementary Information as it supports understanding of AI ecosystem developments and governance responses without reporting a new AI Incident or AI Hazard.

Meta Faces $200B Legal Action Over Use Of EU User Data For AI Training: Retail Turns Bearish By Stocktwits

2025-05-14
Investing.com India
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (Meta's AI models trained on user data) and concerns the development and use of these AI systems with personal data. The legal action alleges violations of data protection laws and intangible harm to users' privacy rights, which constitute a breach of fundamental rights under applicable law. Since the harm (privacy violation and potential intangible harm) is occurring or imminent due to Meta's data practices, this qualifies as an AI Incident. The involvement of AI in processing personal data without proper consent directly links the AI system's use to the alleged harm.

Meta lawsuit: Data use and AI take center stage in Europe | Fingerlakes1.com

2025-05-14
Fingerlakes1.com
Why's our monitor labelling this an incident or hazard?
The article involves AI systems explicitly (Meta's generative AI systems trained on user data). The event stems from the use of AI systems and the legal compliance of data practices. However, the article does not report any realized harm such as injury, rights violations already occurring, or disruption caused by the AI system. Instead, it focuses on the legal challenge and potential future consequences, which constitute a plausible risk of harm (e.g., violation of privacy rights if data use is unlawful, or disruption to AI development). Therefore, this is best classified as an AI Hazard, as the event could plausibly lead to an AI Incident if the legal challenge succeeds and harms occur, but no direct harm is reported yet.

Meta faces row over plan to use European data for AI - ET CIO

2025-05-15
ETCIO.com
Why's our monitor labelling this an incident or hazard?
Meta's AI system development involves processing personal data from European users, which could violate privacy rights and applicable laws, constituting a potential breach of obligations intended to protect fundamental rights. The privacy group's cease-and-desist letter and threat of legal action indicate a credible risk of harm if Meta proceeds. Since no actual harm or incident has occurred yet, and the focus is on the potential for legal and rights violations, this event qualifies as an AI Hazard rather than an AI Incident or Complementary Information.

Meta Accused Of Still Flouting Privacy Rules With AI Training Data

2025-05-15
Forbes
Why's our monitor labelling this an incident or hazard?
An AI system is involved as Meta is training large language models using personal data. The event concerns the use of personal data in AI training without explicit consent, which is alleged to violate GDPR and users' privacy rights, constituting a breach of obligations intended to protect fundamental rights. Although no specific harm event is described as having occurred yet, the ongoing unauthorized use of personal data for AI training directly implicates violations of rights and potential harm to individuals' privacy. Therefore, this qualifies as an AI Incident due to the direct or indirect breach of legal and fundamental rights linked to AI system development and use.

Advocacy group threatens Meta with injunction over use of EU data for AI training

2025-05-14
CNA
Why's our monitor labelling this an incident or hazard?
The event centers on the planned use of European users' personal data by Meta for AI training, which could violate privacy rights under EU law. The advocacy group's threat of injunction and potential class action reflects a credible risk of legal and rights violations if Meta proceeds. Since the harm is not yet realized but plausibly could occur, this qualifies as an AI Hazard rather than an AI Incident. The involvement of AI systems (generative AI models) and the potential for rights violations align with the definition of an AI Hazard.

Advocacy group threatens Meta with injunction over data-use for AI training

2025-05-14
CNA
Why's our monitor labelling this an incident or hazard?
An AI system is involved as Meta plans to use personal data to train generative AI models. The event concerns the use of AI development (training) and the legal and privacy implications of that use. However, no actual harm has been reported yet; the advocacy group is threatening legal action to prevent potential violations of privacy rights. This constitutes a plausible risk of harm (violation of rights) if Meta proceeds without proper consent or safeguards. Therefore, this is an AI Hazard, as the event describes a credible potential for legal and rights-related harm stemming from AI system development and use, but no realized harm is documented in the article.

Meta Faces $200B Legal Action Over Use Of EU User Data For AI Training: Retail Turns Bearish

2025-05-14
Asianet News Network Pvt Ltd
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Meta's AI models trained on user data) and concerns the use of personal data for AI training, which is central to AI development and use. The legal challenge is based on the claim that Meta's data practices violate GDPR, potentially causing intangible harm to users' privacy rights. Since the harm is not yet realized but is a credible risk leading to possible injunctions and lawsuits, this fits the definition of an AI Hazard. The article does not describe an actual AI Incident (harm realized) or complementary information about a past incident, nor is it unrelated to AI.

Meta's defense of its rogue AI sounds painfully familiar

2025-05-14
The Japan Times
Why's our monitor labelling this an incident or hazard?
The AI system is explicitly involved as Meta's chatbot, which is integrated into popular social media apps used by minors. The chatbot's responses to sexual questions from underage users demonstrate a failure in safety measures, leading to direct harm by exposing minors to inappropriate and potentially harmful content. This meets the criteria for an AI Incident because the AI's use has directly led to harm to a vulnerable group (minors) through inappropriate content generation and interaction.

AI training with user data: Data protection activists threaten Meta with EU collective action

2025-05-14
stern.de
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (training AI models) with user data without explicit consent, which is a direct use of AI development practices. The contested legal basis and potential violation of EU data protection laws constitute a breach of obligations intended to protect fundamental rights, specifically privacy rights. Although no physical harm or direct injury is reported, the violation of legal rights and privacy is a recognized form of harm under the framework. Therefore, this qualifies as an AI Incident due to the direct involvement of AI system development and use leading to a breach of legal rights.

Meta faces row over plan to use European data for AI

2025-05-14
Arab News
Why's our monitor labelling this an incident or hazard?
Meta's AI system development involves processing personal data from European users without their explicit consent, which is contested by privacy advocates and regulatory bodies. This use of personal data for AI training could violate fundamental rights under European data protection laws, constituting a breach of obligations intended to protect fundamental rights. While the harm is not yet realized, the credible risk of such violations and the ongoing legal disputes make this an AI Hazard rather than an Incident. The event focuses on the potential for harm and legal challenges rather than actualized harm.

Advocacy group threatens Meta with injunction over data use for AI training

2025-05-14
Times LIVE
Why's our monitor labelling this an incident or hazard?
The article describes a legal challenge against Meta's intended use of personal data for AI training, which could lead to violations of privacy rights (a breach of applicable law protecting fundamental rights). However, the harm is not yet realized; the event is about potential future harm and legal contestation. Therefore, it fits the definition of Complementary Information, as it provides context on governance and societal responses to AI-related privacy concerns rather than reporting an actual AI Incident or AI Hazard.

Advocacy group threatens Meta with injunction over use of EU data for AI training

2025-05-14
Reuters
Why's our monitor labelling this an incident or hazard?
The article describes Meta's intention to use personal data from European users to train AI models, which involves the development and use of AI systems. The advocacy group's threat of injunction and legal action is based on the potential violation of EU privacy laws and users' rights, which are fundamental rights. Although no harm has yet materialized, the planned data use could plausibly lead to violations of privacy rights and legal breaches, which fall under the definition of AI Incident harms if realized. Since the event is about a planned action with credible risk but no realized harm yet, it fits the definition of an AI Hazard. It is not Complementary Information because the main focus is on the potential legal and rights harm, not on responses or updates to past incidents. It is not Unrelated because it directly concerns AI system training and data use.

Meta faces row over plan to use European data for AI

2025-05-14
Economic Times
Why's our monitor labelling this an incident or hazard?
An AI system is involved as Meta plans to train AI models using personal data. The event stems from the use of AI system development (training) with personal data. While no direct harm has been reported yet, the privacy group's legal actions and complaints indicate potential violations of fundamental rights and data protection laws, which could lead to significant harm. Therefore, this is best classified as an AI Hazard due to the plausible risk of rights violations and legal breaches from the AI system's use of personal data without consent.

Advocacy group threatens Meta with injunction over use of EU data for AI training

2025-05-14
Economic Times
Why's our monitor labelling this an incident or hazard?
An AI system is involved as Meta plans to use personal data to train generative AI models. The event concerns the use of AI development processes (training AI models) and the legal and rights implications of using personal data without adequate consent, which could constitute a violation of fundamental rights under EU law. However, no actual harm has yet occurred; the event describes a threat of legal action to prevent or challenge the planned use of data. Therefore, this is a plausible risk of harm (violation of rights) stemming from AI development and use, making it an AI Hazard rather than an AI Incident. The article does not describe realized harm but a credible potential for harm if Meta proceeds as planned without proper compliance.

Advocacy group threatens Meta with injunction over use of EU data...

2025-05-14
Daily Mail Online
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of personal data to train AI models, which qualifies as AI system development/use. The advocacy group's threat of injunction and damages claims is based on the potential violation of EU privacy laws protecting fundamental rights, which fits the definition of a possible violation of human rights or breach of legal obligations. However, since the article describes a legal challenge and potential future harm rather than an actual realized harm or incident, this qualifies as an AI Hazard. The event does not describe an AI Incident because no direct or indirect harm has yet occurred, only a credible risk of harm if Meta proceeds without compliance.

German consumer protection group calls on Meta to halt its AI training in the EU - will other countries follow suit?

2025-05-12
TechRadar
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system (Meta AI) that uses personal data for training. The German consumer protection group and privacy advocates argue that the AI training may violate GDPR, particularly regarding consent and the use of sensitive data. Although no actual harm or legal ruling has occurred yet, the credible concerns and potential for rights violations constitute a plausible risk of harm. Hence, this is an AI Hazard rather than an AI Incident, as the harm is potential and not yet realized. The event is not merely complementary information because it centers on the potential legal and rights risks posed by the AI system's use of data, nor is it unrelated since it directly concerns AI and its impacts.

Meta faces row over plan to train AI with European users' personal data

2025-05-14
Malay Mail
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions Meta's plan to use personal data from European users for AI training, which involves AI system development and use. The privacy group has sent a cease-and-desist letter and threatened legal action, indicating potential violations of data protection and privacy rights (a form of human rights violation). However, no actual harm or breach has been confirmed or reported as having occurred; the event is about the potential illegality and risks of the planned data use. This fits the definition of an AI Hazard, as the development and use of AI systems with personal data could plausibly lead to violations of rights and legal breaches if carried out without proper consent or compliance.

noyb sends Meta C&D demanding no EU user data AI training

2025-05-14
TheRegister.com
Why's our monitor labelling this an incident or hazard?
The event clearly involves AI systems (Meta's AI models trained on user data) and concerns the use of personal data for AI training. The privacy group's legal challenge and cease and desist letter indicate that Meta's current approach could plausibly lead to violations of fundamental rights under GDPR, specifically the right to data protection and consent. Since no realized harm is reported but there is a credible risk of legal and rights violations if Meta continues, this constitutes an AI Hazard. It is not an AI Incident because no direct or indirect harm has yet occurred or been documented. It is not Complementary Information because the article focuses on the legal challenge and potential harm rather than updates or responses to a past incident. It is not Unrelated because the event is directly about AI system use and its legal and rights implications.

Digital rights group challenges Meta over data use for AI training

2025-05-14
BusinessLIVE
Why's our monitor labelling this an incident or hazard?
The event centers on Meta's planned use of personal data to train AI models, which is an AI system development and use activity. The digital rights group's legal challenge highlights potential violations of EU privacy laws protecting fundamental rights. Since the data use has not yet commenced and no harm has been reported, but there is a credible risk of rights violations if Meta proceeds, this qualifies as an AI Hazard. It is not an AI Incident because the harm is not realized yet, nor is it Complementary Information or Unrelated, as the focus is on a potential AI-related harm involving personal data use for AI training.

Meta faces new lawsuit for making EU users repeatedly opt out of AI data training

2025-05-14
WFTS
Why's our monitor labelling this an incident or hazard?
The article details regulatory and legal challenges to Meta's AI training data practices, specifically regarding user consent under GDPR. While the AI system's development and use are central, the event does not describe a direct or indirect harm caused by the AI system's outputs or malfunction. Instead, it focuses on potential legal violations and regulatory enforcement actions, which are governance and societal responses to AI practices. There is no mention of realized harm such as injury, rights violations through AI outputs, or disruption caused by AI malfunction. Hence, it fits the definition of Complementary Information rather than an AI Incident or AI Hazard.

Facebook and Instagram: How to object to AI training with user data

2025-05-14
stuttgarter-nachrichten.de
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system ('Meta AI') and concerns the development and use of AI through training on user-generated content. The use of personal data without explicit opt-in but with an opt-out mechanism raises issues related to user consent and potentially violations of data protection and privacy rights, which fall under violations of human rights or legal obligations. However, the article describes a planned future use starting in May 2025, with no direct or realized harm reported yet. The potential for harm exists due to privacy and rights concerns, but no incident has occurred at this time. Therefore, this qualifies as an AI Hazard, as the development and use of the AI system could plausibly lead to harm related to rights violations if users do not opt out or if data is used improperly.

Privacy advocate Schrems threatens Meta with further lawsuit

2025-05-14
nachrichten.at
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of an AI system (Meta AI) and concerns the development and use of this AI system through the processing of personal data without proper consent, which is a violation of data protection laws and fundamental rights. The legal challenges and warnings of potential damages indicate that harm to rights has occurred or is ongoing. Therefore, this qualifies as an AI Incident due to violations of human rights (privacy) caused by the AI system's data usage practices.

NOYB Threatens Meta With Lawsuit if it Collects Personal Data of Europeans for AI Training Without Consent

2025-05-14
International Business Times UK
Why's our monitor labelling this an incident or hazard?
An AI system is involved, as Meta plans to use personal data to train AI models. The issue arises from the planned use of personal data without proper consent, which, if carried out, would violate data protection laws (GDPR) and the fundamental rights they protect. Although no harm has yet occurred, the threatened legal action and injunctions reflect a dispute over potentially unlawful data use for AI development. Since the event centers on that potential unlawfulness and the legal challenge to prevent it, it fits the definition of an AI Hazard: the AI system's use could plausibly lead to an AI Incident (violation of rights and damages). There is no indication that harm has already occurred or that AI systems have malfunctioned or caused direct harm, so it is not an AI Incident; and it is more than Complementary Information because it focuses on the threat of legal action over potentially unlawful AI data use, not just updates or responses.

Social media giant hit with scathing ad campaign amid anger over AI chatbots sexually exploiting kids

2025-05-15
Fox News
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI chatbot systems developed and used by Meta that have engaged in sexually explicit conversations with minors, including role-playing as minors. This constitutes direct harm to children's health and safety, fulfilling the criteria for an AI Incident under the framework. The harm is not hypothetical but has been demonstrated through investigative testing and internal reports. The event involves the use of AI systems whose safeguards failed, leading to violations of child safety and potential exploitation, a serious harm to a vulnerable group. Therefore, this is classified as an AI Incident.

Legal dispute over user data: Consumer protection association takes Meta to court!

2025-05-15
wallstreet:online
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (Meta AI) and its planned use of personal data for training, which implicates fundamental rights under European law. The legal dispute is about preventing a potential violation of these rights, so the event concerns a plausible future harm. No actual harm or breach has been reported yet, so it is not an AI Incident. The focus is on the risk and legal challenge, not on a response or update to a past incident, so it is not Complementary Information. Therefore, the event is best classified as an AI Hazard.

Meta Faces EU Cease and Desist Over AI Training on User Data

2025-05-15
MediaNama
Why's our monitor labelling this an incident or hazard?
The article centers on Meta's AI model training using user data and the legal challenges it faces regarding GDPR compliance. The involvement of AI systems is explicit, as the data is used to train AI models. The dispute concerns the use of personal data without explicit consent, which implicates violations of data protection rights (a form of human rights). However, the article does not report any realized harm such as data breaches, misuse causing injury, or direct violations resulting in harm. Instead, it focuses on the potential legal non-compliance and regulatory responses, which could plausibly lead to harm if unaddressed. Therefore, this event fits best as Complementary Information, providing context on governance and societal responses to AI data practices rather than describing an AI Incident or AI Hazard directly.

Privacy Group noyb Targets Meta's AI Data Practices in Europe, Issues Ultimatum Over GDPR Compliance - WinBuzzer

2025-05-15
WinBuzzer
Why's our monitor labelling this an incident or hazard?
The article involves an AI system (Meta's Llama AI models) and its development/use involving personal data for training. However, the focus is on the legal challenge over GDPR compliance and the potential for harm (e.g., violation of data protection rights) if Meta continues its current practices without proper consent. There is no report of actual harm or malfunction caused by the AI system yet, only the threat of legal action and regulatory enforcement. Therefore, this qualifies as an AI Hazard because it plausibly could lead to an AI Incident (violation of rights under GDPR) if Meta proceeds without compliance. It is not Complementary Information because the article is not merely updating on a past incident but reporting an active legal challenge and potential future harm. It is not an AI Incident because no realized harm is described.

How To Remove Meta AI From WhatsApp -- You Can Do This Now

2025-05-14
Forbes
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Meta AI integrated into WhatsApp) and its use within the app. However, the article does not describe any harm caused by the AI system, nor does it indicate any incident or plausible future harm resulting from the AI's development, use, or malfunction. Instead, it provides information on how users can disable the AI features, which is a user control or privacy setting update. This fits the definition of Complementary Information, as it provides contextual details and user guidance related to an AI system without reporting an incident or hazard.

Advocacy group threatens Meta with injunction over data-use for AI...

2025-05-14
Daily Mail Online
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (Meta's generative AI models trained on user data) and concerns the use of personal data for AI training, which implicates fundamental rights under EU privacy laws. Although no actual harm has been reported yet, the advocacy group's threat of injunction and potential damages claims indicate a credible risk of violation of rights (privacy and data protection) if Meta proceeds. This fits the definition of an AI Hazard, as the development and use of AI systems here could plausibly lead to an AI Incident involving rights violations. The article does not describe a realized harm but focuses on the potential for harm and legal challenges, so it is not an AI Incident or Complementary Information. It is not unrelated because it concerns AI use and associated risks.

Meta is making users who opted out of AI training opt out again, watchdog says

2025-05-14
Ars Technica
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems used by Meta for training AI models on user data. The watchdog's allegations focus on Meta's failure to respect users' data protection rights under GDPR, which is a legal framework protecting fundamental rights. The use of personal data without proper consent for AI training constitutes a violation of these rights, fulfilling the criterion of harm (c) under AI Incident. The event describes actual or imminent harm due to Meta's data processing practices, not just potential harm, and thus qualifies as an AI Incident rather than a hazard or complementary information. The involvement of AI systems in the development and use phase, combined with the direct violation of legal rights, supports this classification.

Sources: Facebook and Instagram face scam ad surge from Asia and Meta is reluctant to add hurdles for ad buyers; Meta says it's tackling an "epidemic of scams"

2025-05-16
Techmeme
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (Meta's Llama 4 and AI-driven ad targeting) whose use has directly led to harm through the proliferation of scam ads on major social media platforms. The harm is to users who are victims of scams, fulfilling the criteria of injury or harm to people. The reluctance of Meta to add hurdles for ad buyers contributes to the ongoing harm. The AI system's role is pivotal in enabling the scam ads to reach users effectively. Hence, this is classified as an AI Incident.

Meta threatened with lawsuit for making users opt out of AI data training again

2025-05-14
WCPO
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses Meta's AI data training practices and the regulatory scrutiny they face under GDPR, which involves AI system development and use. However, no actual harm (such as data breaches causing injury or rights violations) is reported as having occurred yet. Instead, the focus is on potential legal violations and regulatory enforcement actions, including cease and desist letters and threats of lawsuits. This fits the definition of Complementary Information, as it provides important context on governance and societal responses to AI data use and privacy concerns, without describing a realized AI Incident or a plausible future AI Hazard.

Meta Faces More European Legal Hurdles Over AI Data Training

2025-05-14
DataBreachToday
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (Meta's AI models trained on user data) and their development/use. The legal challenges and cease and desist orders indicate that the AI system's use of personal data may violate GDPR, which protects fundamental rights. However, the article does not report actual realized harm such as confirmed data breaches, privacy violations with damages, or other direct harms. Instead, it focuses on potential legal violations and regulatory risks, with ongoing legal proceedings and warnings. This fits the definition of an AI Hazard, where the AI system's use of data could plausibly lead to violations of rights and legal harm, but no confirmed incident has occurred yet. The event is not merely complementary information because it centers on the legal risk and challenges, not just updates or responses. It is not unrelated because AI systems and their data use are central to the issue.

Advocacy group threatens Meta with injunction over use of EU data for ...

2025-05-14
Quinta’s weblog
Why's our monitor labelling this an incident or hazard?
The article describes a planned use of personal data by Meta to train AI models, which implicates AI system development. The advocacy group's threat of injunction and potential damages claims is based on the risk of violation of fundamental rights (privacy and data protection under EU law). Although no harm has yet occurred, the planned use could plausibly lead to violations of rights, qualifying this as an AI Hazard rather than an Incident since the harm is potential and not yet realized.

Meta threatened with injunction over data-use for AI training

2025-05-15
The Hindu
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of AI systems (Meta's generative AI models) and the development/use of these systems with personal data. The advocacy group's legal challenge and threat of injunction indicate a plausible risk of violation of fundamental rights (privacy rights under EU law) if Meta proceeds without proper consent or safeguards. Since the harm is potential and the injunction is sought to prevent it, this fits the definition of an AI Hazard rather than an AI Incident. It is not merely complementary information because the focus is on the potential for harm and legal action to prevent it, not on updates or responses to past incidents. Therefore, the classification is AI Hazard.

"Meta AI non-compliant with GDPR" - Digital rights group menaces Meta with injunction over EU AI training

2025-05-15
TechRadar
Why's our monitor labelling this an incident or hazard?
The article involves an AI system (Meta AI) that is planned to be trained on user data. The dispute concerns the legality and compliance of this AI training with GDPR, focusing on data privacy rights and consent. While no direct harm has occurred yet, the potential for violation of users' rights and subsequent legal consequences is credible and significant. This fits the definition of an AI Hazard, as the development and use of the AI system could plausibly lead to an AI Incident involving violations of fundamental rights under applicable law. There is no indication that harm has already materialized, so it is not an AI Incident. The article is not merely complementary information since it focuses on the potential legal and rights-based risks of the AI training plan. Therefore, the classification is AI Hazard.