Meta Faces Landmark Trial Over AI Algorithms' Harm to Children in New Mexico


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Meta is on trial in New Mexico, accused of misleading users about children's safety on its platforms. Prosecutors allege Meta's AI-driven algorithms promoted harmful and addictive content to minors, prioritizing profits over safety and violating consumer protection laws. Jury deliberations follow extensive testimony on the algorithms' impact.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves an AI system insofar as Meta's social media platforms use AI-driven algorithms for content recommendation and user engagement, which are central to the allegations of harm to teens' mental health and safety. The harm described (mental health damage and risk of sexual exploitation) falls under harm to groups of people. Since the harm is alleged to have occurred and is the subject of a legal case, this qualifies as an AI Incident. The event concerns harm already attributed to the use of AI-enabled social media systems, not merely potential or future harm, complementary information, or unrelated news.[AI generated]
AI principles
Human wellbeing, Safety

Industries
Media, social platforms, and marketing

Affected stakeholders
Children

Harm types
Psychological

Severity
AI incident

Business function
Marketing and advertisement

AI system task
Organisation/recommenders


Articles about this incident or hazard


New Mexico, Meta make closing arguments to jury over case alleging harm to state's youth

2026-03-23
Yahoo
Why's our monitor labelling this an incident or hazard?
The event involves an AI system insofar as Meta's social media platforms use AI-driven algorithms for content recommendation and user engagement, which are central to the allegations of harm to teens' mental health and safety. The harm described (mental health damage and risk of sexual exploitation) falls under harm to groups of people. Since the harm is alleged to have occurred and is the subject of a legal case, this qualifies as an AI Incident. The event focuses on the use and impact of AI-enabled social media systems leading to harm, not just potential or future harm, nor is it merely complementary information or unrelated news.

U.S. Jury begins deliberations in landmark New Mexico trial over Meta risk to children

2026-03-24
The Hindu
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems in the form of complex algorithms used by Meta to recommend content on social media platforms. The harm is to children who are exposed to addictive and harmful content, which affects their health and well-being. The trial focuses on whether Meta prioritized engagement and growth over children's safety, with evidence that the AI algorithms played a pivotal role in pushing harmful content. This meets the criteria for an AI Incident as the AI system's use has directly or indirectly led to harm. The event is not merely a hazard or complementary information but a concrete legal case addressing realized harm linked to AI system use.

Landmark trial in New Mexico to decide whether Meta misled users about children's safety risks

2026-03-23
mint
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses Meta's complex algorithms, which are AI systems, and their role in recommending harmful content to children, leading to addiction and exposure to harmful material. This constitutes harm to health and communities, fulfilling the criteria for an AI Incident. The trial focuses on whether Meta's use of these AI systems violated consumer protection laws by prioritizing engagement over children's safety, indicating direct or indirect harm caused by the AI system's outputs. The presence of harm is realized and under legal scrutiny, distinguishing this from a mere hazard or complementary information.

Landmark trial in New Mexico to decide whether Meta misled users about children's safety risks

2026-03-23
KPRC
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems in the form of Meta's complex algorithms that recommend content to users, including children. The alleged harm includes the promotion of harmful and addictive content to minors, which constitutes harm to health and communities. The trial concerns whether Meta's use of these AI systems has directly or indirectly led to these harms. Since the harm is ongoing and the trial is about past and current use of AI systems causing harm, this qualifies as an AI Incident rather than a hazard or complementary information. The event is not merely about policy or research updates but about a concrete legal case addressing realized harms linked to AI system use.

Jury begins deliberations in landmark New Mexico trial over children's safety risks on Meta

2026-03-24
Newsday
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems in the form of Meta's complex algorithms that recommend content to users, including children. The allegations and evidence presented indicate that these AI systems have directly or indirectly led to harm to children's safety and mental health, fulfilling the criteria for an AI Incident. The trial focuses on the use and impact of these AI systems, with harm already realized and under legal scrutiny. Therefore, the event is best classified as an AI Incident rather than a hazard or complementary information.

Jury begins deliberations in landmark New Mexico trial over children's safety risks on Meta

2026-03-24
The Manila Times
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses Meta's algorithms recommending harmful and sensational content to teenagers, which has led to real harm to children and communities. The AI system (algorithms) is central to the allegations and the harm caused. The harm is not hypothetical but is the subject of a legal trial with evidence and testimonies. This fits the definition of an AI Incident, as the AI system's use has directly or indirectly led to harm (harm to health and communities). The event is not merely a potential risk or a complementary update but a concrete incident under judicial consideration.

Jury begins deliberations in landmark New Mexico trial over children's safety risks on Meta

2026-03-24
WPLG
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems in the form of complex algorithms used by Meta to recommend content on social media platforms. The harm described includes exposure of children to harmful and addictive content, which is a violation of consumer protection laws and constitutes harm to a vulnerable group. The trial focuses on the development and use of these AI systems and their failure to adequately protect children, leading to real and ongoing harm. This fits the definition of an AI Incident as the AI system's use has directly or indirectly led to harm to people (children).

Jury begins deliberations in landmark New Mexico trial over children's safety risks on Meta

2026-03-23
WHDH 7 Boston
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems in the form of Meta's complex algorithms that recommend content to users, including children. The harm is linked to these AI systems promoting harmful and addictive content, which has affected children's safety and well-being, constituting harm to a group of people. The trial is a direct consequence of these harms, making this an AI Incident. The article does not merely discuss potential harm or future risks but focuses on harm that has occurred and is being legally addressed. Therefore, the classification is AI Incident.

Closing arguments wrap up in Meta trial

2026-03-24
KOB 4
Why's our monitor labelling this an incident or hazard?
The article involves AI systems indirectly through Meta's safety mechanisms (age verification, content moderation) which likely use AI. The trial concerns alleged harms to children and whether Meta's AI-related safety systems are sufficient. However, the article does not report a confirmed AI Incident (harm caused by AI system malfunction or misuse) or an AI Hazard (plausible future harm). Instead, it reports on legal proceedings and potential fines, which are governance responses to prior concerns. Thus, it fits the definition of Complementary Information rather than an Incident or Hazard.

Meta accused of profits over safety at trial

2026-03-23
The Columbian
Why's our monitor labelling this an incident or hazard?
The article centers on a trial accusing Meta of misleading users about platform safety, with specific concerns about complex algorithms affecting children. These algorithms can be reasonably inferred as AI systems. However, the article does not report a confirmed AI Incident (realized harm directly or indirectly caused by AI) or an AI Hazard (plausible future harm) but rather focuses on the legal process and societal response. Thus, it fits the definition of Complementary Information, providing important context and updates on AI-related governance and societal reactions without describing a new incident or hazard.

Jury begins deliberations in landmark New Mexico trial over children's safety risks on Meta

2026-03-23
www.weny.com
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems in the form of complex algorithms used by Meta's platforms to recommend content. The harm alleged is to children's safety and well-being, including exposure to harmful and addictive content, which fits the harm to health and communities categories. The AI system's use is central to the alleged harm, and the trial is about whether Meta's AI-driven practices caused or contributed to these harms. Therefore, this is an AI Incident rather than a hazard or complementary information, as the harm is claimed to have occurred and is under legal scrutiny.

Landmark trial in New Mexico to decide whether Meta misled users about children's safety risks

2026-03-23
Idaho State Journal
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems in the form of Meta's complex recommendation algorithms that have been shown to promote harmful and addictive content to children, leading to health and community harms. The trial focuses on whether Meta misled users about these risks and failed to protect children, indicating that harm has occurred. The AI system's role is pivotal in causing these harms through content recommendation and age enforcement failures. Thus, this is an AI Incident, not merely a hazard or complementary information, as the harm is materialized and central to the case.

Landmark trial in New Mexico to decide whether Meta misled users about children's safety risks

2026-03-23
Owensboro Messenger-Inquirer
Why's our monitor labelling this an incident or hazard?
While Meta's social media platforms almost certainly involve AI systems (e.g., recommendation algorithms, content moderation AI), the article centers on a trial about alleged misleading information regarding safety, not on a concrete AI system malfunction or harm caused by AI. There is no explicit or implicit description of an AI incident or hazard occurring or plausibly occurring. The event is primarily about a legal process and allegations, which fits best as Complementary Information related to AI governance and societal response.

Meta on Trial: New Mexico Case Challenges Social Media Safety

2026-03-23
Devdiscourse
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions Meta's algorithms recommending harmful content to minors, which is an AI system influencing user experience and safety. The harm is realized or at least strongly alleged, as the prosecution claims violations of consumer protection laws due to these algorithms. This fits the definition of an AI Incident because the AI system's use has directly or indirectly led to harm (to children and potentially to their rights). The event is not merely a potential hazard or complementary information but a concrete legal challenge based on alleged harm caused by AI systems in operation.

Landmark trial in New Mexico to decide whether Meta misled users about children's safety risks

2026-03-23
KULR-8 Local News
Why's our monitor labelling this an incident or hazard?
The article centers on a trial addressing alleged harms caused by AI-driven algorithms on Meta's platforms, specifically their impact on children's safety. The AI system (algorithms) is implicated in causing harm (addiction, exposure to harmful content), which aligns with the definition of an AI Incident. However, since the article reports on the trial proceedings and legal arguments rather than a new or ongoing incident, it serves as complementary information about an existing AI Incident. The main focus is on the legal and societal response, not on a new AI Incident or AI Hazard itself.

Landmark trial in New Mexico to decide whether Meta misled users about children's safety risks

2026-03-23
KSL NewsRadio
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems in the form of Meta's complex recommendation algorithms that have directly led to harm by promoting harmful and addictive content to children. The harm includes health and safety risks to minors and violations of consumer protection laws. The trial is about these realized harms and Meta's alleged misleading disclosures about them. Therefore, this is an AI Incident because the AI system's use has directly contributed to significant harm, and the event centers on addressing these harms through legal action.