Meta Faces Lawsuit in Massachusetts Over AI-Driven Social Media Addiction in Youth


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Meta Platforms must face a lawsuit in Massachusetts alleging that AI-driven features on Instagram and Facebook deliberately foster addiction and mental health harm in young users. The court rejected Meta's federal immunity claims, highlighting the role of AI algorithms in causing harm to adolescents.[AI generated]

Why's our monitor labelling this an incident or hazard?

Meta's social media platforms use AI systems to drive engagement through features like endless scrolling, notifications, and likes, which are designed to maximize user attention. The lawsuits allege that these AI-driven features have caused addiction and psychological harm to adolescents, constituting injury or harm to health. The involvement of AI in the design and operation of these platforms is explicit and central to the harm claims. Hence, this qualifies as an AI Incident because the AI system's use has directly or indirectly led to harm to a group of people (young users).[AI generated]
AI principles
Human wellbeing
Democracy & human autonomy

Industries
Media, social platforms, and marketing

Affected stakeholders
Children

Harm types
Psychological

Severity
AI incident

Business function:
Marketing and advertisement

AI system task:
Organisation/recommenders


Articles about this incident or hazard


Meta must face trial in Massachusetts over addiction in young users

2026-04-10
uol.com.br
Why's our monitor labelling this an incident or hazard?
Meta's social media platforms use AI systems to drive engagement through features like endless scrolling, notifications, and likes, which are designed to maximize user attention. The lawsuits allege that these AI-driven features have caused addiction and psychological harm to adolescents, constituting injury or harm to health. The involvement of AI in the design and operation of these platforms is explicit and central to the harm claims. Hence, this qualifies as an AI Incident because the AI system's use has directly or indirectly led to harm to a group of people (young users).

Meta must face trial in Massachusetts over addiction in young users

2026-04-10
Terra
Why's our monitor labelling this an incident or hazard?
Meta's social media platforms employ AI systems to personalize content and engagement features that have been alleged to cause addiction and psychological harm to young users. The lawsuit claims that these AI-driven features were deliberately designed to exploit vulnerabilities, leading to real harm. This fits the definition of an AI Incident as the AI system's use has directly led to harm to a group of people (harm to health and well-being of young users). The event is not merely a hazard or complementary information but a concrete legal case alleging realized harm caused by AI system design and use.

Meta will face lawsuit for addicting children to Instagram, court rules

2026-04-10
Olhar Digital
Why's our monitor labelling this an incident or hazard?
The Instagram platform uses AI-driven features to influence user behavior, and the lawsuit alleges these features have directly led to mental health harm in children. The court's decision to allow the case to proceed indicates recognition of the AI system's role in causing harm. This meets the criteria for an AI Incident because the AI system's use has directly or indirectly led to injury or harm to a group of people (children's mental health).

Meta must face trial in Massachusetts over addiction in young users

2026-04-10
Forbes Brasil
Why's our monitor labelling this an incident or hazard?
The article details lawsuits accusing Meta of creating addictive social media features that harm young users' mental health. These features, such as push notifications, likes, and endless scrolling, are driven by AI algorithms designed to maximize user engagement. The harm to adolescents' psychological health is a direct consequence of these AI-powered systems. The legal actions and judgments recognize this harm, fulfilling the criteria for an AI Incident involving injury or harm to a group of people. The AI system's use in the platforms' design and operation is central to the harm caused, justifying classification as an AI Incident.

Meta must face Massachusetts attorney general's lawsuit over youth addiction, court rules

2026-04-10
Valor Econômico
Why's our monitor labelling this an incident or hazard?
Instagram uses AI-based algorithms to optimize user engagement, including features like infinite scroll and push notifications, which are alleged to exploit psychological vulnerabilities of young users, causing addiction and mental health harm. The lawsuit targets the company's conduct in designing these AI-driven features, which directly or indirectly led to harm. This fits the definition of an AI Incident because the AI system's use has directly or indirectly led to harm to a group of people (children and adolescents).