Arkansas Sues YouTube Over AI-Driven Addiction and Youth Mental Health Harm

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Arkansas has sued YouTube, Google, and Alphabet, alleging that YouTube's AI-powered recommendation algorithms are deliberately designed to be addictive, causing mental health issues among youth. The lawsuit claims this has led to increased state spending on mental health services, while the companies deny the allegations.[AI generated]

Why's our monitor labelling this an incident or hazard?

YouTube's recommendation system is an AI system that influences user engagement by steering users towards certain content. The lawsuit claims this system amplifies harmful material and drives addictive behavior among youth, leading to mental health harms and exposure to harmful content. These harms fall under injury or harm to health and harm to communities. The AI system's use is directly linked to these harms, fulfilling the criteria for an AI Incident. The event is not merely a potential risk or a governance response but describes ongoing harm caused by the AI system's operation.[AI generated]
AI principles
Human wellbeing
Safety
Transparency & explainability
Accountability
Democracy & human autonomy
Respect of human rights
Privacy & data governance

Industries
Media, social platforms, and marketing
Government, security, and defence
Healthcare, drugs, and biotechnology

Affected stakeholders
Children
Government

Harm types
Psychological
Economic/Property
Public interest

Severity
AI incident

Business function:
Marketing and advertisement

AI system task:
Organisation/recommenders


Articles about this incident or hazard

Arkansas sues YouTube over claims that the site is fueling a mental...

2024-09-30
Daily Mail Online
Arkansas Lawsuit: Google's YouTube Fuels Youth Mental Health Crisis

2024-10-02
Breitbart
Why's our monitor labelling this an incident or hazard?
YouTube's content recommendation algorithms are AI systems that infer from user behavior to generate outputs (recommended videos) influencing user engagement. The lawsuit alleges these AI-driven systems deliberately amplify harmful content and foster addiction, leading to mental health harms among youth, which is a direct harm to a group of people. The involvement of AI in causing this harm is central to the complaint. Therefore, this event meets the criteria for an AI Incident.
This US state is suing YouTube for 'fueling a mental health crisis'

2024-10-01
Hindustan Times
Why's our monitor labelling this an incident or hazard?
YouTube's recommendation algorithms are AI systems that influence user engagement by selecting and promoting content. The lawsuit claims these algorithms amplify harmful material and contribute to youth mental health problems, which is a direct harm to health and communities. The AI system's use is central to the alleged harm, fulfilling the criteria for an AI Incident. The event is not merely a potential risk but an ongoing harm as per the lawsuit's claims, and thus it is not an AI Hazard or Complementary Information.
US state sues YouTube for harming mental health of young adults, causing 'brain rot'

2024-10-01
Firstpost
Why's our monitor labelling this an incident or hazard?
YouTube's recommendation system is an AI system that infers from user behavior to generate content suggestions. The lawsuit alleges that these AI-driven algorithms push addictive and harmful content to young users, leading to mental health harms. This fits the definition of an AI Incident because the AI system's use has directly led to harm to a group of people (mental health issues among youth). The event is not merely a potential risk or a general policy discussion but a legal claim of realized harm caused by the AI system's outputs and design.
World News | Arkansas Sues YouTube over Claims Site Fuelling Mental Health Crisis | LatestLY

2024-09-30
LatestLY
Why's our monitor labelling this an incident or hazard?
YouTube's recommendation system is an AI system that infers from user input and behavior to generate content recommendations. The lawsuit alleges that this AI system's use has directly or indirectly led to harm to the health of a group of people (youth in Arkansas), fulfilling the criteria for an AI Incident. The harm is mental health deterioration among youth, linked to the platform's addictive design and algorithmic amplification of harmful content. Therefore, this event qualifies as an AI Incident.
Arkansas sues YouTube over claims that the site is fueling a mental health crisis

2024-09-30
Financial Post
Why's our monitor labelling this an incident or hazard?
The article discusses a lawsuit claiming that YouTube's AI-powered recommendation system amplifies harmful content affecting youth mental health, which implies indirect harm through AI use. However, the harm is alleged and part of ongoing legal and societal debate, not a confirmed incident with direct causation. The article focuses on the legal challenge and responses rather than documenting a realized AI Incident or a plausible future hazard. Thus, it fits the definition of Complementary Information, providing important context on governance and societal reactions to AI-related harms.
Arkansas AG Says YouTube Addicts And Harms Youth Users - Law360

2024-10-01
law360.com
Why's our monitor labelling this an incident or hazard?
YouTube's recommendation system is an AI system that influences what content users see. The lawsuit claims that this AI-driven design causes addiction and exposure to harmful content, resulting in mental health harm to youth. This is a direct harm caused by the use of an AI system, meeting the criteria for an AI Incident under the framework.
USA: Arkansas sues YouTube over alleged negative impact on youth mental health - Business & Human Rights Resource Centre

2024-10-04
Business & Human Rights
Why's our monitor labelling this an incident or hazard?
YouTube's recommendation system is an AI system that influences what content users see, and the lawsuit claims this system is designed to be addictive, causing mental health harm to youth. The harm is indirect but significant, as it has led to increased mental health issues and state expenditures. Therefore, this event qualifies as an AI Incident due to the AI system's use leading to harm to health.
Arkansas sues YouTube over claims that the site is fueling a mental health crisis - KION546

2024-09-30
KION546
Why's our monitor labelling this an incident or hazard?
The article describes a legal action against YouTube for alleged harm related to mental health impacts on youth, which is linked to the platform's addictive nature. YouTube's recommendation algorithms are AI systems that influence user engagement, but the article does not specify that the AI system's development, use, or malfunction directly or indirectly caused the harm. The lawsuit is a societal and legal response to concerns about AI-driven platform effects rather than a report of an AI Incident or Hazard. Thus, it fits the definition of Complementary Information as it provides context on governance and societal reactions to AI-related issues without describing a specific AI Incident or Hazard.
Arkansas sues Google & YouTube, alleging addictive platforms harm teens

2024-09-30
KHBS/KHOG Channel 40/29
Why's our monitor labelling this an incident or hazard?
The lawsuit explicitly alleges that the platforms use algorithms to create addictive experiences, which is a direct involvement of AI systems in causing harm to the mental health of teenagers. This fits the definition of an AI Incident because the AI system's use has directly or indirectly led to harm to a group of people. The harm is realized (mental health issues), and the AI system's role is pivotal as per the lawsuit's claims. Therefore, this event is classified as an AI Incident.
The US state of Arkansas sues YouTube over claims it worsens mental health problems

2024-10-01
CNN Türk
Why's our monitor labelling this an incident or hazard?
While YouTube likely uses AI systems for content recommendation and moderation, the article does not explicitly link AI system development, use, or malfunction to the alleged harms. The lawsuit concerns the platform's addictive nature and its societal effects, which are indirect and not clearly attributable to AI system failures or misuse. The event is primarily about a legal action and public policy response to perceived harms associated with social media platforms, fitting the definition of Complementary Information rather than an AI Incident or Hazard.
Lawsuit against YouTube over claims it worsens mental health problems!

2024-10-01
T24
Why's our monitor labelling this an incident or hazard?
YouTube's recommendation algorithms are AI systems that influence content exposure and user engagement. The lawsuit alleges that these AI-driven mechanisms cause addiction and increased mental health problems among young users, which is a direct harm to health. The involvement of AI in content curation and its impact on mental health fits the definition of an AI Incident, as the AI system's use has directly or indirectly led to harm. The event is not merely a potential risk or a complementary update but a legal claim of realized harm linked to AI system use.
YouTube sued for harming young people's mental health

2024-10-01
Diken
Why's our monitor labelling this an incident or hazard?
YouTube employs AI systems for content recommendation and personalization, which are central to user engagement and can contribute to addictive behaviors and mental health problems. The lawsuit explicitly links the platform's design to harmful effects on youth mental health, indicating realized harm. The AI system's role is pivotal, as it drives content exposure and engagement patterns. Hence, this event meets the criteria for an AI Incident due to indirect harm to health caused by AI system use.
YouTube sued in the US: it is harming mental health!

2024-10-01
Haber7
Why's our monitor labelling this an incident or hazard?
The lawsuit explicitly alleges that YouTube's platform, which relies on AI-powered recommendation systems to amplify content and drive user engagement, has directly led to increased mental health problems among young people in Arkansas. This is a direct harm to health caused by the use of an AI system. The case is about realized harm, not just potential harm, so it qualifies as an AI Incident rather than an AI Hazard. The event is not merely a policy update or research finding, so it is not Complementary Information. It is clearly related to AI systems and their harmful impact, so it is not Unrelated.