French Families Sue TikTok Over AI-Driven Promotion of Harmful Content to Minors


The information displayed in the AI Incidents Monitor (AIM) should not be reported as representing the official views of the OECD or of its member countries.

In France, 16 families filed a collective complaint against TikTok, alleging its AI-powered recommendation algorithm promoted harmful content—such as suicide, self-harm, and eating disorders—to vulnerable minors. The complaint links the algorithm to several suicides and severe mental health issues among adolescents, prompting legal and criminal investigations.[AI generated]

Why's our monitor labelling this an incident or hazard?

TikTok's platform uses AI systems for personalized content recommendation and continuous scrolling, which are explicitly mentioned as mechanisms causing psychological harm to minors by exposing them to morbid and harmful content. The harm includes mental health deterioration and suicidal behavior, which are direct harms to individuals' health and well-being. The families' legal actions and ongoing investigations further confirm the realized harm linked to the AI system's use. Hence, this event meets the criteria for an AI Incident.[AI generated]
AI principles
Safety; Human wellbeing

Industries
Media, social platforms, and marketing

Affected stakeholders
Children

Harm types
Physical (death); Psychological

Severity
AI incident

Business function:
Marketing and advertisement

AI system task:
Organisation/recommenders


Articles about this incident or hazard

Sixteen families call for the widening of an investigation into the TikTok platform, accused of promoting suicide

2026-05-11
Le Figaro.fr
Why's our monitor labelling this an incident or hazard?
TikTok's platform uses AI systems for personalized content recommendation and continuous scrolling, which are explicitly mentioned as mechanisms causing psychological harm to minors by exposing them to morbid and harmful content. The harm includes mental health deterioration and suicidal behavior, which are direct harms to individuals' health and well-being. The families' legal actions and ongoing investigations further confirm the realized harm linked to the AI system's use. Hence, this event meets the criteria for an AI Incident.

"Exploiting adolescents' vulnerability": sixteen French families file a complaint against the TikTok platform

2026-05-11
Ouest France
Why's our monitor labelling this an incident or hazard?
TikTok uses AI systems for content recommendation and personalization, which can influence user exposure to harmful content. The complaint specifically accuses TikTok of exploiting vulnerabilities through its platform, leading to serious harms including suicides and mental health disorders among adolescents. This constitutes direct or indirect harm caused by the AI system's use. Therefore, this event qualifies as an AI Incident under the OECD framework.

France: TikTok targeted by a complaint for "abuse of weakness" against minors

2026-05-11
Yahoo actualités
Why's our monitor labelling this an incident or hazard?
TikTok's content recommendation algorithm is an AI system that influences what content users see. The complaint alleges that this AI system has promoted harmful content to vulnerable minors, leading to real harm including suicides and severe mental health problems. The involvement of the AI system in causing harm to health and communities fits the definition of an AI Incident. The event describes realized harm linked to the AI system's use, not just potential harm or general information, so it is not an AI Hazard or Complementary Information.

TikTok accused of promoting suicide: 16 families call for the widening of the Paris investigation

2026-05-11
Yahoo actualités
Why's our monitor labelling this an incident or hazard?
TikTok's recommendation algorithm is an AI system that personalizes content feeds to maximize user engagement. The event describes direct harm to minors' mental health and suicidal behavior linked to the AI-driven content exposure. This constitutes a violation of rights and harm to health, fitting the definition of an AI Incident. The involvement of the AI system in causing or facilitating this harm is explicit and central to the complaint and investigation.

TikTok: a collective complaint by 39 people in France after teenage suicides

2026-05-11
Yahoo actualités
Why's our monitor labelling this an incident or hazard?
TikTok's content recommendation system is an AI system that influences what content users see. The complaint alleges that this system exploits minors' vulnerabilities by promoting harmful content, leading to suicides and mental health issues. This is a direct harm to health caused by the AI system's use. The event involves the use of an AI system and describes realized harm (suicides, anorexia, depression) linked to the AI system's operation. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information.

"Mental prisons": sixteen families file a complaint against TikTok, accusing the platform of trapping adolescents in "morbid spirals"

2026-05-11
BFMTV
Why's our monitor labelling this an incident or hazard?
The article explicitly identifies TikTok's algorithm, an AI system that personalizes and recommends content, as causing harm by trapping minors in harmful content spirals. The harms include suicides, depression, and anorexia among adolescents, which are direct injuries to health and well-being. The involvement of the AI system in the development and use phases (recommendation and content delivery) is clear. The legal complaints and ongoing investigations further confirm the recognition of harm caused by the AI system. Therefore, this event meets the criteria for an AI Incident.

A collective complaint in France against TikTok after teenage suicides

2026-05-11
20minutes
Why's our monitor labelling this an incident or hazard?
TikTok's content recommendation and moderation systems are AI systems that influence user experience and exposure to content. The complaint alleges that these AI-driven systems have exploited minors' vulnerabilities, leading to severe mental health harms and suicides, which qualifies as harm to persons. This meets the criteria for an AI Incident because the AI system's use has directly or indirectly led to significant harm to individuals. The event is not merely a policy discussion or general news but involves realized harm linked to AI system use.

FRANCEINFO EXCLUSIVE. "I could see death hovering over my daughter": 16 families file a complaint against TikTok for "abuse of weakness"

2026-05-11
Franceinfo
Why's our monitor labelling this an incident or hazard?
TikTok's recommendation algorithm is an AI system that influences the content users see. The event details direct harm to minors, including suicides and severe mental health issues, linked to the algorithm's promotion of harmful content such as pro-anorexia videos. The families' legal action and the ongoing investigation by authorities confirm that the AI system's use has caused real harm. This fits the definition of an AI Incident because the AI system's use has directly or indirectly led to injury or harm to persons, specifically vulnerable minors.

Collective complaint against TikTok: the platform "manufactures mental prisons for adolescents", says the plaintiff families' lawyer

2026-05-11
Franceinfo
Why's our monitor labelling this an incident or hazard?
TikTok's content recommendation algorithms are AI systems that influence what users see. The complaint alleges that these AI systems have directly or indirectly led to serious harms, including suicides and mental health issues among adolescents, which fits the definition of an AI Incident. The involvement of AI in promoting harmful content and exploiting vulnerabilities is explicit and linked to realized harm, not just potential harm. Therefore, this event qualifies as an AI Incident.

"Videos on how to hang yourself": 16 families file a complaint against TikTok for "abuse of weakness"

2026-05-11
midilibre.fr
Why's our monitor labelling this an incident or hazard?
TikTok's recommendation algorithm is an AI system that personalizes content feeds to maximize user engagement. The complaint alleges that this AI system has directly led to harm by exposing minors to dangerous content, resulting in psychological harm, addiction, and severe health issues such as anorexia and suicidal ideation. The harms described fall under injury or harm to health and harm to communities. The involvement of the AI system is central and causal in these harms, meeting the criteria for an AI Incident. The event is not merely a potential risk or a complementary update but a report of realized harm linked to AI use.

Anorexia, suicidal thoughts... 16 French families file a complaint against TikTok for "abuse of weakness"

2026-05-11
SudOuest.fr
Why's our monitor labelling this an incident or hazard?
TikTok's content recommendation algorithm is an AI system that influences what users see. The complaint alleges that this AI system exploits minors' vulnerabilities, leading to mental health harms and suicides, which are direct injuries to health and life. The involvement of the AI system in causing these harms is explicit and central to the complaint. Hence, this event meets the criteria for an AI Incident.

Suicide, self-harm, anorexia... TikTok targeted by a new complaint from 16 families

2026-05-11
Le Huffington Post
Why's our monitor labelling this an incident or hazard?
The TikTok platform uses an AI-driven recommendation algorithm that influences the content shown to users. The complaint and ongoing investigation highlight that this AI system has directly led to harm, including suicides and mental health deterioration among adolescents, which fits the definition of an AI Incident. The harm is to health and communities, and the AI system's role is pivotal in causing these harms. The article also mentions legal actions and societal responses, but the primary focus is on the realized harm caused by the AI system's use, not just complementary information or potential hazards.

TikTok accused of promoting suicide: 16 families call for the widening of the Paris investigation

2026-05-11
Le Telegramme
Why's our monitor labelling this an incident or hazard?
TikTok's content recommendation system is an AI system that influences what content users see. The article reports that adolescents are being exposed to harmful content promoting suicide and self-harm, which has led to emotional distress and health harm. The families' legal action and ongoing investigation highlight the direct or indirect role of the AI system in causing harm. Therefore, this event qualifies as an AI Incident due to realized harm caused by the AI system's use.

"Digital crack": why have 16 families filed a complaint against TikTok?

2026-05-11
La Voix du Nord
Why's our monitor labelling this an incident or hazard?
TikTok's content recommendation system is an AI system that influences user behavior by curating videos. The complaint and ongoing investigation highlight that the AI algorithm has directly or indirectly led to significant harms, including suicides and mental health issues among minors. This meets the criteria for an AI Incident because the AI system's use has caused injury or harm to persons (harm category a) and harm to communities (category d). The legal action and investigation further confirm the recognized harm linked to the AI system's operation.

"It's almost their best friend": a collective complaint is filed against TikTok for "abuse of weakness"

2026-05-11
La Voix du Nord
Why's our monitor labelling this an incident or hazard?
TikTok employs AI systems to curate and recommend content to users, especially minors. The complaint alleges that this AI-driven content exposure has contributed to serious mental health harms, including anorexia and suicide, which are direct harms to individuals' health and well-being. The involvement of AI in content recommendation and its role in influencing vulnerable users' behavior meets the criteria for an AI Incident, as the harm has occurred and is linked to the AI system's use. The legal action and the described harms confirm the realized impact rather than a potential risk, distinguishing this from a hazard or complementary information.

France: 16 families file a complaint against TikTok after the suicides of five teenage girls

2026-05-11
Anadolu Ajansı
Why's our monitor labelling this an incident or hazard?
TikTok's content recommendation system is an AI system that infers from user inputs and behavior to generate personalized content feeds. The complaint alleges that this AI system's use has contributed to psychological harm and suicides among adolescents, which is a direct or indirect injury to health. The event involves the use of an AI system and realized harm has occurred, meeting the criteria for an AI Incident. The legal action aims to establish responsibility for harm caused by the AI system's outputs, not just potential future harm, so it is not merely a hazard or complementary information.

"I could see death hovering over my daughter": 16 families file a collective complaint for "abuse of weakness" against the Chinese social network TikTok

2026-05-11
Nice-Matin
Why's our monitor labelling this an incident or hazard?
TikTok's content recommendation system is an AI system that influences what users see. The complaint alleges that this AI system's use has directly led to harm to minors' mental health, including suicides, which constitutes injury or harm to persons. The event describes realized harm caused by the AI system's outputs and its exploitation of vulnerable users, meeting the definition of an AI Incident. The legal action and investigation are responses to this harm but do not change the classification of the event itself.

"It's as if I were taking her drugs away": a collective complaint is filed against TikTok for "abuse of weakness"

2026-05-11
LaMontagne.fr
Why's our monitor labelling this an incident or hazard?
TikTok uses AI systems to recommend and moderate content, which is central to the platform's operation. The complaint alleges that these AI-driven mechanisms have exploited vulnerabilities of children, leading to severe mental health harms and suicides. This constitutes direct or indirect harm caused by the use of an AI system. The event involves realized harm (mental health issues and suicides) linked to the AI system's use, meeting the criteria for an AI Incident rather than a hazard or complementary information.

Can TikTok be prosecuted and convicted in France for promoting suicide to adolescents?

2026-05-11
Télérama
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system, namely TikTok's algorithmic recommendation engine that uses continuous scrolling and highly personalized content suggestions to engage users. The harm described includes mental health deterioration and exposure to suicidal content among minors, which constitutes injury or harm to a group of people (adolescents). The AI system's use is directly linked to the harm through its role in promoting addictive and harmful content. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use has directly led to harm to health and well-being of minors.

In France, families file a complaint against TikTok, accused of "abuse of weakness"

2026-05-11
01net
Why's our monitor labelling this an incident or hazard?
The article explicitly implicates TikTok's algorithm, an AI system, in causing direct harm to vulnerable minors by promoting content that leads to mental health crises and suicides. The harm is realized and significant, involving injury to health and violation of rights through abuse of vulnerability. The AI system's role is pivotal in the chain of causation, fulfilling the criteria for an AI Incident. The event is not merely a potential risk or a governance response but documents actual harm linked to AI use.

TikTok targeted by legal proceedings in France: sixteen families have filed a complaint against the social network

2026-05-12
centrepresseaveyron.fr
Why's our monitor labelling this an incident or hazard?
TikTok's content recommendation system is an AI system that personalizes content feeds based on user behavior. The complaint alleges that this AI-driven algorithm has exposed vulnerable adolescents to harmful content, contributing to mental health crises and suicides. This is a direct link between the AI system's use and harm to persons (mental health harm and death). The event involves the use of an AI system leading to realized harm, not just potential harm, thus qualifying as an AI Incident rather than a hazard or complementary information.

TikTok accused of promoting suicide: 16 families ask for the Paris investigation to be widened

2026-05-11
La Croix
Why's our monitor labelling this an incident or hazard?
TikTok's recommendation algorithm is an AI system that personalizes content feeds to maximize user engagement. The complaint and ongoing investigations highlight that this AI system has directly contributed to psychological harm and suicidal behavior among minors by promoting harmful content. The harm is realized and significant, involving injury to health and violation of rights. Hence, this event meets the criteria for an AI Incident rather than a hazard or complementary information.

Anorexia, depression and suicidal thoughts... 16 families file a complaint against TikTok for "abuse of weakness"

2026-05-11
L'Humanité
Why's our monitor labelling this an incident or hazard?
TikTok's recommendation algorithm qualifies as an AI system because it uses complex data processing to personalize content feeds. The complaint alleges that this AI system's use has directly led to significant harm to the health of young users, including mental health deterioration and suicides, which fits the definition of an AI Incident. The harm is realized and severe, involving injury to persons and death, thus meeting the criteria for an AI Incident rather than a hazard or complementary information.

Paris: 16 families file a complaint against TikTok for "abuse of weakness"

2026-05-11
Tunisie Numerique
Why's our monitor labelling this an incident or hazard?
TikTok's content recommendation algorithm is an AI system that influences what content users see. The complaint alleges that this AI system's use has led to direct harm to minors' health and well-being, including suicides and mental health disorders. The involvement of the AI system in causing these harms is explicit and central to the complaint. Therefore, this event qualifies as an AI Incident due to the realized harm caused by the AI system's use.

Sixteen families file a complaint against TikTok for "abuse of weakness": here is what you need to know

2026-05-11
Courrier picard
Why's our monitor labelling this an incident or hazard?
TikTok's content recommendation system is an AI system that influences user behavior and mental health. The complaint alleges direct harm (suicides, mental health issues) linked to the platform's algorithmic content delivery, which exploits vulnerable minors. This meets the criteria for an AI Incident as the AI system's use has directly led to harm to persons and violations of rights. The event is not merely a potential risk or a governance response but reports actual harm and legal action.

Suicide, anorexia, depression... A collective complaint for "abuse of weakness" filed against TikTok

2026-05-11
Corse Matin
Why's our monitor labelling this an incident or hazard?
TikTok's recommendation algorithm is an AI system that curates and suggests content based on user behavior. The complaint alleges that this AI system's use has directly harmed minors' health (mental health and eating disorders), meeting the AI Incident criterion of harm to the health of a person. The involvement arises through the AI system's use (content recommendation) causing direct harm. Therefore, this event qualifies as an AI Incident.

"It's as if I were taking her drugs away": a collective complaint is filed against TikTok for "abuse of weakness"

2026-05-11
lyonne.fr
Why's our monitor labelling this an incident or hazard?
TikTok's content recommendation system is an AI system that influences what content users see. The complaint alleges that this AI system exploits children's vulnerabilities, leading to severe mental health harms and deaths. The harms described (depression, anorexia, suicidal ideation, and suicides) are direct injuries to health caused by the AI system's outputs. The event is not merely a potential risk but reports actual harm linked to the AI system's use. Hence, this qualifies as an AI Incident under the OECD framework.

TikTok accused of abuse of weakness by sixteen French families

2026-05-11
Génération-NT
Why's our monitor labelling this an incident or hazard?
The TikTok recommendation algorithm is an AI system that influences content exposure based on user behavior. The complaint alleges that this AI system has directly led to psychological harm, including suicides and severe mental health issues among minors, which fits the definition of an AI Incident due to injury or harm to health and violation of rights. The ongoing legal and regulatory responses further confirm the seriousness and realized harm caused by the AI system's use.

TikTok targeted by a complaint for abuse of weakness

2026-05-11
Economie Matin
Why's our monitor labelling this an incident or hazard?
The event clearly involves an AI system, namely TikTok's recommendation algorithm, which is alleged to have directly contributed to serious psychological harm and deaths among minors by promoting harmful content. This constitutes direct harm to health and a violation of users' rights. Therefore, this qualifies as an AI Incident under the OECD framework, as the AI system's use has directly led to significant harm to individuals and communities.

France: TikTok targeted by a complaint for "abuse of weakness" against minors

2026-05-11
euronews
Why's our monitor labelling this an incident or hazard?
TikTok's content recommendation algorithm is an AI system that influences what content users see. The complaint alleges that this AI system promotes harmful content to minors, leading to severe mental health issues and suicides. This constitutes direct harm to health and life caused by the AI system's use. The event describes realized harm, not just potential harm, and involves the AI system's use leading to violations of health and safety. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information.

TikTok accused of promoting suicide: 16 families call for the widening of the Paris investigation

2026-05-11
TV5MONDE
Why's our monitor labelling this an incident or hazard?
TikTok's recommendation system is an AI system that uses personalized content feeds to maximize user engagement. The families' accusations and ongoing investigations indicate that the AI system's use has directly led to harm to minors' mental health and exposure to harmful content, fulfilling the criteria for an AI Incident. The harm is realized (hospitalizations, psychological harm), and the AI system's role is pivotal in creating addictive exposure to harmful content. Therefore, this event qualifies as an AI Incident.

"A terrible addiction": a collective complaint for "abuse of weakness" filed against TikTok by sixteen families

2026-05-11
ICI
Why's our monitor labelling this an incident or hazard?
The event explicitly involves TikTok's algorithm, which is an AI system that recommends content based on user behavior. The complaint and investigation focus on how this AI system's use has directly or indirectly caused harm to vulnerable adolescents, including deaths by suicide and severe mental health conditions. This constitutes harm to health and communities, fitting the definition of an AI Incident. The event is not merely a potential risk or a complementary update but reports actual harm linked to the AI system's operation.

In France, 16 families accuse TikTok of promoting suicidal content

2026-05-11
The Media Leader FR
Why's our monitor labelling this an incident or hazard?
TikTok's content recommendation system is an AI system that uses continuous scrolling and personalized algorithms to maximize user engagement. The families' accusations and ongoing investigations indicate that this AI system's use has directly led to harm to minors' mental health, including exposure to suicidal content and psychological distress. The harm is materialized and significant, involving injury to health and harm to communities (vulnerable adolescents). Thus, the event meets the criteria for an AI Incident rather than a hazard or complementary information.

TikTok sued in France over adolescent suicides and mental disorders

2026-05-11
EL MUNDO
Why's our monitor labelling this an incident or hazard?
TikTok's content recommendation system is an AI system that influences what content users see, and the complaint alleges that this system exploits adolescent vulnerabilities, leading to serious mental health harms and suicides. The harms described (suicides, anorexia, depression) are direct harms to health caused or exacerbated by the AI system's outputs. The involvement of AI in content curation and its role in the harm is explicit and central to the complaint. Hence, this event meets the criteria for an AI Incident due to direct or indirect harm to health caused by the AI system's use.

Several families sue TikTok over adolescent suicides and mental disorders

2026-05-11
Libertad Digital
Why's our monitor labelling this an incident or hazard?
TikTok's content recommendation algorithm is an AI system that influences what videos users see to maximize engagement. The complaint alleges that this AI system exploits adolescent psychological vulnerabilities, leading to severe mental health harms including suicides and eating disorders. These harms are direct and materialized, fulfilling the criteria for an AI Incident. The event involves the use of an AI system (the recommendation algorithm), and the harms (mental health issues and suicides) are directly linked to its use. The legal action and ongoing investigation further confirm the seriousness and reality of the harm.

A group of 16 families sue TikTok in France over adolescent suicides and mental disorders

2026-05-11
telecinco
Why's our monitor labelling this an incident or hazard?
TikTok's content recommendation system is an AI system that influences what content users see. The complaint alleges that this system exploited minors' vulnerabilities, leading to serious mental health harms and suicides. The harm is realized and directly linked to the AI system's use, fulfilling the criteria for an AI Incident. The event is not merely a potential risk or a complementary update but a report of actual harm caused by the AI system's operation.

Collective complaint against TikTok in France over suicides and mental disorders among young people

2026-05-11
HERALDO
Why's our monitor labelling this an incident or hazard?
TikTok employs AI-driven recommendation algorithms to personalize content feeds, which can influence user behavior and exposure to harmful content. The complaint alleges that this AI-driven content curation has directly or indirectly led to serious harms including suicides and mental health disorders among adolescents. The presence of an AI system is reasonably inferred from the platform's content recommendation mechanisms. The harms described fall under injury or harm to health and harm to communities. Since the harm is realized and the AI system's role is pivotal, this event meets the criteria for an AI Incident rather than a hazard or complementary information.

French families sue TikTok over adolescent suicides and mental disorders

2026-05-11
Diario de Noticias
Why's our monitor labelling this an incident or hazard?
TikTok's content recommendation system is an AI system that influences what content users see. The reported harms (suicides, mental health disorders) are directly linked to the use of this AI system, as the platform allegedly exploits adolescent vulnerability through its AI-driven content curation. The event involves realized harm to persons caused indirectly by the AI system's use, meeting the criteria for an AI Incident rather than a hazard or complementary information. The legal complaint and ongoing investigation further support the classification as an incident.

Several families in France sue TikTok over adolescent suicides and mental disorders

2026-05-11
Diario de Navarra
Why's our monitor labelling this an incident or hazard?
TikTok's content recommendation system is an AI system that influences what content users see, and the complaint alleges that this system exploits adolescent vulnerability, leading to serious mental health harms and suicides. The involvement of AI is reasonably inferred from the platform's known use of AI for content curation and recommendation. The harms are realized and severe, including deaths and mental health disorders, fulfilling the criteria for an AI Incident. The event is not merely a potential risk or a complementary update but a direct allegation of harm caused by AI-driven content exposure.

16 families in France sue TikTok over adolescent suicides and mental disorders

2026-05-11
Metro
Why's our monitor labelling this an incident or hazard?
TikTok's AI-powered recommendation system is central to the platform's operation and content exposure. The allegations claim that the AI system's content curation has exploited adolescent vulnerabilities, leading to serious mental health harms and suicides. This constitutes indirect harm caused by the AI system's use. The ongoing investigation and legal action confirm that harm has occurred, meeting the criteria for an AI Incident under the framework, specifically harm to health (a).