Families suing TikTok over AI-recommended "blackout challenge" deaths told their children's data may have been deleted

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Four British families have sued TikTok and ByteDance in the US, alleging the platform’s AI-driven recommendation system promoted a dangerous “blackout challenge” leading to their children’s deaths. They seek account data to investigate, but TikTok’s senior government relations manager says some data may have been deleted and is unavailable.[AI generated]

Why's our monitor labelling this an incident or hazard?

TikTok's content recommendation algorithm is an AI system that influences what content users see. The lawsuit alleges that this AI system deliberately targeted children with harmful content, leading to their deaths. This constitutes direct harm to persons caused by the use of an AI system, fulfilling the criteria for an AI Incident. The harm is realized and severe (death), and the AI system's role is pivotal as per the allegations. Therefore, this event is classified as an AI Incident.[AI generated]
AI principles
Accountability, Transparency & explainability, Safety, Human wellbeing, Respect of human rights, Robustness & digital security, Privacy & data governance

Industries
Media, social platforms, and marketing

Affected stakeholders
Children

Harm types
Physical (death), Psychological

Severity
AI incident

Business function:
Marketing and advertisement

AI system task:
Organisation/recommenders


Articles about this incident or hazard

British families sue TikTok in U.S. over Blackout Challenge children deaths

2025-02-07
Yahoo
Why's our monitor labelling this an incident or hazard?
TikTok's content recommendation algorithm is an AI system that influences what content users see. The lawsuit alleges that this AI system deliberately targeted children with harmful content, leading to their deaths. This constitutes direct harm to persons caused by the use of an AI system, fulfilling the criteria for an AI Incident. The harm is realized and severe (death), and the AI system's role is pivotal as per the allegations. Therefore, this event is classified as an AI Incident.
Lawsuit against TikTok by parents of teenagers in Britain who died because of a challenge - ProtoThema English

2025-02-07
protothemanews.com
Why's our monitor labelling this an incident or hazard?
TikTok uses AI systems for content recommendation and engagement maximization. The lawsuit alleges that these AI-driven design choices created harmful addictions and recommended dangerous content (the blackout challenge), which directly led to the deaths of teenagers. This constitutes harm to persons caused directly or indirectly by the AI system's use. The event is not merely a potential hazard or complementary information but a reported incident with realized harm linked to AI system use.
Data of dead British children may have been deleted, TikTok boss says

2025-02-11
Yahoo! Finance
Why's our monitor labelling this an incident or hazard?
TikTok employs AI systems to recommend videos to users, including children. The lawsuit alleges that these AI-driven recommendations pushed dangerous 'blackout challenge' videos, which contributed to the deaths of four children. This constitutes indirect harm caused by the AI system's use. The deletion of data per legal requirements further complicates accountability and investigation. Given the direct link between AI content recommendation and harm to children, this qualifies as an AI Incident under the framework, specifically harm to persons and potential rights violations.
TikTok sued over deaths of children said to have attempted 'blackout challenge'

2025-02-07
Yahoo! Finance
Why's our monitor labelling this an incident or hazard?
The event describes a direct link between TikTok's AI-powered content recommendation system and the deaths of children who attempted a dangerous challenge promoted on the platform. The AI system's use (content recommendation algorithm) is alleged to have deliberately pushed harmful content to vulnerable users, leading to fatal outcomes. This fits the definition of an AI Incident because the AI system's use directly led to injury or harm to persons. The presence of the AI system is reasonably inferred from the description of TikTok's algorithm targeting users based on age and location to increase engagement. The harm is realized, not just potential, and the event is not merely a governance or complementary update but a legal claim of direct harm caused by AI use.
TikTok sued by parents of UK teens after alleged challenge deaths

2025-02-07
BBC
Why's our monitor labelling this an incident or hazard?
TikTok's content recommendation system is an AI system that personalizes and maximizes user engagement by algorithmically selecting and pushing content. The lawsuit alleges that this AI-driven system created harmful dependencies and exposed children to dangerous challenges, which directly or indirectly led to their deaths. This fits the definition of an AI Incident because the AI system's use has directly or indirectly led to injury or harm to persons. The event is not merely a potential hazard or complementary information but a reported incident involving harm linked to AI system use.
Parents suing TikTok over children's deaths say it has 'no compassion'

2025-02-08
BBC
Why's our monitor labelling this an incident or hazard?
TikTok's content moderation and recommendation systems are AI-driven and are central to the dissemination or suppression of harmful content. The lawsuit alleges that these systems failed to prevent the spread or promotion of dangerous challenges that directly led to children's deaths, constituting injury or harm to persons. The event describes realized harm linked to the AI system's use and failure, meeting the criteria for an AI Incident rather than a hazard or complementary information. The focus is on harm caused by the AI system's malfunction or inadequate use, not just potential future harm or general information.
TikTok sued over deaths of children said to have attempted 'blackout challenge'

2025-02-07
The Guardian
Why's our monitor labelling this an incident or hazard?
The event describes a lawsuit alleging that TikTok's AI-powered recommendation system directly contributed to the deaths of children by promoting dangerous content. The AI system's use is central to the harm, as the algorithm's targeting and promotion of harmful videos is claimed to have caused injury and death. This fits the definition of an AI Incident because the AI system's use has directly led to harm to persons. The involvement of AI is explicit in the claim about TikTok's algorithmic targeting, and the harm is realized and severe.
Data of dead British children may have been deleted, TikTok boss says

2025-02-11
Daily Mail Online
Why's our monitor labelling this an incident or hazard?
TikTok employs AI algorithms for content recommendation and moderation. The wrongful death lawsuit claims that these AI-driven algorithms promoted dangerous content leading to the deaths of children attempting a challenge. Although TikTok denies the challenge was trending and states it bans related content, the incident involves harm to persons (children's deaths) linked to the AI system's use. Therefore, this qualifies as an AI Incident due to indirect harm caused by the AI system's content promotion and moderation functions.
Parents suing TikTok over children's deaths 'want answers'

2025-02-09
Daily Mail Online
Why's our monitor labelling this an incident or hazard?
TikTok's content recommendation system is an AI system that influences what videos users see. The lawsuit alleges that this AI system pushed dangerous 'blackout challenge' videos to children, which directly contributed to their deaths. The involvement of the AI system is in its use (content recommendation), and the harm is direct and severe (children's deaths). This fits the definition of an AI Incident as the AI system's use has directly led to injury and death, a clear harm to persons. The article describes realized harm, not just potential harm, and the AI system's role is pivotal in the chain of events leading to the harm.
TikTok sued for 'viral blackout challenge deaths': Family want answers

2025-02-08
Daily Mail Online
Why's our monitor labelling this an incident or hazard?
The event describes a wrongful death lawsuit against TikTok, alleging that its AI-powered recommendation system pushed harmful challenge videos to children, leading to their deaths. The AI system's use is explicitly implicated in causing harm (fatalities), fulfilling the criteria for an AI Incident. The harm is realized and directly linked to the AI system's operation (content recommendation algorithm). Although TikTok claims to block such content, the lawsuit asserts the algorithm still promoted it, causing injury and death. Hence, this is not a hazard or complementary information but a clear AI Incident.
Parents suing TikTok over children's deaths say it 'has no compassion'

2025-02-08
Yahoo
Why's our monitor labelling this an incident or hazard?
The event explicitly involves the use of AI systems in TikTok's content recommendation and moderation processes, which are central to the allegations. The harms are realized and severe, involving the deaths of children linked to the platform's viral challenge content. The AI system's role is pivotal as the lawsuit claims the platform's algorithmic design and content management directly or indirectly led to these harms. Therefore, this qualifies as an AI Incident under the framework, as the development and use of AI systems on TikTok have directly or indirectly led to significant harm to persons.
Archie Battersbee's family join bereaved parents to sue TikTok after children's deaths

2025-02-07
Yahoo
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses TikTok's algorithm, which is an AI system that recommends content to users. The lawsuit claims that this AI system deliberately pushed dangerous content to children, leading to their injuries and deaths. This is a direct link between the AI system's use and harm to persons, fulfilling the criteria for an AI Incident. The harm is realized (children died), and the AI system's role is pivotal in the chain of events. Therefore, this event is classified as an AI Incident.
TikTok sued by grieving parents who claim their children died when attempting dangerous 'blackout challenge'

2025-02-07
We Got This Covered
Why's our monitor labelling this an incident or hazard?
TikTok's recommendation system is an AI system that curates and promotes content to users based on engagement and other factors. The lawsuit alleges that this AI-driven algorithm recommended the blackout challenge to children, which directly led to their deaths. This constitutes an AI Incident because the AI system's use has directly led to harm to persons (multiple child deaths). The involvement of the AI system is explicit in the court ruling that Section 230 immunity does not apply due to the algorithmic recommendations. Therefore, this is an AI Incident involving harm to health and life caused by the AI system's use.
Parents suing TikTok over children's deaths 'want answers'

2025-02-09
The Independent
Why's our monitor labelling this an incident or hazard?
TikTok's content recommendation system is an AI system that influences what videos users see. The lawsuit claims that this AI system promoted dangerous 'blackout challenge' videos to children, which directly led to injuries and deaths. The involvement of the AI system in causing harm is explicit and direct, fulfilling the criteria for an AI Incident. The event describes realized harm (children's deaths) caused by the AI system's outputs, not just potential harm or general concerns, so it is not an AI Hazard or Complementary Information. The focus is on the harmful use of AI leading to serious injury and death, meeting the definition of an AI Incident.
TikTok sued over deaths of 4 children after viral challenge

2025-02-07
Daily Post Nigeria
Why's our monitor labelling this an incident or hazard?
TikTok's content recommendation algorithms, which are AI systems, are implicated in promoting dangerous viral challenges that led to the deaths of children. The lawsuit alleges that the platform's AI-driven engagement mechanisms foreseeably caused harm by pushing harmful content to vulnerable users. This constitutes direct harm to health (deaths), fulfilling the criteria for an AI Incident. The presence of AI is reasonably inferred from the description of "programming decisions" and "engineered addiction-by-design" aimed at maximizing engagement, which aligns with AI-driven recommendation systems. Hence, the event is classified as an AI Incident.
Parents suing TikTok over children's deaths 'want answers'

2025-02-09
The Independent
Why's our monitor labelling this an incident or hazard?
TikTok uses AI systems to recommend and push content to users, including children. The lawsuit alleges that these AI systems promoted dangerous challenges, leading to the deaths of children. This constitutes an AI Incident because the AI system's use is directly linked to harm (death of children) through the promotion of harmful content. The event describes actual harm caused by the AI system's outputs, not just potential harm or general information, thus qualifying as an AI Incident.
'I'm haunted by time my child asked to get TikTok - six months later she died'

2025-02-09
Mirror
Why's our monitor labelling this an incident or hazard?
TikTok uses AI systems for content moderation and recommendation. The lawsuit alleges that these AI systems failed to prevent the spread of dangerous content that encouraged harmful behavior resulting in the deaths of children. The harm is realized (deaths), and the AI system's malfunction or inadequate use is a contributing factor. This fits the definition of an AI Incident because the AI system's use or malfunction directly or indirectly led to injury or harm to persons.
Parents suing TikTok over children's deaths 'want answers'

2025-02-09
AOL.com
Why's our monitor labelling this an incident or hazard?
TikTok uses AI systems to recommend and promote content to users, including children. The lawsuit claims that these AI-driven recommendations pushed dangerous 'blackout challenge' videos that led to children's deaths. The involvement of AI in content curation and promotion is explicit and central to the alleged harm. The deaths of children constitute injury or harm to persons, fulfilling the criteria for an AI Incident. The event is not merely a potential risk or a complementary update but a direct claim of harm linked to AI system use.
Data of dead British children may have been deleted, TikTok boss says

2025-02-11
AOL.com
Why's our monitor labelling this an incident or hazard?
TikTok employs AI algorithms to recommend and moderate content. The wrongful death lawsuit claims that these AI-driven recommendations pushed harmful challenge videos to children, leading to their deaths. This constitutes indirect harm caused by the AI system's use. The deletion of data due to legal requirements further impacts the ability to investigate, but does not negate the AI system's role. Therefore, this event qualifies as an AI Incident due to the indirect link between the AI system's use and the harm (deaths) caused.
British families sue TikTok in U.S. over Blackout Challenge children deaths - UPI.com

2025-02-07
UPI
Why's our monitor labelling this an incident or hazard?
TikTok employs AI-driven recommendation algorithms that influence what content users see. The Blackout Challenge is a harmful trend that spread on TikTok, and the lawsuit alleges that TikTok's platform contributed to the deaths of children participating in this challenge. The AI system's use in content recommendation and moderation is indirectly linked to the harm (deaths) by enabling or failing to prevent the spread of dangerous content. Therefore, this qualifies as an AI Incident due to indirect harm to persons caused by the AI system's role in content dissemination and moderation.
TikTok sued by UK teen parents over THIS reason

2025-02-07
WION
Why's our monitor labelling this an incident or hazard?
TikTok uses AI-based recommendation systems to curate and push content to users. The lawsuit claims that these AI-driven algorithms promoted harmful challenges, leading to fatal outcomes for children. The harm is direct and severe (deaths), and the AI system's role in causing this harm is pivotal, fulfilling the criteria for an AI Incident. The event is not merely a potential risk but involves realized harm caused by the AI system's use.
Parents sue TikTok over Children's deaths

2025-02-10
Euro Weekly News Spain
Why's our monitor labelling this an incident or hazard?
TikTok uses AI systems to recommend and promote content to users. The lawsuit claims that these AI-driven recommendations promoted dangerous challenges that directly influenced children to attempt harmful acts resulting in death. The harm is realized and directly linked to the AI system's use in content promotion. The event meets the criteria for an AI Incident because the AI system's use has indirectly led to injury or harm to persons. The legal action and calls for accountability further confirm the significance of the harm caused by the AI system's role.
TikTok sued by parents of UK teens after alleged challenge deaths

2025-02-07
Capital FM Kenya
Why's our monitor labelling this an incident or hazard?
TikTok's content recommendation system is AI-based, influencing what videos users see. The viral "blackout challenge" circulated on the platform, and the deaths of teenagers attempting this challenge are linked to the platform's AI-driven content dissemination. Although TikTok claims to block related searches, the harm has already occurred. The AI system's role in promoting or failing to prevent harmful content indirectly led to injury and death, fitting the definition of an AI Incident involving harm to persons.
TikTok 'viral blackout challenge deaths' parents dealt data blow

2025-02-11
Bristol Post
Why's our monitor labelling this an incident or hazard?
TikTok uses AI-driven recommendation algorithms to promote content to users. The lawsuit claims that these algorithms promoted dangerous challenge videos that led to the deaths of children, which constitutes harm to persons. The AI system's role in content curation and moderation is directly linked to the harm, even if the challenge predates TikTok and the company denies the challenge was trending. The deletion of data and legal complexities around data access do not negate the AI system's involvement in the harm. Hence, this is an AI Incident involving indirect harm caused by AI content recommendation and moderation.
Data of dead British children may have been deleted, TikTok boss says

2025-02-11
Belfast Telegraph
Why's our monitor labelling this an incident or hazard?
TikTok employs AI systems for content recommendation and moderation, which can influence user exposure to harmful content. The deaths of children after attempting a challenge on TikTok indicate harm to persons linked to the platform's AI-driven content environment. The deletion of data relevant to these incidents obstructs legal and investigative processes, implicating the AI system's role in the harm and subsequent handling of evidence. This meets the criteria for an AI Incident as the AI system's use has indirectly led to harm and potential violations of rights.
USA: TikTok sued over death of British teens allegedly caused by "blackout challenge" - Business & Human Rights Resource Centre

2025-02-07
Business & Human Rights
Why's our monitor labelling this an incident or hazard?
TikTok uses AI systems to recommend and push content to users, including children, based on their age and location. The lawsuit claims that these AI-driven recommendations promoted dangerous challenge videos, leading to the deaths of four teenagers. This constitutes harm to persons caused directly or indirectly by the AI system's use. The presence of the AI system is reasonably inferred from the description of content being pushed algorithmically to increase engagement. The harm (deaths) has occurred, making this an AI Incident rather than a hazard or complementary information.
Data of dead British children may have been deleted, TikTok boss says

2025-02-11
Jersey Evening Post
Why's our monitor labelling this an incident or hazard?
TikTok's platform uses AI algorithms to recommend and moderate content. The wrongful death lawsuit claims that TikTok's AI-driven content recommendation pushed dangerous challenges to children, indirectly leading to their deaths. The AI system's role in promoting harmful content and the resulting fatalities constitute harm to persons, fulfilling the criteria for an AI Incident. The data deletion issue relates to legal compliance but does not negate the AI system's indirect role in harm. Hence, this event is classified as an AI Incident.
Parents suing TikTok over children's deaths 'want answers'

2025-02-09
Jersey Evening Post
Why's our monitor labelling this an incident or hazard?
TikTok is an AI-driven platform that uses algorithms to recommend and moderate content. The lawsuit alleges that harmful content promoting dangerous challenges was accessible and influential, leading to the deaths of children. This is a direct harm to persons caused by the use of an AI system (content recommendation and moderation algorithms). The involvement of AI in content curation and the resulting fatal harm to children meets the criteria for an AI Incident. The event is not merely a potential risk or a complementary update but a realized harm linked to AI system use.
TikTok sued by parents of UK teens who allegedly died in viral trend

2025-02-07
STV News
Why's our monitor labelling this an incident or hazard?
The event explicitly involves TikTok's algorithm, which is an AI system used to recommend content to users. The lawsuit claims that this AI system deliberately pushed dangerous content to children, leading to their deaths. This constitutes indirect harm caused by the AI system's use. The harm is significant (loss of life), and the AI system's involvement is central to the incident. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information.
Parents of two south Essex children sue TikTok over 'online challenge' deaths

2025-02-07
Essex Echo
Why's our monitor labelling this an incident or hazard?
The event involves an AI system in the form of TikTok's content recommendation algorithms, which are AI-driven systems that influence what videos users see. The lawsuit alleges that these algorithms contributed to the harm by promoting dangerous content, leading to the deaths of children. This constitutes an AI Incident because the AI system's use (content recommendation and moderation) has indirectly led to significant harm to persons (children's deaths). The harm is realized, not just potential, and the AI system's role is pivotal in the chain of events. Therefore, this is classified as an AI Incident.
Parents Take Legal Action Against TikTok Following Tragic Deaths of Teens Linked to Viral Challenge - The Global Herald

2025-02-07
The Global Herald
Why's our monitor labelling this an incident or hazard?
TikTok's platform relies on AI systems for content recommendation and moderation, which are designed to maximize engagement. The lawsuit alleges that this design exposed vulnerable teens to dangerous viral challenges, leading to their deaths. The AI system's role in promoting harmful content and fostering dependencies is a direct contributing factor to the harm. The event involves realized harm (deaths) linked to the AI system's use, meeting the criteria for an AI Incident. The legal action and calls for regulatory changes further underscore the significance of the harm caused by the AI system's operation.
Parents suing TikTok over children's deaths 'want answers'

2025-02-09
Cambridge Independent
Why's our monitor labelling this an incident or hazard?
TikTok employs AI algorithms to recommend videos and moderate content. The lawsuit claims that these algorithms promoted dangerous challenges leading to children's deaths, which is a direct harm to persons. The AI system's role in content promotion and moderation is central to the incident. The harm has already occurred, and the AI system's use is a contributing factor, fulfilling the criteria for an AI Incident. The event is not merely a potential hazard or complementary information but a reported incident involving AI-related harm.
Parents of two Essex children sue TikTok over 'online challenge' deaths

2025-02-08
The Gazette
Why's our monitor labelling this an incident or hazard?
TikTok is an AI-driven platform that uses algorithms to recommend content to users. The lawsuit alleges that these algorithms contributed to the spread and visibility of dangerous "blackout challenge" videos, which led to the deaths of children. The harm (deaths) has already occurred, and the AI system's role in content dissemination and moderation is central to the incident. This fits the definition of an AI Incident, as the AI system's use has indirectly led to harm to persons. The event is not merely a potential hazard or complementary information but a concrete incident involving harm linked to AI system use.
TikTok sued by parents of UK teens after alleged challenge deaths

2025-02-07
Yahoo
Why's our monitor labelling this an incident or hazard?
The event describes actual deaths of children linked to participation in viral challenges promoted or surfaced by TikTok's AI-driven content recommendation system. The lawsuit alleges that the AI system's design caused addictive behavior and exposure to harmful content, which directly contributed to the harm (deaths). This fits the definition of an AI Incident as the AI system's use has directly or indirectly led to injury or harm to persons. The involvement of AI is reasonably inferred from the description of TikTok's programming decisions and engineered addiction-by-design, which refers to AI-driven recommendation algorithms. Hence, the classification is AI Incident.
What is the blackout challenge? British parents sue TikTok over children's deaths

2025-02-11
Yahoo
Why's our monitor labelling this an incident or hazard?
TikTok employs AI systems for content recommendation and moderation. The lawsuit alleges that TikTok's AI-driven algorithms promoted harmful blackout challenge videos to children, which directly or indirectly led to the deaths of four children. The harm is realized (deaths), and the AI system's role in content dissemination is pivotal. Although TikTok denies trending of such content and claims removal efforts, the lawsuit and the described harm meet the criteria for an AI Incident. The event is not merely a hazard or complementary information, as the harm has occurred and is linked to AI system use.
TikTok says 'blackout challenge' data of dead UK children may have been deleted

2025-02-11
South China Morning Post
Why's our monitor labelling this an incident or hazard?
TikTok employs AI systems to recommend and moderate content, which directly influences what users see, including potentially harmful challenges. The deaths of children attempting the 'blackout challenge' are a serious harm to health caused indirectly by the AI system's content curation. The lawsuit and data deletion issues highlight the AI system's role in the incident and the challenges in investigating it. Hence, this qualifies as an AI Incident under the framework, as the AI system's use has indirectly led to harm to persons and raises legal and ethical concerns.
Data of dead British children may have been deleted, TikTok boss says - Liverpool Echo

2025-02-11
Liverpool Echo
Why's our monitor labelling this an incident or hazard?
TikTok employs AI systems for content moderation and recommendation. The deaths of children attempting a dangerous challenge allegedly linked to TikTok content indicate harm to persons indirectly caused by the AI system's use. The lawsuit and discussion about data deletion relate to the AI system's development and use, including its content moderation capabilities. Although TikTok claims to proactively remove harmful content, the tragic outcomes suggest a failure or limitation in the AI system's effectiveness, constituting an AI Incident under the framework. The harm is realized (deaths), and the AI system's role is pivotal in content dissemination and moderation, justifying classification as an AI Incident.
Data of dead British children may have been deleted, TikTok boss says

2025-02-11
Basingstoke Gazette
Why's our monitor labelling this an incident or hazard?
TikTok's content recommendation system is an AI system that influences what videos users see. The lawsuit claims that this AI system pushed harmful content, leading indirectly to the deaths of children attempting the blackout challenge. This constitutes harm to persons caused indirectly by the AI system's use. Additionally, the deletion of data under legal requirements complicates access to evidence, but the core issue remains the AI system's role in promoting harmful content. Therefore, this event qualifies as an AI Incident due to the realized harm linked to the AI system's use.
TikTok: Parents sue the platform after the deaths of their children in challenges | in.gr

2025-02-08
in.gr
Why's our monitor labelling this an incident or hazard?
The article explicitly links the deaths of children to their participation in viral challenges on TikTok, a platform known to use AI systems for content recommendation and moderation. The lawsuit claims that the platform's design and algorithms maximized user engagement by promoting harmful content, which directly led to fatal outcomes. This constitutes direct harm to persons caused by the use of an AI system, meeting the criteria for an AI Incident. The presence of AI is reasonably inferred from TikTok's content recommendation and moderation mechanisms, which are central to the platform's operation and the spread of viral challenges. The harm is realized (deaths), not just potential, so this is not a hazard or complementary information.
Lawsuit against TikTok by parents of teenagers in Britain who died because of a challenge

2025-02-07
tothemaonline.com
Why's our monitor labelling this an incident or hazard?
TikTok employs AI-driven recommendation algorithms to maximize user engagement, which is alleged to have contributed to the exposure of teenagers to harmful challenges resulting in death. The lawsuit claims that the AI system's design fostered addictive behavior and exposure to dangerous content, directly linking the AI system's use to serious harm (fatalities). Therefore, this is an AI Incident involving harm to persons caused by the use of an AI system.
Four families have filed a lawsuit against TikTok over the deaths of their children.

2025-02-08
www.topontiki.gr
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (TikTok's content recommendation and moderation algorithms) whose use has directly led to significant harm—the deaths of children participating in a dangerous challenge promoted or insufficiently blocked by the platform. The lawsuit alleges that the AI-driven design decisions created addictive behaviors and exposure to harmful content, constituting a violation of safety and rights. The harm is realized and severe, meeting the criteria for an AI Incident rather than a hazard or complementary information. The AI system's role is pivotal in the chain of causation leading to the harm.
TikTok: Parents file lawsuits after the deaths of their children in challenges

2025-02-08
ant1news.gr
Why's our monitor labelling this an incident or hazard?
The TikTok platform uses AI systems, specifically recommendation algorithms, to maximize user engagement by curating and promoting content. The parents' lawsuit alleges that these AI-driven systems exposed children to harmful challenges, leading to their deaths. This is a direct harm to persons caused by the use of an AI system's outputs (content recommendations). Therefore, this qualifies as an AI Incident because the AI system's use has directly led to injury or harm to persons (the deaths).
Britain: Parents sue TikTok - their children died because of a challenge

2025-02-09
taxydromos.gr
Why's our monitor labelling this an incident or hazard?
The event involves an AI system implicitly, as TikTok's content recommendation and moderation rely heavily on AI algorithms to promote engagement and filter content. The lawsuit claims that these AI-driven strategies led to addiction and exposure to harmful challenges, resulting in actual deaths. This constitutes an AI Incident because the AI system's use and design indirectly caused harm to individuals (children's deaths) through its role in content dissemination and user engagement. The harm is realized and significant, meeting the criteria for an AI Incident.
Parents sue TikTok after the deaths of their children in challenges

2025-02-08
www.kathimerini.com.cy
Why's our monitor labelling this an incident or hazard?
The TikTok platform uses AI systems to recommend and promote content to users, including viral challenges. The lawsuit alleges that these AI-driven recommendations led to the children engaging in harmful behavior resulting in death, which is a direct harm to persons. The AI system's role in maximizing engagement by promoting harmful content is central to the incident. Therefore, this event meets the criteria for an AI Incident due to indirect causation of harm through AI system use and design.
Parents sue TikTok after the deaths of their children in challenges | Η ΚΑΘΗΜΕΡΙΝΗ

2025-02-07
H Kαθημερινή
Why's our monitor labelling this an incident or hazard?
The TikTok platform uses AI-driven recommendation algorithms to curate and promote content, including viral challenges. The lawsuit alleges that these AI systems created addictive behaviors and exposed children to dangerous challenges, which directly led to their deaths. This is a direct harm to persons caused by the AI system's use, meeting the criteria for an AI Incident. The event involves the use of an AI system (TikTok's recommendation algorithm), the harm is realized (deaths), and the AI system's role is central to the harm. Hence, the classification is AI Incident.