AI System Used in Germany to Detect and Remove Harmful Online Content for Youth Protection

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

The Landesanstalt für Kommunikation (LFK) in Baden-Württemberg, Germany, uses an AI-powered tool to systematically detect and flag harmful online content, such as hate speech, violence, and pornography, to protect children and adolescents. Human experts review flagged content and coordinate with platforms for removal, enhancing youth protection online. [AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves an AI system explicitly mentioned as being used to detect harmful online content that affects children and adolescents, a vulnerable group. The AI system's outputs lead to content removal and to the reporting of illegal content, directly addressing harm to youth development and safety. Because the system's use has already resulted in the identification and removal of harmful content, the event describes realized harm mitigation and fits the definition of an AI Incident. The article does not merely discuss potential risks or future harms; it describes the ongoing use and impact of the AI system in reducing harm, ruling out classification as a hazard or as complementary information. [AI generated]
Industries
Government, security, and defence
Media, social platforms, and marketing

Severity
AI incident

Business function:
Monitoring and quality control

AI system task:
Recognition/object detection


Articles about this incident or hazard

Baden-Württemberg: How AI helps remove content harmful to minors from the internet

2026-03-05
N-tv

How AI helps remove content harmful to minors from the internet - WELT

2026-03-05
DIE WELT
Why's our monitor labelling this an incident or hazard?
The AI system is explicitly mentioned as being used to automatically find potentially harmful content that could negatively affect children and adolescents. The harm in question is the exposure of minors to harmful media content, a recognized form of harm to health and development. The AI system's outputs lead to content removal or law enforcement action, directly mitigating this harm. Therefore, this event qualifies as an AI Incident because the AI system's use is directly linked to addressing and preventing harm to a vulnerable group, fulfilling the criterion of harm to persons (children and youth).

Youth protection in the media: How AI helps remove content harmful to minors from the internet

2026-03-05
ZEIT ONLINE
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of an AI system to detect harmful content online, indicating AI system involvement. However, the AI system is used as a tool to identify and remove harmful content, not causing harm itself. The harms described (hate speech, violent content, pornography) are existing societal issues, and the AI system's role is to help mitigate these harms. There is no indication that the AI system malfunctioned or caused any injury, rights violation, or other harm. Instead, the article focuses on the regulatory and operational context, the challenges faced, and complementary initiatives to protect youth. Thus, it fits the definition of Complementary Information rather than an AI Incident or AI Hazard.

How AI helps remove content harmful to minors from the internet

2026-03-05
stern.de
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems to detect harmful content online, which is a positive application aimed at preventing harm to youth. There is no indication that the AI system caused harm or malfunctioned; rather, it is used to mitigate harm. Therefore, this is not an AI Incident or AI Hazard. The article provides information about the use of AI in a societal/governance context to address harmful content, which fits the definition of Complementary Information as it supports understanding of AI's role in managing online harms.

Landesanstalt für Kommunikation: How AI is meant to help with youth protection on the internet

2026-03-05
stuttgarter-nachrichten.de
Why's our monitor labelling this an incident or hazard?
The presence of an AI system is clear, as the article discusses an AI tool used to detect harmful online content. However, there is no indication that the AI system has caused any harm or malfunctioned. The AI's role is supportive and combined with human review, aiming to protect vulnerable groups. The article highlights legal and operational challenges but does not describe realized or imminent harm. Thus, it does not meet the criteria for an AI Incident or AI Hazard. Instead, it provides valuable information about AI's role in regulatory and protective functions, fitting the definition of Complementary Information.

How AI helps remove content harmful to minors from the internet

2026-03-05
Reutlinger General-Anzeiger
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system used to detect harmful content online, which qualifies as an AI system under the definitions. However, the AI system is employed to prevent harm (youth-endangering content) rather than causing harm. There is no indication that the AI system malfunctioned or led to injury, rights violations, or other harms. The article focuses on the AI system's role in supporting regulatory efforts and the challenges faced, which fits the definition of Complementary Information. It provides context and updates on AI's societal use in content moderation and youth protection without describing a new incident or hazard.

Landesanstalt für Kommunikation: How AI is meant to help with youth protection on the internet - Esslinger Zeitung

2026-03-05
Eßlinger Zeitung
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system used to detect harmful content online, which is then reviewed and acted upon by human authorities. The AI system is used as a tool to prevent harm to children and adolescents, a vulnerable group, by identifying content that could impair their development. There is no report of harm caused by the AI system itself, nor any malfunction or misuse leading to harm. Instead, the article focuses on the regulatory and protective measures involving AI, including legal frameworks and educational initiatives. This aligns with the definition of Complementary Information, as it details societal and governance responses to AI use in youth protection, rather than describing an AI Incident or AI Hazard.

How AI helps remove content harmful to minors from the internet

2026-03-05
Heidenheimer Zeitung
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as being used to detect harmful content online. The system's use directly leads to the removal of content that could harm the health and development of children and adolescents, which fits the definition of an AI Incident. The article details realized harm (the presence of harmful content) and the AI's role in addressing it, not just potential harm. The involvement lies in the use of the AI system for content moderation and youth protection. Hence, this is an AI Incident rather than a hazard or complementary information.

Baden-Württemberg: Who pursues thousands of youth-protection violations on the internet

2026-03-05
N-tv
Why's our monitor labelling this an incident or hazard?
An AI system is explicitly mentioned as being used to automatically detect potential youth-protection violations online. The AI's outputs lead to manual review and subsequent legal action, including removal of content and forwarding of cases to law enforcement. The system's use directly contributes to identifying content that violates laws and youth-protection regulations, playing a pivotal role in addressing breaches of legal obligations and in protecting fundamental rights. Because the system's use has directly led to the identification and removal of harmful content, this qualifies as an AI Incident under the framework, specifically under violations of human rights or breach of legal obligations (c).

Who pursues thousands of youth-protection violations on the internet - WELT

2026-03-05
DIE WELT
Why's our monitor labelling this an incident or hazard?
An AI system is explicitly involved in scanning for and identifying potentially harmful content online, which relates to violations of protections intended for youth (a form of harm to communities and potentially to human rights). However, the article does not describe any direct or indirect harm caused by the AI system itself; rather, the AI is used as a tool to detect and mitigate harm. Therefore, this event does not describe an AI Incident or AI Hazard but rather provides complementary information about AI's role in societal governance and enforcement related to youth protection online.

Internet: Who pursues thousands of youth-protection violations on the internet

2026-03-05
ZEIT ONLINE
Why's our monitor labelling this an incident or hazard?
An AI system is explicitly used to scan and identify potential youth protection violations online, including violent, pornographic, and hateful content. The AI's outputs lead directly to harm mitigation by enabling the removal of harmful content and forwarding cases to law enforcement. The harms addressed include protection of youth from harmful content and prevention of hate speech and extremist propaganda, which are violations of rights and harm to communities. The AI system's involvement is in its use for content detection and classification, directly leading to harm reduction. Hence, this is an AI Incident as the AI system's use has directly led to addressing harms defined in the framework.

AI scans the internet in Baden-Württemberg for violence and pornography

2026-03-05
TAG24
Why's our monitor labelling this an incident or hazard?
The AI system is involved in automated detection of harmful content, which is then manually verified and acted upon. There is no indication that the AI system caused harm or that harm resulted from its malfunction or misuse. The event focuses on the AI system's use as a tool for identifying illegal content and supporting legal enforcement, which is a positive application rather than a source of harm. Therefore, this event is best classified as Complementary Information, as it provides context on AI use in content moderation and law enforcement without describing an AI Incident or Hazard.

Who pursues thousands of youth-protection violations on the internet

2026-03-05
Reutlinger General-Anzeiger
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system to detect harmful online content related to youth protection, including violent, pornographic, and hate speech content. The AI system's outputs have directly led to the identification and removal of harmful content and the forwarding of cases to law enforcement, indicating realized harm to communities and violations of legal protections. Therefore, this qualifies as an AI Incident because the AI system's use has directly contributed to addressing harms related to youth protection and extremist content online.

Who pursues thousands of youth-protection violations on the internet

2026-03-05
Heidenheimer Zeitung
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system to detect and manage harmful online content that violates youth protection laws. The AI system's outputs lead directly to the identification and removal of harmful content, including hate speech and extremist material, which constitutes harm to communities and potentially violates rights. Therefore, the AI system's use has directly led to harm prevention and enforcement actions, qualifying this as an AI Incident under the framework.