EU Accuses Meta of Failing to Prevent Underage Access to Facebook and Instagram

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

The European Commission found that Meta's AI-driven age verification systems on Facebook and Instagram are ineffective, allowing an estimated 10–12% of children under 13 to access the platforms. This violates the EU Digital Services Act and exposes minors to potential harm, highlighting failures in Meta's AI-based protections for children.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly involves AI systems in the form of age verification and content moderation mechanisms on Meta's platforms. These systems have failed to reliably prevent underage users from accessing the services, leading to exposure to potentially harmful content. This constitutes a violation of legal obligations under the Digital Services Act and results in harm to minors' health and well-being, fulfilling the criteria for an AI Incident. The harm is indirect but real, as the AI system's malfunction or inadequacy is a contributing factor to the exposure of minors to risks. The event is not merely a potential hazard or complementary information but a current issue with regulatory consequences and recognized harm.[AI generated]
AI principles
Accountability; Safety

Industries
Media, social platforms, and marketing

Affected stakeholders
Children

Harm types
Psychological; Human or fundamental rights

Severity
AI incident

Business function
Compliance and justice

AI system task
Recognition/object detection


Articles about this incident or hazard

Der Tag: EU: Facebook and Instagram do not sufficiently protect children

2026-04-29
N-tv
Why's our monitor labelling this an incident or hazard?
The platforms use AI systems for age verification and content moderation, which are failing to enforce the minimum age limit effectively, allowing children under 13 to access the services. While no specific harm is reported as having occurred, the EU Commission's investigation and threat of fines indicate a credible risk that the AI systems' insufficient enforcement could lead to harm to children (health or well-being). Hence, this is an AI Hazard because the AI system's malfunction or inadequate use could plausibly lead to harm, but no direct or indirect harm has yet been confirmed.

Europe accuses Meta of allowing children under 13 to access Instagram and Facebook

2026-04-29
La Repubblica.it
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems in the form of age verification and content moderation mechanisms on Meta's platforms. These systems have failed to reliably prevent underage users from accessing the services, leading to exposure to potentially harmful content. This constitutes a violation of legal obligations under the Digital Services Act and results in harm to minors' health and well-being, fulfilling the criteria for an AI Incident. The harm is indirect but real, as the AI system's malfunction or inadequacy is a contributing factor to the exposure of minors to risks. The event is not merely a potential hazard or complementary information but a current issue with regulatory consequences and recognized harm.

Brussels investigates Meta for failing to prevent children under 13 from using Facebook and Instagram

2026-04-29
EL PAÍS
Why's our monitor labelling this an incident or hazard?
The event involves AI systems used by Meta for age verification and content moderation on Facebook and Instagram. The Commission's investigation highlights that these AI-driven mechanisms are ineffective at preventing underage access, which could plausibly lead to harms such as exposure to harmful content and addiction among minors. Since the article does not report a specific realized harm but focuses on the potential risks and regulatory non-compliance, this qualifies as an AI Hazard rather than an AI Incident. The investigation and regulatory scrutiny indicate credible concerns about future harm stemming from the AI systems' failure to enforce age restrictions effectively.

The EU finds that Meta breaches digital rules on children on Instagram and Facebook

2026-04-29
Yahoo actualités
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems used by Meta to detect and remove underage users, which are failing to prevent children under 13 from accessing the platforms. This failure constitutes a violation of legal protections (DSA) and exposes children to harm, fulfilling the criteria for an AI Incident due to violation of rights and harm to vulnerable groups. Although the harm is not detailed as specific incidents, the regulatory findings and the nature of the failure imply ongoing harm or significant risk. Therefore, this is classified as an AI Incident rather than a hazard or complementary information.

EU says Meta is failing to keep underage users off Facebook and Instagram

2026-04-29
The Independent
Why's our monitor labelling this an incident or hazard?
The article involves AI systems implicitly, as Meta's platforms use AI-based measures to identify users' ages and moderate content. The failure to prevent underage access and exposure to inappropriate content indicates shortcomings in these AI systems' use or effectiveness. However, no direct or indirect harm has been reported as having occurred; the piece centers on regulatory findings and potential future penalties. Therefore, this event is best classified as Complementary Information, as it provides context on governance and regulatory responses to AI-related platform issues without describing a specific AI Incident or AI Hazard.

Europe corners Meta: it found that 12% of children under 13 use Facebook and Instagram, and threatens multimillion-euro fines

2026-04-29
Clarin
Why's our monitor labelling this an incident or hazard?
Meta's platforms use AI systems for user identification, content moderation, and risk assessment. The failure to diligently identify and mitigate risks for underage users indicates a malfunction or inadequate use of these AI systems, leading to a violation of legal obligations and potential harm to children's rights and safety. Since the event reports an ongoing violation with realized harm (underage access and associated risks), it qualifies as an AI Incident under violations of human rights and legal obligations.

The European Commission accuses Meta of failing to protect children under 13 on Facebook and Instagram: "They are doing very little"

2026-04-29
20 minutos
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses AI-related systems (age verification and account detection tools) used by Meta to protect minors, which are failing to prevent children under 13 from accessing Facebook and Instagram. This failure leads to harm to children, a vulnerable group, by exposing them to potential risks on social media, thus fulfilling the criteria for harm to persons. The AI systems' malfunction or insufficient effectiveness is directly linked to this harm. Hence, the event qualifies as an AI Incident rather than a hazard or complementary information.

The EU changes the rules: Facebook and Instagram must "do more" to protect children under 13 in their apps

2026-04-29
El Español
Why's our monitor labelling this an incident or hazard?
The event involves AI systems used by Facebook and Instagram to detect and remove underage users, which are currently insufficient, leading to violations of the Digital Services Act aimed at protecting minors. This constitutes a breach of legal obligations and a violation of rights, fulfilling the criteria for an AI Incident. The harm is realized as underage children are accessing the platforms despite restrictions, and the AI systems' failure to effectively enforce age limits is a contributing factor. Therefore, this is an AI Incident.

EU Commission against Meta: "A social media ban for under-13s? Facebook and Instagram do very little. The risk assessment for minors is inadequate"

2026-04-29
Il Fatto Quotidiano
Why's our monitor labelling this an incident or hazard?
Meta employs AI systems to detect and remove accounts of users under 13 years old and to enforce content moderation. The Commission's findings indicate that these AI systems are malfunctioning or inadequately designed, leading to millions of minors accessing the platforms and being exposed to potential harms. This constitutes a violation of legal obligations (Digital Services Act) and results in harm to minors' health and rights. The AI system's malfunction and inadequate use have directly and indirectly led to these harms, qualifying this as an AI Incident rather than a hazard or complementary information.

The EU: "Facebook and Instagram's age-verification systems are ineffective". Meta: "The checks work"

2026-04-29
ANSA.it
Why's our monitor labelling this an incident or hazard?
The event involves AI systems used by Meta for age verification and content moderation on Facebook and Instagram. The AI systems' failure to accurately verify age and prevent underage access constitutes a malfunction or ineffective use, leading to indirect harm to minors by exposing them to inappropriate content and risks. This is a violation of legal obligations under the Digital Services Act aimed at protecting minors, thus fitting the definition of an AI Incident due to violations of human rights and harm to health. The investigation and potential sanctions further confirm the seriousness of the harm and the AI system's pivotal role.

EU accuses Meta of inadequate child protection

2026-04-29
SRF News
Why's our monitor labelling this an incident or hazard?
The event involves AI systems or algorithmic age verification tools used by Meta to detect and restrict underage users. The EU's criticism is about insufficient use or effectiveness of these AI systems, which could plausibly lead to harm to children (a vulnerable group) by exposing them to inappropriate content or risks on the platforms. Since no specific harm is reported as having occurred yet, but there is a credible risk and regulatory action underway, this qualifies as an AI Hazard rather than an AI Incident. The event is not merely general AI news or a product update, but a regulatory investigation into AI system shortcomings with potential for harm.

Facebook, Instagram Charged With Breaching Rules, Must Do More to Protect Kids Below 13, EU Says

2026-04-29
U.S. News & World Report
Why's our monitor labelling this an incident or hazard?
The platforms employ AI-based systems to enforce age restrictions and content moderation. The EU's charge highlights that these AI systems are insufficient in preventing underage access, leading to a breach of the Digital Services Act designed to protect children. Since the AI systems' malfunction or inadequate use has directly contributed to this regulatory breach and potential harm to children's rights and safety, this qualifies as an AI Incident under the framework.

Facebook and Instagram must do more to block children under 13, EU warns in its charges

2026-04-30
El Economista
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems used by Meta to detect and block underage users on social media platforms. The failure of these AI systems to effectively enforce age restrictions has led to minors under 13 accessing the platforms, which is a violation of legal protections and poses harm to children. The European Commission's investigation and preliminary charges indicate that the AI system's malfunction or insufficient use has directly contributed to this harm. Therefore, this event meets the criteria for an AI Incident due to the direct link between AI system use and harm (violation of legal rights and potential harm to minors).

EU accusations against Meta: Facebook and Instagram do too little for child protection

2026-04-29
Frankfurter Allgemeine
Why's our monitor labelling this an incident or hazard?
Meta's social media platforms employ AI systems for user management, content moderation, and risk assessment. The EU Commission's accusations focus on Meta's failure to enforce age restrictions and protect minors, which involves AI systems not effectively identifying or removing underage users. This failure leads to violations of children's rights and exposure to harmful content, constituting harm under the framework. The event involves the use and malfunction (or insufficient use) of AI systems leading to realized harm and legal breaches. Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information.

Breach of EU law: Facebook and Instagram protect children too little

2026-04-29
Frankfurter Allgemeine
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses Meta's failure to adequately control access for underage users, despite having rules in place. The enforcement mechanisms likely involve AI systems for age verification, risk assessment, and content moderation. The failure of these AI systems to prevent underage access and to identify and remove underage users constitutes a breach of legal obligations under the DSA, which protects fundamental rights. This breach has already resulted in harm to children by exposing them to social media platforms prematurely, which is a violation of rights and thus an AI Incident. The EU Commission's investigation and potential sanctions further confirm the seriousness of the harm and the AI systems' involvement.

Brussels accuses Facebook and Instagram of breaking European law by allowing access to children under 13

2026-04-29
ElNacional.cat
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses the failure of Meta's age verification mechanisms, which likely involve AI or algorithmic systems, to effectively prevent under-13 users from accessing social media. This failure exposes minors to harmful content, constituting harm to health and violation of legal rights. The Commission's demand for improved AI-based detection and removal measures and the threat of fines indicate that the AI system's malfunction has directly led to harm. Hence, this is an AI Incident rather than a hazard or complementary information.

EU says Meta failed to stop under 13s accessing Facebook and Instagram

2026-04-29
Euronews English
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses Meta's AI-based age enforcement measures failing to prevent under-13 children from accessing Facebook and Instagram, which is a breach of legal obligations under the Digital Services Act. The AI system's malfunction (ineffective age verification) has directly led to children being exposed to potentially harmful content, violating their rights and safety. This fits the definition of an AI Incident because the AI system's use and malfunction have directly led to harm (violation of rights and potential harm to children's health and well-being).

EU says Meta is failing to keep underage users off Facebook, Instagram

2026-04-29
ETTelecom.com
Why's our monitor labelling this an incident or hazard?
Meta's platforms use AI systems for age verification and content moderation to enforce age restrictions and protect users from harmful content. The European Commission's findings indicate that these AI systems are not effectively preventing underage access or exposure to inappropriate content, leading to harm to children and violations of legal protections. Since the AI systems' malfunction or inadequate operation has directly led to these harms and legal breaches, this event meets the criteria for an AI Incident.

Under-13s on Instagram and Facebook: Brussels accuses Meta of missing age checks

2026-04-29
stern.de
Why's our monitor labelling this an incident or hazard?
The article discusses a planned age verification app as a response to concerns about under-13 users on social media platforms. While the app likely involves AI or automated processing to read identity documents, the event itself is about a governance and technical solution to an existing problem, not about an AI system causing harm or posing a plausible risk of harm. Therefore, it fits the category of Complementary Information, as it provides context on societal and governance responses to AI-related issues without describing a new AI Incident or Hazard.

Instagram and Facebook under EU accusation: failure to protect children under 13

2026-04-29
IL TEMPO
Why's our monitor labelling this an incident or hazard?
The article involves AI systems insofar as Instagram and Facebook use AI-driven algorithms for content moderation, age verification, and risk assessment. The failure to effectively prevent underage access and exposure to harmful content is linked to inadequate AI system use or design. However, the article does not describe a specific realized harm or incident caused by AI malfunction or misuse but rather an ongoing regulatory investigation and preliminary findings about potential risks and violations. Therefore, this event is best classified as Complementary Information, as it provides important context and updates on governance and regulatory responses to AI-related risks on social media platforms, without describing a concrete AI Incident or an immediate AI Hazard.

EU threatens to fine Meta for failing to prevent children from using social networks

2026-04-29
Notícias ao Minuto
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems used by Meta to identify and restrict underage users on Facebook and Instagram. The failure of these AI systems to effectively verify age and remove underage users has directly led to violations of legal protections for minors, which is a breach of obligations under applicable law protecting fundamental rights. The harm is realized as children under 13 are accessing the platforms despite age restrictions, exposing them to privacy and safety risks. The European Commission's investigation and threat of fines confirm the seriousness of these harms. Hence, this is an AI Incident, not merely a hazard or complementary information, because the harm is occurring due to AI system inadequacies in use and enforcement.

Brussels concludes that Instagram and Facebook do not adequately protect children under 13

2026-04-29
El Diario Vasco
Why's our monitor labelling this an incident or hazard?
The event involves AI systems insofar as Instagram and Facebook use AI-based tools for age verification, content moderation, and risk assessment. The failure of these AI systems to effectively prevent underage access and to promptly remove such accounts has directly led to potential harm to minors (harm to health and well-being). The Commission's findings highlight that the AI systems' malfunction or inadequacy in this context contributes to the risk of harm. Therefore, this qualifies as an AI Incident because the AI systems' use and malfunction have directly led to harm or risk of harm to a vulnerable group (minors under 13).

Brussels: Meta does not provide adequate protection of children on Instagram and Facebook

2026-04-29
24ur.com
Why's our monitor labelling this an incident or hazard?
While Meta's platforms likely use AI systems for content moderation and age detection, the article does not provide evidence that AI system development, use, or malfunction has directly or indirectly caused harm or a plausible future harm. The European Commission's concerns relate to compliance and protection measures, not a specific AI failure or misuse event. The article mainly reports on regulatory scrutiny and policy developments, which fits the definition of Complementary Information as it provides context and updates on governance responses to AI-related issues without describing a new AI Incident or AI Hazard.

Brussels accuses Meta of violating EU law for failing to prevent children under 13 from using Instagram and Facebook

2026-04-29
Diario de Sevilla
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems in the form of automated detection and verification tools used by Meta to enforce age restrictions on Instagram and Facebook. The failure of these AI systems to prevent underage access leads to violations of legal rights and exposes minors to potential harm, fulfilling the criteria for an AI Incident. The harm is realized as children under 13 are accessing the platforms despite restrictions, and the AI systems' inadequacies contribute directly to this harm. The event is not merely a potential risk but an ongoing issue with regulatory consequences, distinguishing it from an AI Hazard or Complementary Information.

EU against Meta: "Instagram and Facebook do not protect under-13s". The reply: "We do not agree"

2026-04-29
QuotidianoNet
Why's our monitor labelling this an incident or hazard?
The article involves AI systems implicitly, as Instagram and Facebook use AI for user age verification and content moderation. However, the event centers on regulatory scrutiny and alleged non-compliance with legal obligations rather than an AI system causing or enabling harm. No specific AI malfunction or misuse leading to injury, rights violation, or community harm is reported. The potential harm is regulatory non-compliance and exposure of minors to risks, but this is under investigation and not confirmed as an AI Incident. The article primarily reports on the regulatory process and Meta's response, fitting the definition of Complementary Information, as it provides context and updates on AI-related governance and compliance without describing a new AI Incident or AI Hazard.

Brussels accuses Meta of violating EU law for failing to prevent minors...

2026-04-29
europa press
Why's our monitor labelling this an incident or hazard?
Meta's social media platforms, Instagram and Facebook, use AI systems for content moderation, user age verification, and access control. The failure to effectively prevent underage access constitutes a breach of legal obligations under the DSA, which is designed to protect fundamental rights, including child safety online. The AI systems' insufficient enforcement of age restrictions has directly led to a violation of these rights. Therefore, this event qualifies as an AI Incident due to the realized harm (violation of legal protections for minors) caused by the AI system's use and compliance failure.

EU says Meta is failing to keep underage users off Facebook and Instagram

2026-04-29
The Boston Globe
Why's our monitor labelling this an incident or hazard?
The article focuses on regulatory investigation and Meta's response regarding platform safety and age verification measures, which likely involve AI systems. However, no specific harm event caused by AI is described, nor is there a clear imminent risk of harm detailed. The main narrative is about the regulatory process and Meta's planned measures, making this a societal and governance response to AI-related platform safety issues. Therefore, this fits the definition of Complementary Information rather than an AI Incident or AI Hazard.

Meta accused of failing to prevent minors' access to Facebook and Instagram

2026-04-29
Revista Proceso
Why's our monitor labelling this an incident or hazard?
Meta's platforms use AI systems to detect user age and moderate content. The EU's accusation highlights that these AI systems are not effectively preventing underage users from accessing the platforms or being exposed to harmful content. This failure leads to violations of children's rights and protection laws, constituting harm to a vulnerable group. Since the AI systems' malfunction or inadequate use directly contributes to this harm, the event meets the criteria for an AI Incident rather than a hazard or complementary information.

EU against Meta: "Instagram and Facebook do not protect minors"

2026-04-29
Milano Finanza
Why's our monitor labelling this an incident or hazard?
The event involves AI systems used by Meta to verify user age and manage content risks on social media platforms. The failure of these AI systems to effectively prevent underage access and protect minors constitutes a violation of legal obligations designed to protect fundamental rights, specifically children's rights to privacy and safety. This failure has directly led to harm or risk of harm to minors, qualifying as an AI Incident under the framework. The article reports on regulatory findings and potential sanctions, indicating realized or ongoing harm rather than mere potential risk.

EU accuses Meta of deficient child protection

2026-04-29
onvista.de
Why's our monitor labelling this an incident or hazard?
The event involves AI systems used by Meta for age verification and content risk assessment, which are alleged to be inadequate, leading to underage children accessing platforms. This implies indirect harm to children's rights and safety, fitting the definition of an AI Incident due to violation of rights and potential harm. Although the article focuses on the EU's accusation and investigation rather than a specific incident of harm, the ongoing failure to protect children and the presence of underage users on the platforms indicate realized harm. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information.

Brussels threatens Meta with fines of up to 6% of its turnover for failing to protect minors

2026-04-29
Diario de Noticias
Why's our monitor labelling this an incident or hazard?
The event involves AI systems insofar as Meta's platforms use automated or algorithmic age verification and content moderation tools to detect and manage user age and content exposure. The failure of these AI-based verification systems to effectively prevent underage access and exposure to harmful content constitutes a direct or indirect harm to the health and well-being of minors (harm category a). The regulatory threat of fines is a response to these harms. Therefore, this situation qualifies as an AI Incident because the AI system's malfunction or inadequacy in age verification and content control has led to realized harm to minors. The article focuses on the harm caused by the AI system's failure rather than just regulatory or policy updates, so it is not merely complementary information.

Brussels accuses Meta of letting under-13s access Instagram and Facebook

2026-04-29
Mediapart
Why's our monitor labelling this an incident or hazard?
The event involves AI systems used by Meta for age verification and content moderation that have failed to prevent underage users from accessing the platforms, leading to exposure to harmful content. This constitutes a violation of legal obligations under the EU DSA and results in harm to minors' health and rights. The AI system's malfunction or ineffective use is directly linked to the harm. Therefore, this is an AI Incident rather than a hazard or complementary information.

Europe warns Meta: Facebook and Instagram do not sufficiently protect minors

2026-04-29
Multiplayer.it
Why's our monitor labelling this an incident or hazard?
The article involves AI systems insofar as Meta uses automated or algorithmic methods to detect and remove underage accounts, which is a typical AI application in content moderation and user management. The regulators' findings indicate these AI systems are currently inadequate, which could lead to harm to minors (a vulnerable group) by allowing their access to inappropriate content or exposure to risks on the platforms. However, the article only reports preliminary findings and regulatory demands, not a specific incident of harm that has already occurred. Therefore, this is best classified as Complementary Information, as it provides important context on AI system performance and regulatory oversight but does not describe a concrete AI Incident or an AI Hazard event causing or plausibly leading to harm yet.

European Commission raises complaints against Meta over children's access to social networks

2026-04-29
Європейська правда
Why's our monitor labelling this an incident or hazard?
The event involves AI systems used by Meta for age verification and content moderation, which are failing to prevent underage access, potentially causing harm to children. However, the article reports accusations and preliminary findings without confirmed actual harm or incident occurrence. Therefore, it represents a plausible risk of harm due to AI system shortcomings but not a confirmed AI Incident. It is more accurately classified as an AI Hazard, as the AI systems' malfunction or inadequacy could plausibly lead to harm to children's health and rights if unaddressed.

Facebook and Instagram Must Do More To Block Under-13s, EU Warns in Meta Charges

2026-04-29
Republic World
Why's our monitor labelling this an incident or hazard?
The article involves AI systems used by Meta to detect and remove underage users, which is a use of AI in content moderation and user management. The EU's charges highlight that these AI systems are currently inadequate, posing risks to children's safety and rights. However, the article does not report a specific AI Incident where harm has already occurred due to AI malfunction or misuse, nor does it describe a plausible future harm event that is imminent or demonstrated. Instead, it reports on regulatory scrutiny and preliminary charges, which are governance responses to potential AI-related harms. This fits the definition of Complementary Information, as it updates on societal and governance responses to AI-related issues without describing a new incident or hazard.

On Instagram and Facebook, under-13s are not sufficiently protected: Brussels calls out Meta, which risks a hefty fine

2026-04-29
01net
Why's our monitor labelling this an incident or hazard?
Meta's platforms use AI systems for user management, content recommendation, and age verification processes. The Commission's findings indicate that these AI systems and related mechanisms have failed to prevent underage users from accessing the services, leading to violations of the Digital Services Act and exposing children to harm. The harm is realized as a breach of legal protections for minors, a form of violation of rights under applicable law. The event is not merely a potential risk but an ongoing issue with documented failures and regulatory action, thus constituting an AI Incident rather than a hazard or complementary information.

Facebook and Instagram do not sufficiently protect children

2026-04-29
wallstreet:online
Why's our monitor labelling this an incident or hazard?
The article involves AI systems implicitly, as social media platforms like Facebook and Instagram use AI for content moderation, age verification, and risk assessment. The concerns raised relate to the platforms' failure to effectively protect children, which involves AI system use and policy enforcement. However, no specific AI malfunction or misuse causing realized harm is described. The article mainly reports on regulatory findings, company responses, and potential future actions, which fits the definition of Complementary Information. It does not report a concrete AI Incident (harm realized) or AI Hazard (plausible future harm) but rather governance and societal responses to existing concerns about AI system impacts on child safety.

EU: Facebook and Instagram do not adequately protect children

2026-04-29
GMX News
Why's our monitor labelling this an incident or hazard?
The event involves AI systems implicitly, as Facebook and Instagram use AI-based algorithms for user account management, content moderation, and age verification processes. The failure to effectively enforce age restrictions and protect children from harmful content constitutes indirect harm to children's health and rights, fulfilling the criteria for an AI Incident. The harm is realized as children under 13 are accessing the platforms and potentially exposed to risks. The EU's investigation and potential sanctions relate directly to the AI systems' inadequate performance in safeguarding children, making this an AI Incident rather than a hazard or complementary information.

EU tightens the net around Meta, saying the platform fails to block under-13s on Instagram and Facebook

2026-04-29
Revista Fórum
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses Meta's failure to enforce age restrictions on its platforms, which rely on AI or algorithmic systems to detect and remove underage users. The harm involves exposure of children under 13 to inappropriate content and risks such as bullying and harassment, which are direct harms to health and well-being. The investigation and potential fines are based on these failures. Since the AI systems' malfunction or inadequate use has directly or indirectly led to harm, this qualifies as an AI Incident under the OECD framework.

EU Commission: Instagram and Facebook do not protect children under 13

2026-04-29
Avvenire
Why's our monitor labelling this an incident or hazard?
The event involves AI systems insofar as Instagram and Facebook use AI-based algorithms and automated systems to enforce age restrictions and moderate content. The failure to effectively identify and remove underage users indicates a malfunction or inadequacy in these AI systems' use. This has led to a violation of legal obligations intended to protect children's rights and safety, which constitutes a breach of applicable law and a violation of fundamental rights. Therefore, this qualifies as an AI Incident because the AI system's malfunction or inadequate use has directly led to harm in terms of legal violations and potential risks to minors' safety.

EU says Meta does not prevent minors from accessing Facebook and Instagram

2026-04-29
Houston Chronicle
Why's our monitor labelling this an incident or hazard?
Meta's platforms use AI systems for user age verification and content moderation. The EU's accusation that Meta does not effectively prevent underage access or exposure to inappropriate content indicates a failure or insufficient use of these AI systems, leading to harm in the form of violation of children's rights and legal obligations under the Digital Services Act. The harm is realized, not just potential, as minors are accessing the platforms and being exposed to risks. Hence, this is an AI Incident involving AI system use leading to violation of rights and harm to a vulnerable group.

Facebook, Instagram charged with breaching rules, must do more to protect kids below 13: EU

2026-04-29
@businessline
Why's our monitor labelling this an incident or hazard?
The platforms employ AI-based systems to detect and remove underage users and harmful content. The EU investigation found these AI-driven measures insufficient, resulting in children under 13 accessing the services, which constitutes harm to minors and a breach of legal protections. Since the AI systems' malfunction or inadequate use directly contributed to this harm, this qualifies as an AI Incident under the framework, specifically a violation of rights and harm to a vulnerable group.

Meta Charged by EU Over Failure to Stop Children From Using Instagram and Facebook

2026-04-29
PetaPixel
Why's our monitor labelling this an incident or hazard?
Meta's platforms use AI systems for age verification, content moderation, and recommender systems. The EU's charge highlights that these AI systems failed to effectively prevent children under 13 from accessing the platforms, violating legal requirements designed to protect minors. The harm is realized as children are accessing platforms they are legally restricted from using, which is a breach of rights and safety obligations. The AI systems' malfunction or inadequate deployment is a direct contributing factor to this harm. Hence, this is an AI Incident rather than a hazard or complementary information.

EU crackdown on Meta: Facebook, Instagram charged over under-13 safety gaps

2026-04-29
The News International
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems used by Facebook and Instagram to manage user access and content moderation. The EU's charges highlight that these AI systems have failed to prevent under-13 children from accessing the platforms and being exposed to harmful content, which constitutes harm to a vulnerable group and a violation of legal protections. The harm is realized, not just potential, as underage children are already accessing the platforms and facing risks. Hence, this meets the criteria for an AI Incident because the AI systems' use and malfunction (or insufficient effectiveness) have directly or indirectly led to harm and legal violations.

Brussels warns Meta over failure to keep kids off Instagram, Facebook

2026-04-29
UPI
Why's our monitor labelling this an incident or hazard?
Meta's platforms use AI systems for age verification, content moderation, and user identification. The investigation highlights that these AI systems are failing to diligently identify and block underage users, allowing children under 13 to access the platforms despite terms of service prohibiting this. This failure leads to harm to children, a vulnerable group, by exposing them to age-inappropriate content and risks, which is a violation of rights and protection laws. The AI systems' malfunction or insufficient implementation is directly linked to this harm, meeting the criteria for an AI Incident.

Meta rejected European Commission accusations over ineffective age controls

2026-04-29
El Nacional
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses the failure of Meta's age verification mechanisms, which are presumably AI-based or algorithmic systems, to prevent underage users from accessing social media platforms. This failure has led to exposure of minors to serious harms like cyberbullying and predatory behavior, constituting harm to groups of people and violations of rights. The European Commission's investigation and threat of sanctions further confirm the recognized harm. Thus, the event meets the criteria for an AI Incident due to the AI system's malfunction or ineffective use causing direct harm.

Social media: EU accuses Meta of failings in protecting under-13s

2026-04-29
Boursier.com
Why's our monitor labelling this an incident or hazard?
The article involves AI systems implicitly, as Facebook and Instagram rely on AI for age verification and content moderation. The European Commission's investigation and accusations relate to the use and effectiveness of these AI systems in protecting minors. However, the article does not describe a specific AI Incident where harm has directly or indirectly occurred due to AI malfunction or misuse. Nor does it describe a plausible future harm scenario without current harm. Instead, it reports on regulatory actions and demands for improved AI system performance and compliance, which fits the definition of Complementary Information as it provides governance response and context to ongoing AI-related issues.

Facebook and Instagram ordered by the EU to protect children under 13

2026-04-29
Boursier.com
Why's our monitor labelling this an incident or hazard?
The involvement of AI systems can be reasonably inferred as Facebook and Instagram use AI-based age verification and content moderation tools. The event concerns the use of these AI systems and their failure to adequately protect children under 13, which could lead to harm to children (health, safety, or well-being). Since no specific harm has been reported yet but there is a credible risk of harm due to insufficient AI-based protections, this qualifies as an AI Hazard. The article mainly reports on regulatory findings and potential sanctions, not on an actual incident of harm.

Meta: EU says Facebook and Instagram do not adequately protect children

2026-04-29
Rhein-Zeitung
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems in the form of automated age verification and content moderation on Facebook and Instagram. The failure of these AI systems to effectively enforce the minimum age and protect children from harmful content has led to violations of the Digital Services Act and exposes children to risks, which is a form of harm to a vulnerable group. This meets the criteria for an AI Incident because the AI system's use has directly or indirectly led to harm and legal violations. The event is not merely a potential risk or a governance update but a report of insufficient protection causing harm, thus not a hazard or complementary information.

One in ten children under 13 accesses Instagram or Facebook, according to the EU

2026-04-29
Tribune de Genève
Why's our monitor labelling this an incident or hazard?
Meta's platforms rely on AI systems to verify user age and moderate content. The Commission's findings indicate these AI systems are ineffective in preventing underage access, leading to exposure of minors to harmful content. This failure to comply with legal frameworks and protect vulnerable users constitutes a violation of rights and harm to a group of people (minors). Therefore, this event qualifies as an AI Incident because the AI systems' malfunction or inadequate use has indirectly led to harm as defined by the framework.

'Not Doing Enough to Remove Underage Users': EU Says Meta Fails to Prevent Children on Facebook, Instagram

2026-04-29
LatestLY
Why's our monitor labelling this an incident or hazard?
Meta's platforms use AI systems to detect user age and moderate content to protect minors. The EU's accusation that Meta fails to keep underage users off the platforms and inadequately assesses risks to children indicates that the AI systems' malfunction or insufficient effectiveness has directly led to harm: exposure of children to inappropriate content and violation of legal protections for a vulnerable group. The event is not merely a potential risk or a governance update but an ongoing failure causing harm, so it qualifies as an AI Incident.

Meta failed to prevent minors from using Facebook and Instagram

2026-04-29
Punto Informatico
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems in the form of algorithms and verification tools used by Meta to enforce age restrictions. The failure to effectively prevent underage access is a regulatory violation but does not describe a specific AI Incident causing direct or indirect harm such as injury, rights violations, or community harm. Nor does it describe a plausible future harm scenario beyond regulatory non-compliance. Instead, it reports on the investigation results and the regulatory process, which fits the definition of Complementary Information as it provides updates on governance and societal responses to AI-related issues.

The European Union accuses Meta of failing to prevent minors from accessing its social networks

2026-04-29
Diario Popular
Why's our monitor labelling this an incident or hazard?
The event involves AI systems used by Meta for age verification and content moderation on social media platforms. The European Commission's accusation that these systems failed to prevent underage access and exposure to harmful content indicates that the AI systems' use has indirectly led to harm to children (health and well-being). This fits the definition of an AI Incident, as the AI system's malfunction or inadequacy in fulfilling its protective role has contributed to violations of protections for minors, a form of harm to groups of people. The potential for fines and regulatory action further supports the seriousness of the incident.

Facebook and Instagram should crack down harder on children's accounts

2026-04-29
netzpolitik.org
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems in the form of automated age verification and content moderation tools used by Meta on Facebook and Instagram. The failure to effectively prevent underage access and to properly identify and remove such accounts constitutes a breach of legal obligations protecting minors, which is a violation of rights under applicable law. The harm is indirect but real, as children under 13 are accessing platforms meant for older users, exposing them to risks. The European Commission's investigation and potential fines underscore the seriousness of the issue. Hence, this is an AI Incident rather than a hazard or complementary information, as harm and legal violations are already occurring due to AI system shortcomings.

EU Commission says Meta's age restriction systems are inadequate

2026-04-29
Social Media Today | A business community for the web's best thinkers on Social Media
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses Meta's age checking and detection systems, which are AI systems designed to enforce age restrictions. The European Commission's findings indicate these systems are inadequate, allowing minors under 13 to access the platforms by entering false birth dates and due to ineffective reporting mechanisms. This failure directly leads to a breach of legal obligations intended to protect minors, a violation of rights under applicable law. Therefore, the event meets the criteria for an AI Incident because the AI system's malfunction has directly led to harm in the form of legal rights violations and potential exposure of minors to harm.

European Commission formally charges Meta

2026-04-29
The Next Web
Why's our monitor labelling this an incident or hazard?
The event involves an AI system component (AI-based age estimation) used by Meta to enforce age restrictions, which has failed to prevent underage access to its social media platforms. This failure has led to violations of children's rights under the DSA, a legal framework protecting fundamental rights. The European Commission's formal charge indicates that harm (rights violations and exposure of minors to potentially harmful content) has occurred because of the AI system's inadequacy and platform policies. The AI system's role in age verification, together with the regulatory action for non-compliance with child protection obligations, meets the criteria for an AI Incident.

EU finds Meta in breach over child safety failures

2026-04-29
Helsinki Times
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems used by Meta for age verification and content moderation, which have failed to prevent children under 13 from accessing the platforms, violating legal requirements and children's rights. The harm is realized as underage users are accessing the platforms, exposing them to potential risks. The investigation under the Digital Services Act focuses on these AI-related failures. Since the harm is occurring and linked directly to AI system shortcomings, this is classified as an AI Incident rather than a hazard or complementary information.

Facebook, Instagram face EU child safety probe

2026-04-29
Daily Times
Why's our monitor labelling this an incident or hazard?
The platforms employ AI-based detection and removal systems to enforce age restrictions, which are central to the investigation. The regulators' concern that these systems are insufficient implies a malfunction or inadequacy in the AI systems' use, which could plausibly lead to harm to children accessing the platforms. Since the article does not report actual harm but focuses on the potential for harm and regulatory action, this qualifies as an AI Hazard rather than an AI Incident. The event is not merely complementary information because it centers on the potential for harm and regulatory charges related to AI system performance in child safety enforcement.

EU: Meta Breaches Digital Law Over Underage Instagram Use

2026-04-29
Mirage News
Why's our monitor labelling this an incident or hazard?
The article involves AI systems implicitly through Instagram and Facebook's use of AI for age verification and risk assessment. The European Commission's findings highlight failures in these AI-driven processes to prevent underage access and protect minors, which relates to violations of rights and potential harm. However, the article does not report a specific realized harm incident caused by AI malfunction or misuse but rather a regulatory investigation and preliminary breach findings. This fits the definition of Complementary Information, as it details governance and regulatory responses to AI-related risks and ongoing investigations rather than a direct AI Incident or AI Hazard.

Brussels charges Meta over under-13s on Instagram and Facebook

2026-04-29
El Output
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (Meta's platforms use AI algorithms for content recommendation and user management) whose use has indirectly led to harm to minors by failing to prevent underage access and exposure to harmful content, constituting violations of rights and harm to health. The European Commission's investigation and potential sanctions confirm the seriousness of these harms. Therefore, this qualifies as an AI Incident because the AI system's use has directly or indirectly led to significant harm as defined in the framework.

Meta Accused of Failing to Keep Children Off Instagram and Facebook in Europe

2026-04-29
GV Wire
Why's our monitor labelling this an incident or hazard?
The article involves AI systems implicitly through Meta's use of technologies for age verification and content moderation, which are AI-driven. The failure to effectively prevent underage access is a breach of legal obligations protecting children's rights, which is a form of harm. However, the article focuses on the regulatory investigation and preliminary ruling rather than a confirmed incident of harm caused directly by AI malfunction or misuse. There is no direct report of injury or harm caused by AI, only regulatory findings and potential future penalties. Thus, it fits the definition of Complementary Information, as it details governance responses and regulatory scrutiny related to AI system shortcomings, rather than a direct AI Incident or Hazard.

Brussels: Meta does not provide adequate protection for children on Instagram and Facebook

2026-04-29
Lokalec.si
Why's our monitor labelling this an incident or hazard?
Meta's platforms use AI systems for user identification, content moderation, and enforcement of age restrictions. The European Commission's findings indicate that these AI systems are not effectively preventing children under 13 from accessing the platforms, leading to exposure to inappropriate content and violation of children's rights. The harm is realized as children are currently accessing these services despite age restrictions, and the AI systems' failure to enforce these restrictions is a contributing factor. This meets the criteria for an AI Incident because the AI system's use and malfunction have directly or indirectly led to harm (violation of rights and exposure to harmful content).

EU says Meta hasn't done enough to prevent minors under 13 from using Instagram and Facebook

2026-04-29
Mashable SEA
Why's our monitor labelling this an incident or hazard?
Meta's platforms use AI or algorithmic systems to enforce age restrictions and content filtering. The failure of these systems to effectively prevent minors under 13 from accessing the services has led to a breach of the Digital Services Act, which is a legal obligation protecting minors' rights and safety. The harm is realized as minors are able to access platforms they are legally restricted from using, exposing them to potential risks. The European Commission's findings highlight that the AI systems' inadequacies in detection and mitigation are directly linked to this harm. Hence, this event meets the criteria for an AI Incident due to the direct or indirect harm caused by the AI system's malfunction or insufficient use in protecting minors.

EU finds Meta not doing enough to keep underage users at bay

2026-04-29
Silicon Republic
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems in the form of algorithms used by Instagram and Facebook to identify users and manage content, which are failing to prevent underage access and mitigate risks such as addictive behavior. The harm is indirect but real, affecting children's health and well-being, which fits the definition of harm to a group of people. The EU's findings indicate that Meta's AI-driven risk assessments and enforcement measures are inadequate, leading to violations of the Digital Services Act and potential harm. Thus, this is an AI Incident rather than a hazard or complementary information, as harm is occurring due to the AI systems' use and malfunction in risk mitigation.

EU: Meta lets under-13s onto Facebook and Instagram

2026-04-29
Euronews Deutsch
Why's our monitor labelling this an incident or hazard?
The event involves AI systems used by Meta for age verification and user identification on social media platforms. The failure of these AI systems to effectively prevent underage users from accessing Facebook and Instagram has led to children under thirteen using these platforms, which is against Meta's policies and EU law (Digital Services Act). This situation constitutes a violation of legal obligations and poses harm to children's health and rights, fulfilling the criteria for an AI Incident. The ongoing investigation and potential sanctions further confirm the seriousness of the harm and the AI system's role in it.

EU accuses Meta: children under thirteen on Facebook and Instagram

2026-04-29
Euronews Deutsch
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses Meta's AI-based age verification systems failing to prevent underage children from accessing Facebook and Instagram, which is a violation of the Digital Services Act. The harm involves violations of children's rights and potential harm to their well-being by exposure to inappropriate content. The AI systems' malfunction or inadequacy in enforcing age restrictions is a direct factor in this harm. The EU's regulatory response and potential sanctions further confirm the seriousness of the issue. Hence, this is an AI Incident involving violations of legal protections and harm to vulnerable groups caused by AI system failures.

EU accuses Meta of failing to stop minors from using Facebook and Instagram

2026-04-29
Euronews Español
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems used by Meta to detect and remove underage users, which are failing to prevent children under 13 from accessing the platforms. This failure leads to harm to minors, a vulnerable group, by exposing them to potentially harmful online environments, which is a violation of regulatory protections and children's rights. The harm is realized, not just potential, as the Commission estimates 10-12% of under-13 children use these platforms despite age restrictions. Therefore, this event meets the criteria for an AI Incident due to the AI system's malfunction and its direct role in causing harm.

EU: Meta does not prevent under-13s from accessing Facebook and Instagram

2026-04-29
Euronews Español
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems used by Meta to detect and prevent underage users, which are failing to effectively enforce age restrictions, leading to minors accessing the platforms. This implicates AI system use and potential indirect harm to children's safety and rights. However, the article does not report a specific AI Incident causing realized harm but rather a regulatory investigation and preliminary findings about systemic failures and risks. It also discusses Meta's planned responses and broader regulatory context. Thus, it fits the definition of Complementary Information, as it updates on governance and societal responses to AI-related risks without describing a new AI Incident or AI Hazard.

EU accuses Meta of allowing under-13s on Facebook, Instagram

2026-04-29
euronews
Why's our monitor labelling this an incident or hazard?
The article involves AI systems used by Meta for age verification and content moderation, which are central to the regulatory concerns. However, it does not report a specific AI incident causing direct or indirect harm, nor does it describe a plausible future harm event caused by AI malfunction or misuse. Instead, it details the European Commission's preliminary findings, Meta's contestation, and the regulatory process underway, which fits the definition of Complementary Information. The focus is on governance and societal response to AI-related challenges rather than a new incident or hazard.

EU: Meta lets under-13s stay on Facebook and Instagram

2026-04-29
euronews
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems used by Meta for age verification on social media platforms. The Commission's findings indicate these AI systems fail to effectively prevent underage users, leading to children under 13 accessing platforms meant for older users. This failure constitutes a malfunction or inadequate use of AI, resulting in harm to vulnerable children and violations of legal obligations under the DSA. The harm is realized (children are using the platforms despite age restrictions), and the AI system's role is pivotal in this harm. Hence, this is an AI Incident rather than a hazard or complementary information.

EU: Meta failed to stop under-13s on Facebook and Instagram

2026-04-29
euronews
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses Meta's failure to prevent under-13 users from accessing Facebook and Instagram, despite age restrictions. The age verification process relies on AI or algorithmic systems to detect and remove underage users, which are ineffective, allowing harm to occur. This constitutes a violation of legal obligations under the Digital Services Act and exposes vulnerable children to risks, fulfilling the criteria for harm to rights and health. The AI system's malfunction or inadequate use is a contributing factor to this harm. Hence, this is an AI Incident rather than a hazard or complementary information.

EU accuses Meta of failing to keep under-13s off Facebook and Instagram

2026-04-29
euronews
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses AI systems used by Meta to detect and remove underage users, which are failing to prevent children under 13 from accessing Facebook and Instagram. This failure leads to harm to children, a vulnerable group, and breaches legal protections under the EU Digital Services Act. The harm is indirect but real, as underage access to social media can cause psychological and social harm. The event involves the use and malfunction of AI systems in age verification, meeting the criteria for an AI Incident rather than a hazard or complementary information.

Brussels: Meta does not provide adequate protection for children on Instagram and Facebook

2026-04-29
STA d.o.o.
Why's our monitor labelling this an incident or hazard?
The event describes a failure by Meta's AI-based systems to effectively prevent underage children from accessing social media platforms, resulting in potential exposure to harmful content. This constitutes a violation of rights and harm to a vulnerable group, fitting the definition of an AI Incident. The AI system's malfunction or inadequacy in enforcing age restrictions and content moderation directly leads to harm, thus qualifying as an AI Incident rather than a hazard or complementary information.

Brussels: Meta does not provide adequate protection for children on Instagram and Facebook

2026-04-29
STA d.o.o.
Why's our monitor labelling this an incident or hazard?
Meta's platforms use AI systems for content moderation and age verification. The failure to prevent underage children from accessing these platforms exposes them to harmful content, which is a violation of children's rights and can cause harm to their health and well-being. Since the AI systems' malfunction or inadequate implementation directly leads to this harm, it qualifies as an AI Incident under the framework's criteria.

EC accuses Meta of DSA breach over child protections

2026-04-29
Mobile World Live
Why's our monitor labelling this an incident or hazard?
Meta Platforms uses AI systems for age verification, content moderation, and risk assessment on Instagram and Facebook. The European Commission's findings highlight that these AI systems failed to prevent underage children from accessing the platforms and being exposed to harmful content. The harm to children (health and well-being) is a recognized form of injury under the AI Incident definition. The event involves the use and malfunction (ineffectiveness) of AI systems leading to harm, meeting the criteria for an AI Incident rather than a hazard or complementary information.

EU charges Meta with failure to protect minors on Instagram, Facebook

2026-04-29
Telecompaper
Why's our monitor labelling this an incident or hazard?
The article discusses regulatory charges against Meta for inadequate protection of minors, which involves AI systems used for risk assessment and age verification. While the AI systems' failure to protect minors is central, the article does not report a specific AI-driven harm or incident occurring, nor does it describe a plausible future harm event. Instead, it focuses on the regulatory response and potential penalties, which fits the definition of Complementary Information as it informs about governance and societal responses to AI-related issues without describing a new AI Incident or Hazard.

Minors: Meta in the EU's crosshairs: why, and what it risks

2026-04-29
La Stampa
Why's our monitor labelling this an incident or hazard?
The event involves AI systems used by Meta to identify and manage user age verification and content moderation. The failure of these AI systems to effectively prevent underage access and to promptly remove underage users constitutes a breach of legal obligations protecting minors, which is a form of harm under the AI Incident definition (violation of applicable law and protection of fundamental rights). Although the investigation is preliminary and Meta disputes the findings, the described failures have already led to minors accessing the platforms, implying realized harm. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information.

Meta: Europe accuses Facebook and Instagram of letting children through

2026-04-29
KultureGeek
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems used by Meta for age verification and content moderation on Facebook and Instagram. The Commission's accusation highlights that these AI systems' malfunction or insufficient effectiveness has directly led to children under 13 accessing harmful content, which is a harm to health and safety (a form of harm to persons). The failure to comply with legal obligations under the DSA further supports classification as an AI Incident. The harm is ongoing and documented, not merely potential, and the AI systems' role is pivotal in enabling this harm. Hence, the classification as AI Incident is appropriate.

EU finds Meta failing to stop under-13s accessing Facebook, Instagram

2026-04-29
DT News
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI or algorithmic systems used by Meta for age verification and content moderation. The failure of these systems to effectively prevent underage access has led to harm to children, including exposure to inappropriate content and addictive platform designs, which affect their mental and physical health. This constitutes indirect harm caused by the AI system's malfunction or inadequate use. The EU's formal warning and potential fines underscore the seriousness of the issue. Hence, the event meets the criteria for an AI Incident as the AI system's use has directly or indirectly led to harm.

EU: Meta guilty of failing to prevent under-13s from accessing its platforms

2026-04-29
Prima Comunicazione
Why's our monitor labelling this an incident or hazard?
The platforms use AI systems for age verification and content moderation, which are explicitly mentioned as ineffective in preventing under-13 users from accessing the services. This failure has directly exposed minors to harms such as cyberbullying and age-inappropriate experiences, affecting their health and rights. The event involves the use and malfunction of AI systems in enforcing age restrictions, leading to violations of legal obligations under the Digital Services Act. Since harm is occurring and the AI systems' inadequacy is central to the issue, this is classified as an AI Incident rather than a hazard or complementary information.

Meta under accusation for failing to protect minors on social media. It risks a fine

2026-04-30
LMF La mia finanza
Why's our monitor labelling this an incident or hazard?
The platforms use AI or algorithmic systems to verify user age and manage content and user access. The failure to effectively identify and remove underage users has led to potential harm to minors, including exposure to risks associated with social media use, which can be considered harm to health and safety of a group of people (minors). The event describes ongoing harm and regulatory action based on these failures. Therefore, this qualifies as an AI Incident because the AI systems' malfunction or inadequate use has directly or indirectly led to harm to a vulnerable group (minors) and violation of legal obligations under the Digital Services Act.

Brussels accuses Meta of violating EU law by failing to prevent children under 13 from using Instagram and Facebook

2026-04-29
Teleprensa
Why's our monitor labelling this an incident or hazard?
Meta's social media platforms use AI systems for user management and content moderation. The European Commission's accusation highlights that these AI systems have failed to enforce age restrictions effectively, allowing minors under 13 to access the services. This failure constitutes a breach of the EU Digital Services Act, which is designed to protect users' rights, including those of minors. The harm here is a violation of legal obligations and potentially the rights of children, which fits the definition of an AI Incident under violations of human rights or breach of legal protections. The event is not merely a potential risk but an ongoing issue under formal investigation, indicating realized harm rather than just plausible future harm. Hence, it is classified as an AI Incident.

Brussels threatens to fine Meta for failing to prevent children under 13 from using Facebook and Instagram

2026-04-29
Executive Digest
Why's our monitor labelling this an incident or hazard?
The event involves AI systems implicitly, as Facebook and Instagram use automated and algorithmic mechanisms (likely AI-based) for age verification, content moderation, and user management. The failure of these AI systems to effectively prevent underage access and to promptly remove underage users constitutes a malfunction or inadequate use leading to violations of legal obligations protecting minors (a breach of applicable law and fundamental rights). The harm is indirect but material, as minors are exposed to risks on these platforms. Therefore, this qualifies as an AI Incident due to the realized harm and legal violations stemming from AI system inadequacies in use and enforcement.

Brussels accuses Meta of allowing children under 13 to use its social networks

2026-04-29
Messaggero Veneto
Why's our monitor labelling this an incident or hazard?
Meta's platforms use AI systems for user verification, content moderation, and risk management. The accusation that Meta is not effectively preventing under-13 users from accessing these platforms indicates a failure in the AI system's use or design, leading to indirect harm to minors by exposing them to risks. This constitutes a violation of applicable laws protecting minors and their rights, fitting the definition of an AI Incident due to the realized harm and legal breaches linked to AI system use.

EU prepares fines for Meta over children's access to Facebook and Instagram

2026-04-29
Українські Національні Новини (УНН)
Why's our monitor labelling this an incident or hazard?
The event involves AI systems used by Meta to detect and remove underage accounts, which are central to the alleged failure. The European Commission's investigation and potential fines relate to the use and malfunction (ineffectiveness) of these AI systems in protecting children, which is a violation of legal obligations and children's rights. Although the harm is not explicitly confirmed as realized, the ongoing investigation and potential for fines indicate a serious issue linked to AI system use. Since the article focuses on the regulatory accusation and potential penalties rather than a confirmed harm event, this is best classified as Complementary Information: context on an ongoing AI-related regulatory matter rather than a confirmed AI Incident or Hazard.

Brussels accuses Meta of failing to prevent children under 13 from accessing Facebook and Instagram

2026-04-29
Diari de Tarragona
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses Meta's inadequate age verification mechanisms, which likely involve AI or algorithmic systems to detect and prevent underage access. The failure of these systems has directly led to minors under 13 accessing social media platforms, exposing them to potential harm. This constitutes a violation of legal protections for minors and a breach of obligations under applicable law, fitting the definition of an AI Incident. The involvement of AI is reasonably inferred from the context of automated age verification and content moderation systems. The harm is ongoing and realized, not merely potential, as minors are currently able to use the platforms despite restrictions.

EU accuses Meta of failures in protecting children on Facebook and Instagram

2026-04-30
CNN Brasil
Why's our monitor labelling this an incident or hazard?
The event involves AI systems used by Meta to detect and remove underage users, but these systems have failed or are inadequate, leading to violations of the Digital Services Act and insufficient protection of children under 13. This is a direct or indirect harm related to the use and malfunction of AI systems in enforcing age restrictions and content moderation, which are fundamental rights protections. Therefore, it qualifies as an AI Incident rather than a hazard or complementary information.

Brussels investigates Meta for failing to prevent children under 13 from accessing Facebook and Instagram

2026-04-29
Ara en Castellano
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI or algorithmic systems by Meta to evaluate user age and enforce age restrictions on social media platforms. The Commission's concern is that these AI-based measures are ineffective, leading to minors under 13 gaining access, which violates legal protections for children and potentially breaches the Digital Services Act. Although no direct harm such as injury or rights violation has been explicitly reported yet, the failure to prevent underage access constitutes a breach of obligations intended to protect fundamental rights (child protection laws). Therefore, this event represents an AI Incident due to the AI system's failure to comply with legal frameworks and its direct role in enabling underage access, which is a violation of rights.

EU accuses Meta of allowing children under 13 on Instagram

2026-04-30
Jornal Correio de Santa Maria
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI or algorithmic technologies by Meta to enforce age restrictions and identify underage users. The failure of these AI-based measures to effectively prevent minors from accessing the platforms has led to potential harm to children, such as exposure to inappropriate content or privacy risks. Although no specific incident of harm is detailed, the ongoing access of underage users constitutes a violation of legal obligations and poses risks to minors' rights and safety. Therefore, this situation qualifies as an AI Incident due to the direct or indirect harm caused by the ineffective AI system in place and the breach of regulatory requirements.

Meta Accused of Failing to Keep Children Off Instagram and Facebook in Europe

2026-04-29
DNYUZ
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI-related systems (age verification and content moderation algorithms) that are failing to prevent underage access, which is a violation of legal obligations and can cause harm to children's health and rights. However, the article focuses on the regulatory investigation and preliminary ruling rather than a specific incident of harm caused by the AI system. There is no direct report of injury or harm caused by the AI system's malfunction or misuse, only the identification of inadequate safeguards and potential future penalties. This fits the definition of Complementary Information, as it details governance responses and regulatory actions concerning AI systems and their societal impacts, rather than reporting a new AI Incident or AI Hazard.

EU demands stricter child protection measures from Meta

2026-04-29
IT BOLTWISE® x Artificial Intelligence
Why's our monitor labelling this an incident or hazard?
The event involves AI systems implicitly, as platforms like Facebook and Instagram use AI for content moderation, age verification, and user account management. However, the article does not describe a specific incident where AI malfunction or misuse directly caused harm, nor does it report a realized harm caused by AI. Instead, it discusses regulatory concerns and potential future improvements to AI-based age verification systems. Therefore, this is best classified as Complementary Information, as it provides context on governance responses and potential future AI system improvements related to child protection, without describing a concrete AI Incident or AI Hazard.

Europe accuses Meta of allowing children under 13 to access Instagram and Facebook

2026-04-29
Italian Tech
Why's our monitor labelling this an incident or hazard?
The event involves AI systems used by Meta for age verification and content moderation on social media platforms. The failure of these AI systems to reliably prevent underage access has directly led to harm by exposing minors to inappropriate or harmful content, violating their rights and legal protections. Therefore, this qualifies as an AI Incident due to the realized harm and legal violations stemming from the AI system's malfunction or inadequacy.

Facebook and Instagram must crack down harder on children's accounts!

2026-04-29
LesNews
Why's our monitor labelling this an incident or hazard?
The article involves AI systems used by Meta for age verification and risk assessment on Facebook and Instagram, which are central to the Commission's findings. The failure to adequately prevent underage access and the complicated reporting procedures imply indirect harm to children's safety and rights, fitting the definition of an AI Incident. However, since the article focuses on the investigation's preliminary findings and regulatory responses rather than a concrete incident of harm caused by AI malfunction or misuse, it aligns more with Complementary Information. The mention of vulnerabilities in the age verification app and potential privacy risks also suggests plausible future hazards, but these are secondary to the main narrative about regulatory scrutiny. Hence, the event is best classified as Complementary Information, as it provides important context and updates on AI system governance and child protection without reporting a new AI Incident or AI Hazard.

The European Union accuses Meta of failing to curb minors' access to Facebook and Instagram

2026-04-29
ifm noticias
Why's our monitor labelling this an incident or hazard?
The involvement of AI systems is reasonably inferred as Meta uses AI technologies for age verification and content moderation. The event details a failure in these AI systems' use to prevent underage access, leading to direct harm to minors by exposing them to inappropriate or harmful content. This meets the criteria for an AI Incident because the AI system's use (or misuse/failure) has directly led to harm to a vulnerable group, violating protections intended for minors. The event is not merely a potential risk but describes realized harm and regulatory action based on it.

The European Commission preliminarily finds that Instagram and Facebook infringe the Digital Services Act by failing to protect minors

2026-04-29
Audiovisual451
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems used by Meta for age verification and risk assessment to protect minors on Instagram and Facebook. The failure of these AI systems to accurately identify and prevent underage access has directly led to harm by exposing minors to inappropriate content and privacy risks, violating legal protections under the DSA. The Commission's findings indicate that these AI systems' malfunction or inadequate design is a contributing factor to the harm. Therefore, this is an AI Incident because the AI system's use and malfunction have directly led to violations of rights and potential harm to a vulnerable group (minors).

DOB Check Ineffective: Meta Gets EU Rap For Failing To Keep Kids Away From Facebook, Instagram

2026-04-29
NDTV Profit
Why's our monitor labelling this an incident or hazard?
The event involves AI systems in the form of Meta's age verification and risk assessment tools, which are part of the platforms' AI-driven content moderation and user management. The European Commission's findings indicate these AI systems' inadequacies could plausibly lead to harm to minors (a vulnerable group) by allowing underage access and exposure to harmful content. Since no specific harm has been reported yet but the risk is credible and regulatory action is underway, this qualifies as an AI Hazard rather than an AI Incident. The article focuses on the regulatory assessment and potential future harm rather than a concrete incident of harm caused by AI malfunction or misuse.

Meta in breach of EU law over failing to keep children off platforms

2026-04-29
Capital Brief
Why's our monitor labelling this an incident or hazard?
Meta's platforms rely on AI systems to enforce age restrictions and moderate content. The European Commission's finding that Meta does not have effective measures to stop under-13s from accessing the services indicates a failure in the use of AI systems to mitigate risks to minors. This failure has directly led to a violation of legal obligations intended to protect children's rights and safety, which constitutes harm under the framework. Therefore, this event qualifies as an AI Incident due to the realized harm linked to AI system use and compliance failure.

Europe corners Meta over minors' use of Instagram and Facebook

2026-04-29
Los Primeros TV
Why's our monitor labelling this an incident or hazard?
The event involves AI systems used by Meta for age verification and content moderation, which are failing to prevent underage access and exposure to harmful content. This failure directly leads to harm to the mental health of minors, a recognized form of injury to health. Therefore, this constitutes an AI Incident because the AI system's malfunction or inadequate implementation has directly led to harm. The article details ongoing harm and regulatory action, not just potential risk or complementary information.

EU finds Meta failing to keep under-13s off Facebook, Instagram

2026-04-29
Daily Tribune
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses Meta's failure to keep under-13s off its platforms due to ineffective AI-driven age verification and reporting tools. This failure has allowed children to access age-inappropriate content, posing harm to their health and rights. The involvement of AI systems in account verification and content moderation is reasonably inferred. The harm is realized (children accessing platforms despite age restrictions), and the EU is investigating under legal frameworks protecting fundamental rights. Hence, this qualifies as an AI Incident.

EU finds Meta failing to keep under-13s off Facebook, Instagram

2026-04-29
Daily Tribune
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses Meta's failure to effectively use AI or algorithmic tools to detect and remove underage users, resulting in children under 13 accessing social media platforms and being exposed to harmful content. This is a direct harm to children's health and wellbeing and a violation of protective regulations. The AI system's malfunction or inadequate deployment is a contributing factor to this harm. Therefore, this event meets the criteria for an AI Incident rather than a hazard or complementary information.

EU finds Meta unable to keep children under 13 away from Facebook and Instagram

2026-04-29
Українські Національні Новини (УНН)
Why's our monitor labelling this an incident or hazard?
The event involves AI systems in the form of Facebook and Instagram's content recommendation and user management algorithms, which are central to the platforms' operation. The EU's findings indicate that Meta's AI-driven systems have failed to enforce age restrictions effectively, allowing children under 13 to access the platforms and be exposed to harmful content, which is a violation of rights and a harm to health. This harm is realized and ongoing, not merely potential. The regulatory investigation and threat of fines further confirm the seriousness of the incident. Thus, the event meets the criteria for an AI Incident due to direct or indirect harm caused by AI system use and failure.

Facebook and Instagram must block access by children under 13, says EU

2026-04-29
CNN Brasil
Why's our monitor labelling this an incident or hazard?
The article involves AI systems used by Facebook and Instagram to detect and remove underage users, which is a use of AI in content moderation and user management. The regulators' findings indicate these AI systems are insufficient, implying a risk of harm to children (a vulnerable group) through exposure to inappropriate content or platform use. However, the article does not describe a specific incident where harm has occurred due to AI malfunction or misuse, but rather a regulatory assessment and potential future enforcement. Therefore, this event is best classified as Complementary Information, as it provides important context and updates on AI system performance and regulatory responses without reporting a concrete AI Incident or an immediate AI Hazard.

EC: On its Facebook and Instagram platforms, Meta insufficiently protects... (2)

2026-04-29
www.sme.sk
Why's our monitor labelling this an incident or hazard?
The event involves AI systems used by Meta for age verification and content moderation on Facebook and Instagram. The European Commission's findings indicate that these AI systems fail to adequately prevent children under 13 from accessing the platforms, exposing them to potentially harmful content. This failure constitutes a breach of legal obligations under the EU Digital Services Act, which is designed to protect fundamental rights, including the rights of children to safe online environments. The harm is realized as children are currently exposed to inappropriate content due to insufficient AI safeguards. Therefore, this is an AI Incident because the AI systems' malfunction or inadequacy has directly led to violations of rights and potential harm to children.

EU: Despite its own rules, Meta cannot keep children off Facebook and Instagram, says Brussels

2026-04-29
www.sme.sk
Why's our monitor labelling this an incident or hazard?
The social media platforms use AI systems to detect and remove accounts of children under 13, but the EU investigation found these mechanisms ineffective, leading to children accessing platforms they are not legally allowed to use. This failure directly results in potential harm to children's health and well-being by exposure to inappropriate content, which fits the definition of an AI Incident. The event involves the use and malfunction of AI systems in content moderation and age verification, causing a breach of protective regulations and harm to a vulnerable group.

European Commission criticises Meta: Facebook and Instagram do not adequately protect children under 13

2026-04-29
Aktuality.sk
Why's our monitor labelling this an incident or hazard?
Meta's social media platforms use AI systems for content moderation and age verification. The Commission's findings indicate these AI systems are insufficiently effective in preventing underage children from accessing harmful content, which is a direct violation of legal protections for children. This failure has led to potential harm to children's health and well-being, qualifying as an AI Incident under the framework because the AI system's malfunction or inadequate use has directly led to harm (exposure to inappropriate content). The event is not merely a potential risk but a realized issue as children are currently exposed, and regulatory action is being considered.

Despite its own rules, Meta cannot keep children off Facebook and Instagram, says Brussels

2026-04-29
sita.sk
Why's our monitor labelling this an incident or hazard?
Meta's social media platforms employ AI systems for content moderation and user account verification, including detecting underage users. The EU's findings indicate these AI systems are not effectively preventing children under 13 from accessing the platforms, which exposes them to potential harm from inappropriate content. This failure in AI system use and enforcement directly relates to harm to children's health and safety. Therefore, this event qualifies as an AI Incident due to the realized harm stemming from the AI systems' inadequate performance in enforcing age restrictions and protecting vulnerable users.

Meta in Brussels' sights: Facebook and Instagram do not adequately protect children, according to the EC

2026-04-29
info.sk
Why's our monitor labelling this an incident or hazard?
The event involves AI systems used by Meta for content moderation and age verification, which are central to the concerns raised by the European Commission. However, the article only discusses preliminary findings and potential future penalties without reporting any actual harm or confirmed violations that have already occurred. Therefore, this situation represents a plausible risk of harm or legal violation due to AI system shortcomings but does not yet constitute an AI Incident. It is best classified as Complementary Information because it provides context on regulatory scrutiny and potential governance responses related to AI systems, rather than reporting a realized AI Incident or an imminent hazard.

EC: Meta does not adequately protect children on its Facebook and Instagram platforms

2026-04-29
EuropskeNoviny.sk
Why's our monitor labelling this an incident or hazard?
Meta's platforms use AI systems for content moderation, age verification, and user management. The European Commission's findings indicate that these AI systems or their implementations fail to prevent underage children from accessing the platforms, thereby exposing them to potentially harmful content. This constitutes a violation of legal protections intended to safeguard children's rights and well-being, which falls under harm to groups of people (children). Since the harm is ongoing and the violation of law is established by the Commission's investigation, this qualifies as an AI Incident. The AI system's use and malfunction (insufficient age verification and content protection) directly contribute to the harm and legal breach.

European Commission: Meta violates EU rules over weak protection of children on its networks

2026-04-29
Seznam Zprávy
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems used by Meta to enforce age restrictions and manage risks related to child access on Instagram and Facebook. The European Commission's findings indicate that these AI systems fail to effectively prevent underage access, which is a violation of legal obligations protecting children's rights. This failure has directly led to harm by exposing children under 13 to potentially unsafe online environments. The involvement of AI in the development and use stages, combined with the resulting violation of rights and potential harm to children, meets the criteria for an AI Incident rather than a hazard or complementary information.

The European Commission has preliminarily found

2026-04-29
Deník N
Why's our monitor labelling this an incident or hazard?
The event involves AI systems as Instagram and Facebook use AI for content moderation and risk assessment. The European Commission's finding relates to the platforms' failure to mitigate risks to minors, which is a regulatory compliance issue indicating potential harm. However, no specific harm or incident is described as having occurred yet; the focus is on the preliminary regulatory assessment of risk management failures. Therefore, this is best classified as Complementary Information, as it provides governance and regulatory response context rather than reporting a concrete AI Incident or an AI Hazard with imminent harm.

European Commission: Meta has violated child safety rules

2026-04-29
Anadolu Ajansı
Why's our monitor labelling this an incident or hazard?
The event involves AI systems or algorithmic tools used by Meta to enforce age restrictions and protect minors on social media platforms. The failure of these systems to effectively prevent underage access and to properly handle reports constitutes a breach of legal obligations under the Digital Services Act, which protects fundamental rights and safety of children. Since the harm (exposure of children under 13 to potentially harmful content and privacy risks) has already occurred due to the AI system's inadequate functioning, this qualifies as an AI Incident under the framework, specifically a violation of human rights and breach of legal obligations (point c).

EU accuses Meta of breaking the rules: it must do more to protect children under 13 from social media

2026-04-29
Gazeta Panorama Online
Why's our monitor labelling this an incident or hazard?
Meta's platforms Facebook and Instagram employ AI systems for content moderation and user management. The European Commission's findings indicate that these AI systems are not effectively preventing children under 13 from accessing the platforms, violating the Digital Services Act and failing to protect children's rights. This failure directly leads to harm by exposing minors to inappropriate content and risks on social media. Since the AI systems' inadequacy in enforcing age restrictions is a contributing factor to this harm and legal violation, the event meets the criteria for an AI Incident under the OECD framework.

EU details Meta's violations: not doing enough to keep children away from Instagram and Facebook

2026-04-29
Indeksonline.net
Why's our monitor labelling this an incident or hazard?
The event involves AI systems in Meta's platforms used to enforce age restrictions and protect children. The failure or insufficiency of these AI systems to prevent underage access has indirectly led to harm by exposing children to potentially harmful content and interactions, violating protections under applicable laws like the Digital Services Act. The European Commission's decision highlights this harm and regulatory breach. Therefore, this qualifies as an AI Incident because the AI system's malfunction or inadequate use has directly or indirectly led to harm to a vulnerable group (children) and a breach of legal obligations.

EU accuses "Meta" of failing to protect children under 13 on social networks

2026-04-29
RTSH
Why's our monitor labelling this an incident or hazard?
Meta's platforms use AI systems to detect underage users and inappropriate content. The EU's accusation that Meta's measures are ineffective implies that the AI systems or their deployment have failed to prevent harm to children under 13. This constitutes an AI Incident because the AI system's use or malfunction has directly or indirectly led to harm (exposure of children to inappropriate content and violation of digital content rules). The event is not merely a policy or governance update but concerns a failure in AI system effectiveness causing harm, meeting the criteria for an AI Incident.