Tech Giants Continue AI-Based CSAM Scanning in EU Despite Legal Expiry



Major tech companies, including Google, Meta, Microsoft, and Snapchat, have pledged to continue using AI-powered tools to scan for child sexual abuse material (CSAM) in the EU, despite the expiration of the legal framework permitting such scanning. Continuing without that legal basis raises privacy concerns and the prospect of legal violations under EU law.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves AI systems implicitly, as scanning for CSAM on platforms like Google, Meta, Microsoft, and Snap typically relies on AI technologies for detection. The expiration of the legal framework means these AI systems cannot be used as before, which could plausibly lead to increased harm (child sexual abuse material spreading undetected). Since no actual harm is reported yet but the risk is credible and significant, this fits the definition of an AI Hazard rather than an AI Incident. The companies' joint statement highlights the potential for harm, confirming a plausible future risk. Therefore, the event is best classified as an AI Hazard.[AI generated]
AI principles
Privacy & data governance
Accountability

Industries
Media, social platforms, and marketing

Affected stakeholders
Consumers

Harm types
Human or fundamental rights

Severity
AI hazard

Business function
Compliance and justice

AI system task
Recognition/object detection


Articles about this incident or hazard


Major social platforms issue joint warning about child safety in EU

2026-04-06
Social Media Today
Why's our monitor labelling this an incident or hazard?
The event involves AI systems implicitly, as scanning for CSAM on platforms like Google, Meta, Microsoft, and Snap typically relies on AI technologies for detection. The expiration of the legal framework means these AI systems cannot be used as before, which could plausibly lead to increased harm (child sexual abuse material spreading undetected). Since no actual harm is reported yet but the risk is credible and significant, this fits the definition of an AI Hazard rather than an AI Incident. The companies' joint statement highlights the potential for harm, confirming a plausible future risk. Therefore, the event is best classified as an AI Hazard.

EU has lost vital tool in fight against child sexual abuse, ombudsman warns

2026-04-08
Luxembourg Times
Why's our monitor labelling this an incident or hazard?
The chat monitoring scheme used AI algorithms to detect CSAM in private messages; this qualifies as an AI system, as it automatically infers from input data to generate outputs (detection flags). Its use directly contributed to identifying and mitigating harm related to child sexual abuse, a serious violation of rights and a harm to communities. The discontinuation of this AI system removes a direct protective measure; the event therefore concerns an AI Incident, because the AI system's use was directly linked to harm prevention and its removal increases risk. Although concerns about privacy and false positives are noted, the primary harm context is the loss of an AI-enabled protective tool, which qualifies as an AI Incident rather than a hazard or complementary information.

Google, Meta, Microsoft and Snapchat to continue CSAM screening after EU legal loophole expires

2026-04-07
Telecompaper
Why's our monitor labelling this an incident or hazard?
The article focuses on the continued use of AI systems for CSAM detection and blocking by major companies after a legal derogation expired. There is no indication of harm occurring or plausible harm arising from the AI systems themselves; rather, it is an update on the companies' operational decisions in response to legal changes. This fits the definition of Complementary Information, as it provides context and updates on AI system use and governance without describing a new AI Incident or AI Hazard.

Big tech vows to continue CSAM scanning in Europe despite expiration of law allowing it

2026-04-06
therecord.media
Why's our monitor labelling this an incident or hazard?
The AI system (CSAM scanning technology using hash matching) is explicitly involved in the event. Its use without a current legal basis in the EU constitutes a breach of obligations under applicable law intended to protect fundamental rights, specifically privacy rights. The scanning directly impacts individuals' communications and privacy, which are protected rights. The event describes ongoing scanning despite the legal expiration, indicating that the AI system's use is causing a violation of rights. This fits the definition of an AI Incident under category (c): violations of human rights or breach of legal obligations. Although the scanning aims to prevent harm to children, the legal and privacy violations are realized harms. Therefore, the event is best classified as an AI Incident.
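As context for the hash-matching technique named in this rationale, a minimal sketch follows (in Python). It is illustrative only, not any company's actual pipeline: real deployments use robust perceptual hashes such as Microsoft's PhotoDNA rather than a plain cryptographic digest, and the hash value, file path, and function names below are hypothetical placeholders.

    import hashlib

    # Hypothetical digest set standing in for a real database of
    # known-CSAM hashes (placeholder value only).
    KNOWN_HASHES = {
        "5d41402abc4b2a76b9719d911017c592",
    }

    def file_digest(path: str) -> str:
        """Compute the MD5 digest of a file's bytes, read in chunks."""
        h = hashlib.md5()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(8192), b""):
                h.update(chunk)
        return h.hexdigest()

    def is_flagged(path: str) -> bool:
        """Flag a file whose digest appears in the known-hash set."""
        # Example: is_flagged("upload.jpg") returns True only if the
        # file's digest is present in KNOWN_HASHES.
        return file_digest(path) in KNOWN_HASHES

Matching against a fixed digest list is exact by design; the perceptual hashes used in practice are instead robust to resizing and re-encoding, which is what makes such systems effective but also raises the false-positive and privacy questions discussed in these articles.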

Europe Risks Child Safety as CSAM Detection Derogation Expires

2026-04-07
News Directory 3
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of automated tools (hash-matching technology) to detect CSAM, which qualifies as an AI system under the framework. The expiration of the legal derogation means these AI systems may no longer be legally used, creating a plausible risk that detection and reporting of CSAM will decrease significantly, leading to harm to children (harm to communities and individuals). Since the harm is not yet realized but is a credible and foreseeable consequence of the regulatory gap, this event is best classified as an AI Hazard. There is no indication of an actual incident or malfunction causing harm at this time, nor is the article primarily about responses or updates, so it is not Complementary Information.

EU CSAM Law Lapse Puts Child Safety At Online Risks

2026-04-07
The Cyber Express
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-enabled detection tools (hash-matching technology) used by major tech companies to identify CSAM. The expiration of the legal framework threatens the continued use of these AI systems, which have been crucial in preventing harm to children by detecting and reporting abusive content. Although the article does not report a current increase in harm, it warns of a likely drop in detection and reporting, which would plausibly lead to increased circulation of CSAM and associated harms. This fits the definition of an AI Hazard, as the AI system's reduced use could plausibly lead to significant harm. It is not an AI Incident because the harm is not described as currently occurring due to AI system failure or misuse, but rather as a potential consequence of legal and regulatory changes affecting AI system deployment.

Despite expiry of ePrivacy derogation, four internet giants will continue to voluntarily detect child sexual abuse material

2026-04-07
agenceurope.eu
Why's our monitor labelling this an incident or hazard?
The AI systems used for scanning and detecting CSAM are clearly involved, as these platforms employ AI to identify illegal content. However, the article does not report any new harm caused by these AI systems, nor does it describe a plausible future harm arising from their use. Instead, it discusses the continuation of existing AI-based detection practices and the platforms' criticism of regulatory inaction. This fits the definition of Complementary Information, as it provides context and updates on AI system use and governance without reporting a new AI Incident or AI Hazard.