
Major tech companies, including Google, Meta, Microsoft, and Snapchat, have pledged to continue using AI-powered tools to scan for child sexual abuse material (CSAM) in the EU, despite the expiration of the legal framework permitting such scanning. This raises privacy concerns and exposes the companies to potential violations of EU law.[AI generated]
Why is our monitor labelling this an incident or hazard?
The event involves AI systems implicitly, as the detection of CSAM on platforms operated by Google, Meta, Microsoft, and Snapchat typically relies on AI technologies. The expiration of the legal framework means these AI systems can no longer be used on the same legal basis as before, which could plausibly lead to increased harm, such as child sexual abuse material spreading undetected. Since no actual harm has been reported yet but the risk is credible and significant, this fits the definition of an AI Hazard rather than an AI Incident. The companies' joint statement highlights the potential for harm, confirming the plausible future risk. Therefore, the event is best classified as an AI Hazard.[AI generated]