
The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.
The EU's proposed Chat Control legislation would mandate AI-driven scanning of all private messages, including encrypted ones, to detect child sexual abuse material. Critics, including several EU member states and digital rights advocates, warn that, if implemented, it could undermine encryption, enable mass surveillance, and threaten citizens' privacy.[AI generated]
Why is our monitor labelling this an incident or hazard?
The proposal would deploy AI-based automated scanning systems to detect CSAM in encrypted communications, which constitutes an AI system use case. The law is not yet in effect, so no direct harm has occurred, but the article cites credible expert concerns that the scanning could weaken encryption and privacy protections, plausibly leading to human rights violations and increased cybersecurity risks. This fits the definition of an AI Hazard: the development and potential use of these AI systems could plausibly lead to significant harms in the future. Because the article describes neither an actual incident or realized harm, nor a response or update to a prior event, it is not an AI Incident or Complementary Information.[AI generated]