Researchers Warn of Privacy Risks in AI-Based Age Verification Systems


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Over 370 security and privacy experts from 29 countries have urged governments to pause the rollout of AI-driven age verification systems on social media. They warn that these systems, already in use or planned in countries such as France and Australia, pose significant risks to privacy, security, and autonomy, and are being deployed without sufficient safeguards or scientific understanding.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves AI systems used for age verification and estimation, which are explicitly described as using biometric data, behavior analysis, and identity verification—tasks indicative of AI. The concerns raised relate to potential harms including privacy violations, security risks, and discrimination, which align with the definitions of harm in the framework. However, the article focuses on warnings and potential risks rather than reporting actual incidents of harm caused by these AI systems. Thus, the event fits the definition of an AI Hazard, where the development and use of AI systems could plausibly lead to harm but such harm has not yet been realized or documented at scale.[AI generated]
AI principles
Privacy & data governance
Robustness & digital security

Industries
Media, social platforms, and marketing

Affected stakeholders
Consumers

Harm types
Human or fundamental rights

Severity
AI hazard

Business function
Compliance and justice

AI system task
Recognition/object detection


Articles about this incident or hazard


'More harm than good': Why hundreds of researchers want a pause on online age verification

2026-03-03
The Indian Express

'Dangerous and unacceptable:' Privacy experts warn against age checks

2026-03-02
Euronews English
Why's our monitor labelling this an incident or hazard?
The article discusses AI systems used for age verification and highlights plausible future harms related to privacy breaches, security vulnerabilities, and social inequality. Since no actual harm has been reported but credible risks are identified, this qualifies as an AI Hazard. The letter from experts urging governments to delay implementation until safety and privacy concerns are resolved further supports this classification. There is no indication of realized harm or incident, so it is not an AI Incident. It is more than just complementary information because the main focus is on the potential risks and warnings about the technology's deployment, not on responses or ecosystem updates.

Resist 'dangerous and socially unacceptable' age checks for social media, scientists warn

2026-03-02
POLITICO
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-related age verification technologies being developed and deployed by companies like OpenAI, Roblox, and Discord. The concerns focus on the potential harms to privacy, security, and autonomy if such systems are implemented without full understanding. Since no actual harm or incident has been reported, and the focus is on the potential risks and calls for caution, this fits the definition of an AI Hazard rather than an AI Incident or Complementary Information. It is not unrelated because AI systems are central to the age verification mechanisms discussed.

Security Researchers Warn Age Verification Laws Are Building a Global Surveillance System

2026-03-03
Reclaim The Net
Why's our monitor labelling this an incident or hazard?
The event involves AI systems used for age verification that infer user identity and behavior to grant or restrict access, which fits the definition of an AI system. The concerns raised are about the potential misuse and societal harms that could arise from these systems' deployment, including surveillance and censorship, which are plausible future harms. Since the harms are potential and the letter calls for a pause before further rollout, this constitutes an AI Hazard rather than an AI Incident. The event does not describe realized harm but warns of credible risks associated with AI system use in identity verification at scale.