
Worldcoin, created by Sam Altman’s Tools for Humanity, uses AI-powered iris scans to issue digital IDs and tokens. Founded in 2019, the project is now banned or under investigation in multiple countries amid allegations of biometric data misuse, privacy violations and potential human rights risks.[AI generated]
Why is our monitor labelling this an incident or hazard?
The article involves an AI system (biometric iris scanning and AI-based human verification) whose deployment has led to regulatory scrutiny and suspension in some countries over privacy and data-protection concerns. These concerns indicate plausible risks of harm (privacy violations), but no direct or indirect harm has been reported to have occurred. The situation therefore fits the definition of an AI Hazard: the development and use of the AI system could plausibly lead to harm, but no incident has been confirmed. It is not Complementary Information, because the main focus is not on responses to a past incident but on current regulatory concerns and suspensions. It is not an AI Incident, because no actual harm has been documented yet.[AI generated]