Worldcoin’s AI Iris-Scan Crypto Faces Global Bans Over Privacy Breaches


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Worldcoin, created by Sam Altman’s Tools for Humanity, uses AI-powered iris scans to issue digital IDs and tokens. Founded in 2019, the project is now banned or under investigation in multiple countries amid allegations of biometric data misuse, privacy violations and potential human rights risks.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article involves an AI system (biometric iris scanning and AI-based human verification) whose deployment has led to regulatory scrutiny and suspension in some countries over concerns about privacy and data protection. These concerns indicate plausible risks of harm (privacy violations), but no direct or indirect harm has yet been reported. The situation therefore fits the definition of an AI Hazard: the development and use of the AI system could plausibly lead to harm, but no incident has been confirmed. It is not Complementary Information, because the main focus is not on responses to a past incident but on current regulatory concerns and suspensions, and it is not an AI Incident, because no actual harm has been documented.[AI generated]
AI principles
Privacy & data governance
Respect of human rights
Transparency & explainability
Accountability
Robustness & digital security

Industries
Digital security
Financial and insurance services
Government, security, and defence
IT infrastructure and hosting

Affected stakeholders
Consumers

Harm types
Human or fundamental rights
Economic/Property
Reputational
Public interest

Severity
AI hazard

Business function:
Other

AI system task:
Recognition/object detection
Goal-driven organisation


Articles about this incident or hazard


"Inutile, inefficace et dangereux"... Qu'est-ce que "Worldcoin", le projet de cryptomonnaie controversé du fondateur d'OpenAI ?

2025-05-06
Le Figaro.fr
Why's our monitor labelling this an incident or hazard?
Worldcoin uses AI-based biometric identity verification systems to collect sensitive personal data. The controversy and bans in multiple countries suggest that the AI system's use has led to, or is causing, violations of privacy and possibly other human rights. Because the system's use is reported to have led directly to these legal and ethical harms, this qualifies as an AI Incident under the framework's category of violations of human rights or breach of applicable law.

What is Worldcoin, the cryptocurrency that is supposed to prove we are not robots?

2025-05-07
SudOuest.fr
Why's our monitor labelling this an incident or hazard?
The article involves an AI system (biometric iris scanning and AI-based human verification) whose deployment has led to regulatory scrutiny and suspension in some countries over concerns about privacy and data protection. These concerns indicate plausible risks of harm (privacy violations), but no direct or indirect harm has yet been reported. The situation therefore fits the definition of an AI Hazard: the development and use of the AI system could plausibly lead to harm, but no incident has been confirmed. It is not Complementary Information, because the main focus is not on responses to a past incident but on current regulatory concerns and suspensions, and it is not an AI Incident, because no actual harm has been documented.

What is Worldcoin?

2025-05-07
lejdd.fr
Why's our monitor labelling this an incident or hazard?
The Worldcoin system uses AI-based biometric scanning and identity verification, which qualifies it as an AI system. The article reports regulatory bans and investigations due to concerns about data protection and identity theft risks, indicating plausible risks of harm to individuals' rights and privacy. Since no actual harm is described as having occurred yet, but the risks are credible and recognized by authorities and experts, this event fits the definition of an AI Hazard rather than an AI Incident. It is not merely complementary information because the main focus is on the potential risks and regulatory responses to the AI system's use, not on a past incident or a general update.