INAI launches probe into Worldcoin’s biometric data practices

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Mexico’s National Institute for Transparency (INAI) has opened an official investigation into Worldcoin, the iris-scanning cryptocurrency project by Tools for Humanity (founded by Sam Altman and Alex Blania), over alleged breaches of personal data protection and misuse of biometric AI systems.[AI generated]

Why's our monitor labelling this an incident or hazard?

Worldcoin uses AI-based biometric iris scanning technology to collect and analyze personal data. The investigation by the INAI into possible data breaches indicates that the AI system's use may have led to violations of personal data protection rights, which is a breach of applicable law protecting fundamental rights. The involvement of AI in processing biometric data and the potential harm to individuals' privacy rights meets the criteria for an AI Incident. The article describes an ongoing investigation into realized or strongly suspected harm rather than just a potential risk, so it is not merely an AI Hazard or Complementary Information.[AI generated]
AI principles
Privacy & data governance; Respect of human rights; Transparency & explainability; Accountability; Robustness & digital security; Democracy & human autonomy

Industries
Financial and insurance services; Digital security; Government, security, and defence; IT infrastructure and hosting

Affected stakeholders
Consumers

Harm types
Human or fundamental rights; Reputational

Severity
AI incident

Business function
Citizen/customer service; ICT management and information security

AI system task
Recognition/object detection


Articles about this incident or hazard

Mexico: INAI opened an investigation against Worldcoin over a possible data breach

2024-09-10
Cointelegraph
Why's our monitor labelling this an incident or hazard?
Worldcoin's biometric data collection system involves AI technologies for processing and verifying personal data. The investigation by INAI concerns possible violations of data protection laws, which relate to breaches of fundamental rights. Since the investigation is about potential or alleged violations, and no confirmed harm or breach has been established yet, this event represents a plausible risk of harm rather than a realized harm. Therefore, it qualifies as an AI Hazard because the AI system's use could plausibly lead to violations of human rights (data privacy). It is not an AI Incident because no actual harm has been confirmed or reported at this stage. It is not Complementary Information because the article focuses on the investigation initiation, not on responses or updates to a prior incident. It is not Unrelated because AI systems are involved in biometric data processing and the investigation concerns their potential misuse.
INAI launches ex officio investigation into cryptocurrency company Worldcoin

2024-09-09
El Economista
Why's our monitor labelling this an incident or hazard?
Worldcoin uses AI-enabled biometric data collection systems (Orb device scanning iris and face) to issue cryptocurrency. The investigation by INAI is due to possible breaches of personal data protection laws, which constitute a violation of rights if proven. Although no confirmed harm is reported, the potential for harm to individuals' privacy and data rights is credible and under official scrutiny. This fits the definition of an AI Hazard, as the AI system's use could plausibly lead to an AI Incident (violation of rights). There is no indication that harm has already occurred or been confirmed, so it is not an AI Incident. The article is not merely complementary information since it reports the start of an official investigation into possible violations, indicating a credible risk of harm.
INAI investigates possible breach of Worldcoin users' personal data

2024-09-07
sipse.com
Why's our monitor labelling this an incident or hazard?
Worldcoin uses AI-based biometric iris scanning technology to collect and analyze personal data. The investigation by the INAI into possible data breaches indicates that the AI system's use may have led to violations of personal data protection rights, which is a breach of applicable law protecting fundamental rights. The involvement of AI in processing biometric data and the potential harm to individuals' privacy rights meets the criteria for an AI Incident. The article describes an ongoing investigation into realized or strongly suspected harm rather than just a potential risk, so it is not merely an AI Hazard or Complementary Information.
INAI investigates Worldcoin over possible data breach

2024-09-07
NTR Zacatecas
Why's our monitor labelling this an incident or hazard?
Worldcoin is an AI-enabled system that uses biometric iris scanning to create digital identities. The investigation by INAI into possible data breaches and misuse of sensitive biometric data directly relates to violations of fundamental rights and data protection laws, which fits the definition of an AI Incident involving violations of human rights or breaches of applicable law. Although the article does not confirm realized harm, the investigation implies that harm may have occurred or is ongoing, and the use of biometric AI systems for identity verification is central to the event. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information.
Irreparable harm

2024-09-09
Perspectivas
Why's our monitor labelling this an incident or hazard?
The presence of AI systems is reasonably inferred from the use of biometric iris scanning by Worldcoin, which involves AI technologies for biometric recognition, and the management of large datasets by telecom and electoral institutions likely involves AI or advanced algorithmic systems. The incidents involve the misuse or failure to protect data processed or collected by these AI systems, leading to violations of privacy and data protection rights, which are harms under the framework. The article reports actual data breaches and exposures, not just potential risks, thus constituting AI Incidents rather than hazards or complementary information.