Philippines Halts AI Iris-Scanning Over Data Privacy Violations


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

The Philippine National Privacy Commission ordered Tools for Humanity to stop collecting and processing biometric data via its AI-powered Orb iris-scanning system, citing violations of data privacy laws. The system's practices, including invalid consent and excessive data collection, exposed individuals to risks like identity theft and fraud.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event explicitly involves an AI system used for biometric verification to distinguish humans from bots or AI accounts, which falls under AI systems as it processes biometric data with AI techniques. The NPC's cease and desist order is due to violations of data privacy laws and risks of identity theft and fraud, which are harms to individuals' rights and security. The AI system's use and data processing practices have directly led to these harms, fulfilling the criteria for an AI Incident. The event is not merely a potential risk or a complementary update but a regulatory action in response to realized or ongoing harm related to AI system use.[AI generated]
AI principles
Privacy & data governance
Respect of human rights
Transparency & explainability
Accountability

Industries
Digital security

Affected stakeholders
Consumers

Harm types
Human or fundamental rights
Economic/Property

Severity
AI incident

Business function:
ICT management and information security

AI system task:
Recognition/object detection


Articles about this incident or hazard


NPC orders Tools for Humanity to stop operations over data privacy concerns

2025-10-09
GMA Network
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system used for biometric verification to distinguish humans from bots or AI accounts, which falls under AI systems as it processes biometric data with AI techniques. The NPC's cease and desist order is due to violations of data privacy laws and risks of identity theft and fraud, which are harms to individuals' rights and security. The AI system's use and data processing practices have directly led to these harms, fulfilling the criteria for an AI Incident. The event is not merely a potential risk or a complementary update but a regulatory action in response to realized or ongoing harm related to AI system use.

NPC stops iris-scan tech, but DICT eyes it for use

2025-10-08
Rappler
Why's our monitor labelling this an incident or hazard?
The iris-scanning technology used by Worldcoin involves AI systems for biometric data processing and identity verification. The NPC's cease and desist order is based on violations of the Data Privacy Act, indicating harm to individuals' privacy rights has occurred. Biometric data misuse or unauthorized processing constitutes a violation of fundamental rights and legal obligations, fitting the definition of an AI Incident under violations of human rights and applicable law. The event describes direct harm through unauthorized data collection and processing, and the regulatory response confirms the seriousness of the issue. Therefore, this qualifies as an AI Incident.

Philippine privacy body orders World App to halt biometric data collection

2025-10-09
Manila Standard
Why's our monitor labelling this an incident or hazard?
The World App and Orb verification system use AI-enabled biometric data processing for identity verification. The NPC's investigation revealed that the system's data practices violated privacy laws and compromised individuals' rights, leading to potential and actual harms such as identity theft and fraud. These harms are directly linked to the AI system's use and data processing practices. Therefore, this event qualifies as an AI Incident because the AI system's use has directly led to violations of fundamental rights and risks of harm to individuals.

NPC orders 'Tools for Humanity' to halt data processing over privacy violations

2025-10-07
Newsbytes.PH
Why's our monitor labelling this an incident or hazard?
The Orb biometric verification system qualifies as an AI system because it processes biometric data for identity verification, a task involving sophisticated data processing and inference. The NPC's order arises from the use of this AI system in a manner that violates privacy rights and data protection laws, leading to direct harm to individuals' rights and potential identity fraud. Therefore, this event meets the criteria of an AI Incident due to the realized harm (privacy violations and risks of identity fraud) directly linked to the AI system's use.

ICT groups back NPC order, seek transparency audit of World.Org

2025-10-09
Newsbytes.PH
Why's our monitor labelling this an incident or hazard?
The event describes a concrete regulatory action against an AI-enabled biometric verification system that has already led to violations of the Data Privacy Act, specifically coercive consent and excessive data collection. These constitute breaches of fundamental rights protected by law, fulfilling the criteria for an AI Incident. The AI system's role in processing biometric data is central to the incident. The proposed independent audit and support from ICT groups are complementary responses but do not change the classification of the core event as an AI Incident.

NPC halts iris scanning operations over data privacy violations

2025-10-09
Back End News
Why's our monitor labelling this an incident or hazard?
The AI system in question is the Orb verification system, which uses biometric iris scanning, an AI-related technology for identity verification. The NPC's findings highlight invalid consent, lack of transparency, and excessive data collection, which are direct violations of data privacy rights. The harms include potential identity theft, fraud, and irreversible damage due to misuse of permanent biometric identifiers. Since these harms have been identified and the NPC has issued a Cease and Desist Order to prevent further harm, this qualifies as an AI Incident involving violations of human rights and data privacy laws.

Philippines privacy authority orders halt to World biometrics processing

2025-10-09
Biometric Update
Why's our monitor labelling this an incident or hazard?
An AI system is involved as the World App and its iris biometric orbs constitute biometric data processing systems that rely on AI technologies for biometric recognition and verification. The NPC's order is a regulatory response to the misuse and mishandling of biometric data, which constitutes a violation of fundamental data privacy rights and poses harm to individuals' privacy and security. Since the event describes realized harm in terms of violation of rights and risk of injury due to AI system misuse, it qualifies as an AI Incident under the category of violations of human rights or breach of obligations under applicable law protecting fundamental rights.

Tools for Humanity asked to stop operations

2025-10-09
BusinessWorld
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system used for biometric verification and AI-driven fraud detection. The National Privacy Commission's cease-and-desist order is based on violations of data privacy rights, including invalid consent and excessive biometric data collection, which constitute a breach of fundamental rights. The harm is realized as the privacy and integrity of individuals' biometric data are compromised, fulfilling the criteria for an AI Incident under violations of human rights and breach of legal obligations. The AI system's development and use directly led to these harms, justifying classification as an AI Incident.

Tech firm to appeal NPC privacy order

2025-10-09
Daily Tribune
Why's our monitor labelling this an incident or hazard?
The article describes a regulatory action against an AI-related system (World ID) due to privacy violations and potential risks such as identity theft and fraud. Although these risks are serious, the article does not report any actual harm caused by the AI system. The NPC's cease and desist order and the company's appeal indicate a dispute over compliance and potential future harm. Since the AI system's use could plausibly lead to harm if the issues are not resolved, this qualifies as an AI Hazard. It is not Complementary Information, because the main focus is not an update or response to a past incident, nor is it unrelated, as the AI system and its risks are central to the event.

World PH disputes NPC ruling, defends data privacy practices

2025-10-10
Inquirer.net
Why's our monitor labelling this an incident or hazard?
The event explicitly involves an AI system that processes biometric data for identity verification. The NPC's cease and desist order is based on violations of the Data Privacy Act, highlighting risks such as identity theft, fraud, and reputational harm, which are harms to individuals and breaches of legal obligations protecting fundamental rights. The AI system's use has directly led to these harms or risks, fulfilling the criteria for an AI Incident. The company's dispute and intention to file a motion for reconsideration do not negate the fact that the AI system's operation has caused or is causing harm as defined. Hence, the classification as AI Incident is appropriate.

Philippines orders halt to biometric data collection by Global Identity Platform

2025-10-10
Lexology
Why's our monitor labelling this an incident or hazard?
The event involves an AI-related system: a global digital identity platform that uses biometric data for proof-of-human tools, an AI-relevant technology. The platform's unauthorized processing and inadequate consent mechanisms have directly led to violations of fundamental privacy rights, which are protected as human rights under applicable law. The NPC's findings highlight real risks of harm to individuals, including permanent exposure to identity fraud and misuse of compromised biometric data. Since harm to rights has occurred and regulatory action has been taken to stop ongoing harm, this qualifies as an AI Incident under the framework, specifically under violations of human rights and breach of legal obligations protecting fundamental rights.