HYBE's Face-Scanning System Sparks Privacy Concerns


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

HYBE's introduction of a face-scanning entry system for K-pop concerts has raised privacy concerns. The system, developed with Toss and InterPark Triple, requires fans to upload biometric data for entry. Despite assurances of data security, the collection and handling of biometric information have sparked cybersecurity and privacy debates.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article describes the introduction of an AI-driven face-recognition system for event access, with no actual data breach or misuse reported yet. However, storing and processing biometric data on a third-party server creates a credible risk of privacy violations or security incidents in the future. Because the harm is potential rather than realized, this constitutes an AI Hazard.[AI generated]
AI principles
Privacy & data governance
Transparency & explainability
Robustness & digital security
Accountability
Respect of human rights

Industries
Arts, entertainment, and recreation
Digital security

Affected stakeholders
Consumers

Harm types
Human or fundamental rights
Reputational

Severity
AI hazard

Business function
Citizen/customer service
ICT management and information security

AI system task
Recognition/object detection


Articles about this incident or hazard


HYBE faces the music over face-scanning entry system

2024-12-25
JoongAng Ilbo
Why's our monitor labelling this an incident or hazard?
The article describes the introduction of an AI-driven face-recognition system for event access, with no actual data breach or misuse reported yet. However, storing and processing biometric data on a third-party server creates a credible risk of privacy violations or security incidents in the future. Because the harm is potential rather than realized, this constitutes an AI Hazard.

Hybe to introduce facial recognition entry at concerts and fan meets in South Korea

2024-12-27
The Straits Times
Why's our monitor labelling this an incident or hazard?
The article details the deployment of an AI system (facial recognition) for event entry but does not describe any realized harm or incident caused by the system. There is no indication of injury, rights violations, or other harms occurring or plausibly imminent. The system is being introduced with user consent options and traditional alternatives remain available. Therefore, this is not an AI Incident or AI Hazard. It is best classified as Complementary Information as it provides context on AI adoption and deployment in a real-world setting without reporting harm or risk of harm.

HYBE launches Face Pass for concerts, sparks privacy concerns

2024-12-26
The Korea Times
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (facial recognition technology) in a real-world application. Although no direct harm has been reported yet, the article emphasizes credible concerns about privacy and data security risks related to the collection and processing of biometric data. These concerns represent plausible future harms, such as data breaches or misuse of sensitive personal information, which align with the definition of an AI Hazard. Since no actual harm or incident has occurred yet, and the main focus is on potential risks and privacy concerns, the event is best classified as an AI Hazard.

HYBE to Launch Facial Recognition Entry System 'Face Pass' at K-pop Concerts in 2025

2024-12-24
idtechwire.com
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions the use of facial recognition AI systems for identity verification at concerts, which qualifies as an AI system. However, no direct or indirect harm has occurred yet; the system is planned for future deployment. The concerns raised about privacy and data protection reflect potential risks but are not evidence of realized harm. Hence, this event fits the definition of an AI Hazard, as the AI system's use could plausibly lead to harms such as privacy violations or data misuse in the future, but no incident has yet materialized.