FTC Rejects AI Facial Age Estimation for Parental Consent Over Privacy Concerns

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

The US Federal Trade Commission unanimously rejected a proposal by the ESRB, Yoti, and SuperAwesome to use AI-powered facial recognition for age verification under COPPA, citing concerns about privacy, accuracy, and parental consent. The decision followed public feedback and leaves the technology pending further evaluation by NIST.[AI generated]

Why's our monitor labelling this an incident or hazard?

The article explicitly discusses an AI system (facial age estimation) proposed for use in age verification, covering both the system's development and its intended use. However, the FTC denied the proposal because the technology's effectiveness is unproven, and no actual harm or violation has occurred. The denial is without prejudice, meaning the technology could still be approved and deployed in future. Since no harm has yet occurred but the AI system could plausibly lead to harms such as privacy violations, bias, or incorrect age estimation if deployed, this fits the definition of an AI Hazard rather than an AI Incident or Complementary Information. It is not unrelated because the AI system and its regulatory context are central to the event.[AI generated]
AI principles
Privacy & data governance
Robustness & digital security
Transparency & explainability
Accountability
Respect of human rights
Democracy & human autonomy

Industries
Media, social platforms, and marketing
Digital security
Government, security, and defence

Affected stakeholders
Children

Harm types
Human or fundamental rights

Severity
AI hazard

Business function
Compliance and justice

AI system task
Recognition/object detection


Articles about this incident or hazard

US government denies ESRB's AI-powered face-scanning 'age estimation' proposal, but it's probably not gone for good

2024-04-03
Yahoo News
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses an AI system (facial age estimation) proposed for use in age verification, covering both the system's development and its intended use. However, the FTC denied the proposal because the technology's effectiveness is unproven, and no actual harm or violation has occurred. The denial is without prejudice, meaning the technology could still be approved and deployed in future. Since no harm has yet occurred but the AI system could plausibly lead to harms such as privacy violations, bias, or incorrect age estimation if deployed, this fits the definition of an AI Hazard rather than an AI Incident or Complementary Information. It is not unrelated because the AI system and its regulatory context are central to the event.
FTC Rejects ESRB's Proposal to Use Facial Recognition Age Verification Tool

2024-04-03
IGN
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions a facial recognition AI system designed to estimate user age, which is an AI system by definition. The FTC's rejection prevents the system's deployment, so no direct harm has occurred yet. The concerns raised (privacy, accuracy, deepfakes) indicate plausible risks of harm if the system were used, such as privacy violations or wrongful data collection from minors. Since the event centers on the potential for harm from the AI system's use rather than an actual incident of harm, it fits the definition of an AI Hazard rather than an AI Incident or Complementary Information. It is not unrelated because the AI system and its regulatory evaluation are central to the event.
FTC Rejects ESRB's Proposal to Use Facial Recognition Age Verification Tool

2024-04-03
IGN Southeast Asia
Why's our monitor labelling this an incident or hazard?
The facial recognition age verification tool involves an AI system that estimates age from images. The FTC's rejection prevents its deployment, so no direct harm has occurred. The concerns raised (privacy, accuracy, deepfakes) indicate plausible future harms if the system were used, fitting the definition of an AI Hazard. Since the event focuses on the regulatory decision and potential risks rather than an actual incident of harm, it is best classified as an AI Hazard.
FTC Temporarily Denies ESRB Application For Face Scan Tech

2024-04-04
GameSpot
Why's our monitor labelling this an incident or hazard?
The facial age estimation technology is an AI system as it uses facial scanning and estimation models for age verification. However, the FTC's denial is a procedural step awaiting more information and does not indicate any harm or malfunction. No injury, rights violation, or other harm has occurred or been reported. The event highlights a regulatory process and potential future use of AI for age verification, which could plausibly lead to harm if misused, but currently no harm is realized. Therefore, this is best classified as Complementary Information, providing context on governance and regulatory review of an AI system rather than an incident or hazard.
FTC Declines to Approve Face-Scanning Age-Verification Tool for Games (for Now)

2024-04-04
PCMag Australia
Why's our monitor labelling this an incident or hazard?
The facial age estimation tool involves AI systems (facial recognition and age estimation). The FTC's denial is based on awaiting further data to assess the technology's accuracy and privacy implications. No harm such as privacy violations, discrimination, or misuse has been reported as having occurred. The event is about regulatory review and potential future risks rather than an actual incident or realized harm. Therefore, it fits the category of Complementary Information, as it provides context on governance and regulatory responses to AI technology without describing an AI Incident or AI Hazard.
FTC Declines to Approve Face-Scanning Age-Verification Tool for Games (for Now)

2024-04-04
PC Magazine
Why's our monitor labelling this an incident or hazard?
The article explicitly discusses an AI system (facial age estimation using AI-based facial recognition) proposed for age verification. The FTC's decision to delay approval pending further evaluation indicates a governance response to potential privacy, accuracy, and misuse concerns. No direct or indirect harm has occurred, nor is there a plausible imminent harm described. The event is about regulatory review and public consultation, which fits the definition of Complementary Information as it provides context and updates on AI system governance and societal responses without reporting an AI Incident or AI Hazard.
US government denies ESRB's AI-powered face-scanning 'age estimation' proposal, but it's probably not gone for good

2024-04-02
pcgamer
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (facial age estimation using AI) proposed for use in verifying age for compliance with COPPA. The FTC's denial is based on lack of sufficient evidence of the system's accuracy and reliability, not on any realized harm. The system's use could plausibly lead to harms such as privacy violations, bias, or wrongful denial of access, but these harms have not materialized. Therefore, this event is best classified as an AI Hazard, reflecting a credible potential for harm if the system were deployed without adequate validation and safeguards. It is not an AI Incident because no harm has occurred, nor is it Complementary Information or Unrelated.
FTC denies ESRB petition suggesting facial age estimation technology as another method of COPPA compliance

2024-04-03
TechSpot
Why's our monitor labelling this an incident or hazard?
The facial age estimation technology qualifies as an AI system because it uses AI to analyze facial features and estimate age. The petition's denial means the technology is not currently in use for COPPA compliance, so no direct or indirect harm has occurred. The article mainly discusses the regulatory decision and public misunderstanding, which are governance and societal responses to a potential AI application. Therefore, this event is best classified as Complementary Information, as it provides context and updates on AI governance and public reaction without describing an AI Incident or AI Hazard.
US Government Rejects ESRB's Age Estimation Tool

2024-04-03
Game Rant
Why's our monitor labelling this an incident or hazard?
The facial-age estimation tool is an AI system used for age verification. The event involves the use and development of this AI system. However, no direct or indirect harm has occurred yet; the tool was rejected before deployment due to concerns about accuracy, privacy, and consent verification. These concerns imply plausible future harm if the system were deployed without addressing them, such as privacy violations or inaccurate age estimation leading to unauthorized access. Therefore, this event constitutes an AI Hazard, as the AI system's use could plausibly lead to harm, but no incident has occurred yet.
FTC denies facial recognition as an age verification method for games purchases

2024-04-03
Al Bawaba
Why's our monitor labelling this an incident or hazard?
The article involves an AI system (facial recognition for age estimation) proposed for use in verifying age for game purchases. The FTC's denial means the system was not deployed, so no direct harm has occurred. However, the proposal and public concerns indicate plausible risks related to privacy, potential misuse, and effectiveness of the AI system. Since the event concerns a credible potential for harm if the system were used, it fits the definition of an AI Hazard rather than an AI Incident or Complementary Information. It is not unrelated because the AI system and its potential harms are central to the event.
FTC rejects software companies' bid to use facial recognition to verify user age

2024-04-02
Nextgov
Why's our monitor labelling this an incident or hazard?
The software in question is an AI system using biometric facial recognition to estimate age, which is explicitly mentioned. The FTC's rejection is due to concerns about privacy violations and potential misuse, including deepfake content generation, which are plausible harms that could arise from the system's deployment. Since the system was not approved and no harm has yet occurred, but the potential for harm is credible and central to the decision, this event fits the definition of an AI Hazard rather than an AI Incident. The event is not merely complementary information because it focuses on the decision about the AI system's approval and the associated risks, not on a broader ecosystem update or response.
FTC Rejects ESRB's Proposal Regarding Age Estimation Technology

2024-04-05
RTTNews
Why's our monitor labelling this an incident or hazard?
The facial age estimation technology is an AI system used for verifying user age. The FTC's rejection is based on privacy and legal compliance concerns, reflecting a governance response to potential harms related to children's privacy. Since no direct or indirect harm has occurred yet, and the focus is on regulatory decision-making and future considerations, this event is best classified as Complementary Information.
FTC Denies Parental Consent Request Pending NIST Report

2024-04-02
ExBulletin
Why's our monitor labelling this an incident or hazard?
The facial age estimation mechanism is an AI system used for age verification. The FTC's denial and reference to the NIST report indicate concerns about the technology's reliability and implications for privacy and legal compliance. Since no harm has yet occurred but the technology's deployment could plausibly lead to privacy or rights-related harms, this constitutes an AI Hazard rather than an Incident or Complementary Information.
FTC rejects bid from software company to use facial recognition to verify user age

2024-04-03
ExBulletin
Why's our monitor labelling this an incident or hazard?
The facial age estimation system is an AI system using biometric facial analysis to infer age. The FTC's rejection is based on concerns about privacy violations and potential misuse, which could lead to harms such as privacy breaches and generation of deepfake content. Since no harm has yet occurred and the decision is preventive, this constitutes an AI Hazard rather than an AI Incident. The event is not merely complementary information because it reports a regulatory decision directly addressing potential AI-related harm. Hence, the classification is AI Hazard.