
The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.
The US Federal Trade Commission unanimously rejected a proposal by the ESRB, Yoti, and SuperAwesome to use AI-powered facial age estimation for age verification under COPPA, citing concerns about privacy, accuracy, and parental consent. The decision follows public feedback, and the technology awaits further evaluation by NIST.[AI generated]
Why is our monitor labelling this an incident or hazard?
The article explicitly discusses an AI system (facial age estimation) proposed for use in age verification, covering both the system's development and its intended use. However, the FTC denied the proposal because the technology's effectiveness is unproven; no actual harm or violation has occurred. The denial is without prejudice, so future deployment remains possible. Because no harm has yet occurred, but the AI system could plausibly lead to harms such as privacy violations, bias, or incorrect age estimation if deployed, the event fits the definition of an AI Hazard rather than an AI Incident or Complementary Information. It is not unrelated, because the AI system and its regulatory context are central to the event.[AI generated]