Google Settles Illinois Biometric Privacy Lawsuit Over AI Face Recognition


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Google agreed to a $100 million settlement after its Google Photos AI facial recognition tool processed biometric data of Illinois residents without consent, violating the Illinois Biometric Information Privacy Act. Over 687,000 affected users will receive about $95 each as compensation for the privacy breach.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves an AI system (Google's face grouping tool) that processes biometric data (faces) without proper consent, violating privacy laws. This led to a class-action lawsuit and a settlement. The harm here is a violation of biometric privacy rights, which falls under violations of human rights or breach of legal obligations protecting fundamental rights. Since the harm has already occurred and the settlement is a response to that harm, this qualifies as an AI Incident.[AI generated]
AI principles
Privacy & data governance · Respect of human rights · Transparency & explainability · Accountability

Industries
Consumer services · Digital security

Affected stakeholders
Consumers

Harm types
Human or fundamental rights

Severity
AI incident

Business function
Other

AI system task
Recognition/object detection


Articles about this incident or hazard


This is the amount Illinois residents receive - ExBulletin

2023-06-06
ExBulletin
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Google's face grouping tool) that processes biometric data (faces) without proper consent, violating privacy laws. This led to a class-action lawsuit and a settlement. The harm here is a violation of biometric privacy rights, which falls under violations of human rights or breach of legal obligations protecting fundamental rights. Since the harm has already occurred and the settlement is a response to that harm, this qualifies as an AI Incident.

Illinois Google users to receive about $95 each as part of settlement

2023-06-06
Daily Herald
Why's our monitor labelling this an incident or hazard?
The event involves the use of AI systems (Google Photos' facial recognition technology) that processed biometric data without proper consent, leading to a violation of privacy rights. This constitutes a breach of obligations under applicable law protecting fundamental rights. Since the harm (privacy violation) has already occurred and a settlement has been reached, this qualifies as an AI Incident.

Google users in Illinois receive about $95 in privacy lawsuit settlement - ExBulletin

2023-06-06
ExBulletin
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system—Google's facial recognition technology—that analyzed photos to identify faces, which qualifies as an AI system under the definitions. The lawsuit and settlement arise from the use of this AI system leading to a violation of biometric privacy rights, a breach of applicable law protecting fundamental rights. Since the AI system's use directly led to a legal finding of violation and harm to users' privacy rights, this constitutes an AI Incident under the framework.

Google lawsuit payout: Find out how much Illinois residents could qualify for

2023-06-08
Washington Examiner
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Google's face grouping tool) that processes biometric data to group photos by similarity. The lawsuit settlement addresses violations of biometric privacy rights, which falls under harm category (c) - violations of human rights or breach of obligations under applicable law protecting fundamental rights. Since the harm has already occurred and the settlement is a response to that harm, this qualifies as an AI Incident. The article focuses on the payout distribution as a consequence of the incident, not just a complementary update or unrelated news.

Illinois Google users to receive around $95 as part of privacy lawsuit settlement

2023-06-05
Chicago Sun-Times
Why's our monitor labelling this an incident or hazard?
The event involves the use of an AI system (facial recognition technology) whose deployment led to a violation of biometric privacy rights, a breach of applicable law protecting fundamental rights. The harm (violation of privacy rights) has already occurred, and the settlement is a response to that harm. Therefore, this qualifies as an AI Incident because the AI system's use directly led to a breach of legal obligations and human rights protections.

Google users in Illinois get paid $95 in privacy settlement - ExBulletin

2023-06-05
ExBulletin
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Google Photos' face grouping tool) that processes biometric data without consent, leading to a violation of privacy rights protected under law. This constitutes a violation of human rights and legal obligations (privacy rights), which is a form of harm as defined under AI Incident criteria. Since the harm has occurred and a settlement has been reached, this qualifies as an AI Incident rather than a hazard or complementary information.

Google users in Illinois will receive about $95 per person in settlement money - ExBulletin

2023-06-07
ExBulletin
Why's our monitor labelling this an incident or hazard?
The event involves a class-action lawsuit settlement over Google's use of biometric data, which relates to privacy and data protection issues. However, the article does not describe an AI system's development, use, or malfunction causing direct or indirect harm, nor does it indicate plausible future harm from AI. The biometric data usage may involve AI technologies, but the article focuses on the legal settlement and compensation rather than on an AI incident or hazard itself. It is therefore best classified as Complementary Information, providing context on societal and legal responses to AI-related privacy concerns.