Google Chrome's AI-Powered 'Privacy Sandbox' Raises Privacy Concerns


The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Google has integrated the AI-driven 'Privacy Sandbox' ad platform into Chrome; it tracks users' browsing to generate advertising profiles that are shared with advertisers. Despite widespread opposition and privacy concerns, the system is now widely deployed, raising significant concerns about user privacy and potential violations of fundamental rights.[AI generated]

Why's our monitor labelling this an incident or hazard?

The Privacy Sandbox is an AI-driven system integrated into Chrome that tracks user behavior to generate advertising topics shared with advertisers. This tracking and profiling without explicit user consent constitutes a violation of privacy rights, a fundamental human right. The article indicates that this system is now widely deployed and actively tracking users, thus causing direct harm. Therefore, this qualifies as an AI Incident due to violations of human rights (privacy) caused by the AI system's use.[AI generated]
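The interest-inference mechanism described above can be sketched in miniature. This is an illustrative model only, not Google's actual implementation: real Chrome classifies visited sites against a fixed advertising taxonomy and exposes a few top topics per epoch to callers of `document.browsingTopics()`, whereas the site-to-topic map and top-N selection below are invented for illustration.

```javascript
// Illustrative sketch of Topics-style interest inference -- NOT Google's
// actual algorithm. The site-to-topic map here is hypothetical; real
// Chrome uses a fixed advertising taxonomy maintained by Google.
const SITE_TOPICS = {
  'cars.example': 'Autos & Vehicles',
  'sneakers.example': 'Shoes & Footwear',
  'recipes.example': 'Food & Drink',
};

// Count topic hits over one "epoch" of browsing history and return the
// top N topics -- the inferred-interest profile that the articles
// describe being shared with advertisers.
function topTopics(visitedHosts, n = 3) {
  const counts = new Map();
  for (const host of visitedHosts) {
    const topic = SITE_TOPICS[host];
    if (topic) counts.set(topic, (counts.get(topic) || 0) + 1);
  }
  return [...counts.entries()]
    .sort((a, b) => b[1] - a[1]) // most-visited topics first
    .slice(0, n)
    .map(([topic]) => topic);
}
```

A week of visits dominated by car sites would surface 'Autos & Vehicles' first, which is exactly the kind of inferred signal at issue: the profile is computed from browsing behavior without the user explicitly supplying it.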
AI principles
Privacy & data governance; Respect of human rights; Transparency & explainability; Accountability; Democracy & human autonomy

Industries
Media, social platforms, and marketing; IT infrastructure and hosting

Affected stakeholders
Consumers

Harm types
Human or fundamental rights

Severity
AI incident

Business function
Marketing and advertisement

AI system task
Organisation/recommenders; Forecasting/prediction


Articles about this incident or hazard


Google's widely opposed ad platform, the "Privacy Sandbox," launches in Chrome

2023-09-07
Ars Technica
Why's our monitor labelling this an incident or hazard?
The Privacy Sandbox is an AI-driven system integrated into Chrome that tracks user behavior to generate advertising topics shared with advertisers. This tracking and profiling without explicit user consent constitutes a violation of privacy rights, a fundamental human right. The article indicates that this system is now widely deployed and actively tracking users, thus causing direct harm. Therefore, this qualifies as an AI Incident due to violations of human rights (privacy) caused by the AI system's use.

Google Chrome users receiving Privacy Sandbox pop-up: How it works

2023-09-08
NewsBytes
Why's our monitor labelling this an incident or hazard?
The article discusses a new AI-related feature (Privacy Sandbox) that involves tracking user interests via browsing data, which implies AI or algorithmic processing for ad targeting. However, the article focuses on user concerns and advocacy group warnings about privacy and data collection, without reporting any actual harm or incident caused by the system. Therefore, this is best classified as Complementary Information, as it provides context and societal response to an AI system's deployment rather than describing an AI Incident or Hazard.
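On the "how it works" side: in shipped Chrome (115+), an advertiser's frame obtains the inferred topics by calling `document.browsingTopics()`, which is the real Privacy Sandbox entry point. The defensive wrapper below is a hedged sketch; the `doc` parameter and the empty-array fallbacks are our assumptions for illustration, not documented Chrome behavior.

```javascript
// Hedged sketch: defensively querying the Topics API from an ad frame.
// document.browsingTopics() is the real Chrome 115+ API; the doc
// parameter exists only so the helper can be exercised outside a
// browser and is not part of the API itself.
async function fetchBrowsingTopics(doc) {
  if (!doc || typeof doc.browsingTopics !== 'function') {
    return []; // unsupported browser, or user disabled ad topics in settings
  }
  try {
    // Resolves to topic objects, e.g. { topic: 265, version: 'chrome.1:1:2', ... }
    return await doc.browsingTopics();
  } catch {
    return []; // e.g. the API is blocked in this frame by permissions policy
  }
}
```

The pop-up the article describes is the consent/notification surface for this API: declining it (or toggling off "Ad topics" in Chrome settings) is what makes the call above return nothing.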

Google Chrome's 'Privacy Sandbox' Is a Joke, and Users Should Switch Browsers

2023-09-08
WebProNews
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Chrome's Privacy Sandbox uses AI or algorithmic profiling to infer user interests for targeted ads). The use of this AI system leads to violations of user privacy, which is a breach of fundamental rights. The harm is ongoing as the system is in use and affects users' privacy rights. Therefore, this qualifies as an AI Incident due to violation of rights caused by the AI system's use.

Google gets its way, bakes a user-tracking ad platform directly into Chrome

2023-09-08
OSNews: Exploring the Future of Computing
Why's our monitor labelling this an incident or hazard?
The described ad platform uses AI-like profiling to track users and share data for advertising, which fits the definition of an AI system. The rollout is complete, but the article focuses on concerns and opposition rather than reporting actual harm or incidents. Since no direct or indirect harm has been reported yet, but plausible future harm to privacy and rights exists, this qualifies as an AI Hazard rather than an AI Incident. It is not merely complementary information because the main focus is on the potential privacy risks and the system's deployment, not on responses or updates to past incidents.