Instagram’s Default ‘Maps of Friends’ Feature Raises Privacy Fears

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

Meta has rolled out a default-enabled “Maps of Friends” feature on Instagram that automatically shares users’ real-time locations with their contacts without prior notification. Users in Latin America, and soon in Spain, have reported privacy breaches, a lack of consent, and potential stalking risks. The unwanted location sharing can be disabled in the app’s settings.[AI generated]

Why's our monitor labelling this an incident or hazard?

The event involves an AI system that automatically updates and shares users' real-time location data with contacts, which can be inferred as AI-enabled processing of location and social data. While no direct harm is reported, the article emphasizes significant privacy concerns and potential misuse that could lead to violations of privacy rights and harm to users. Since the harm is plausible but not yet realized, and the AI system's role is central to the feature's operation and potential risks, this event fits the definition of an AI Hazard rather than an Incident or Complementary Information.[AI generated]
AI principles
Privacy & data governance
Transparency & explainability
Accountability
Safety
Respect of human rights
Robustness & digital security
Democracy & human autonomy

Industries
Media, social platforms, and marketing
Digital security

Affected stakeholders
Consumers

Harm types
Human or fundamental rights
Psychological

Severity
AI hazard


Articles about this incident or hazard

How to disable real-time location sharing on Instagram

2025-04-14
El Periódico
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Instagram's real-time location sharing feature) that automatically shares sensitive location data without explicit user consent or prior notification, leading to privacy violations. Users have reported this as a breach of their privacy rights, which constitutes harm under the framework's category of violations of human rights or legal obligations. The harm is realized and ongoing, not just potential, so this is an AI Incident rather than a hazard or complementary information.
Instagram starts sharing your location with contacts by default on a map: here's how to disable it

2025-04-14
Applesfera
Why's our monitor labelling this an incident or hazard?
The event involves an AI system that automatically updates and shares users' real-time location data with contacts, which can be inferred as AI-enabled processing of location and social data. While no direct harm is reported, the article emphasizes significant privacy concerns and potential misuse that could lead to violations of privacy rights and harm to users. Since the harm is plausible but not yet realized, and the AI system's role is central to the feature's operation and potential risks, this event fits the definition of an AI Hazard rather than an Incident or Complementary Information.
Instagram now shares your location: disable this feature so it doesn't know where you are

2025-04-14
Computer Hoy
Why's our monitor labelling this an incident or hazard?
The event involves an AI system component (real-time location sharing, likely powered by AI algorithms) whose use, via automatic activation without explicit user consent, plausibly leads to harm such as privacy violations and physical safety risks (e.g., stalking). Since the article does not report actual harm occurring but highlights credible risks and user concerns about the feature's default activation, this qualifies as an AI Hazard: the feature's deployment and automatic enabling create a plausible scenario for future harm, meeting the criteria for an AI Hazard rather than an Incident or Complementary Information.