Period Tracking Apps' AI Systems Cause Privacy and Safety Risks for Women

The information displayed in the AIM should not be reported as representing the official views of the OECD or of its member countries.

A University of Cambridge report reveals that period tracking apps, which use AI to process sensitive health data, are selling users' personal information at scale. This exposes women to privacy violations, discrimination in employment and healthcare, targeted advertising, and potential reproductive control, posing significant safety and human rights risks.[AI generated]

Why's our monitor labelling this an incident or hazard?

Period tracking apps use AI systems to collect and analyze sensitive reproductive and health data. The report documents real harms resulting from the use and misuse of this data, including discrimination, privacy violations, and reproductive control, which are direct consequences of the AI systems' data processing and sharing. The harms are materialized and significant, meeting the criteria for an AI Incident under violations of human rights and harm to communities.[AI generated]
AI principles
Privacy & data governance; Respect of human rights; Fairness; Transparency & explainability; Accountability; Robustness & digital security; Safety; Democracy & human autonomy; Human wellbeing

Industries
Healthcare, drugs, and biotechnology; Consumer services; Digital security; Media, social platforms, and marketing

Affected stakeholders
Women

Harm types
Human or fundamental rights; Psychological; Economic/Property; Reputational; Public interest

Severity
AI incident

Business function:
Marketing and advertisement; Citizen/customer service

AI system task:
Forecasting/prediction; Organisation/recommenders


Articles about this incident or hazard

Do you also use period tracking apps? You might want to reconsider

2025-06-11
WION
Why's our monitor labelling this an incident or hazard?
Period tracking apps use AI systems to collect and analyze sensitive reproductive and health data. The report documents real harms resulting from the use and misuse of this data, including discrimination, privacy violations, and reproductive control, which are direct consequences of the AI systems' data processing and sharing. The harms are materialized and significant, meeting the criteria for an AI Incident under violations of human rights and harm to communities.
Personal Data From Period Tracking Apps Being "Sold at Scale"

2025-06-11
Digit
Why's our monitor labelling this an incident or hazard?
Period tracking apps qualify as AI systems because they process and infer from user data to provide personalized tracking and predictions. The report documents that the development and use of these AI systems have directly led to harms such as privacy violations, potential discrimination in insurance and employment, and safety risks from data misuse. These harms fall under violations of human rights and harm to communities. The commercial sale and sharing of sensitive data with third parties exacerbate these harms. Hence, the event meets the criteria for an AI Incident.
Your Period Tracking App Data Is Being Sold and Used Against You

2025-06-12
VICE
Why's our monitor labelling this an incident or hazard?
The event involves AI systems in the form of period tracking apps that collect and process sensitive health data. The misuse of this data has directly led to violations of human rights and privacy, such as government surveillance targeting abortion providers and potential discrimination by employers and insurers. These harms fall under violations of human rights and breach of obligations to protect fundamental rights. Therefore, this qualifies as an AI Incident due to the realized harm caused by the AI system's use and misuse.
Menstrual apps: a gold mine for advertisers and a safety risk for women

2025-06-11
LaSexta
Why's our monitor labelling this an incident or hazard?
The menstrual tracking apps use AI systems to collect and analyze intimate data, which has directly led to harms such as privacy violations and potential discrimination, including legal actions against women. The article references actual cases where such data has been used in judicial investigations and to restrict access to abortion, indicating realized harm. This fits the definition of an AI Incident because the AI system's use has directly or indirectly led to violations of human rights and breaches of privacy, which are harms under the framework. The article also discusses the broader societal implications and calls for regulatory responses, but the core event is the realized harm from AI system use in these apps.
Menstrual apps: valuable data, a risk for women - EcoAvant.com

2025-06-11
EcoAvant.com
Why's our monitor labelling this an incident or hazard?
The event involves AI systems insofar as menstrual tracking apps use AI or algorithmic data processing to monitor and analyze user health data. The harms described include violations of privacy and potential breaches of fundamental rights (human rights and reproductive rights), which have already occurred or are ongoing. The event therefore describes realized harms directly linked to the use of AI systems in these apps, qualifying it as an AI Incident rather than a hazard or complementary information. The article does not merely warn of potential harm but documents actual risks and cases where data have been used in harmful ways.
Menstrual cycle apps are a gold mine that poses serious health and safety risks

2025-06-13
Gizmodo en Español
Why's our monitor labelling this an incident or hazard?
The menstrual tracking apps described involve AI systems that process sensitive health and behavioral data to generate user profiles and targeted advertising. The article documents actual harms resulting from the use and misuse of these AI systems, including privacy violations, discrimination, surveillance, and restrictions on abortion access, which constitute violations of human rights and harm to communities. The involvement of AI in data collection, profiling, and sharing is central to these harms. Hence, this qualifies as an AI Incident under the OECD framework because the AI system's use has directly and indirectly led to significant harm.