US School AI Surveillance Leads to Privacy Breach

The information displayed in the AI Incidents Monitor (AIM) should not be reported as representing the official views of the OECD or of its member countries.

Vancouver Public Schools and other US districts have deployed AI-powered monitoring tools to track student online activity in the name of safety. However, the system's implementation led to an inadvertent breach that exposed nearly 3,500 sensitive student records, sparking significant privacy and security concerns.[AI generated]

Why's our monitor labelling this an incident or hazard?

The AI system is explicitly involved as it monitors and analyzes student online activity using machine learning algorithms to detect risks. The inadvertent release of unredacted sensitive student documents due to poor security measures directly led to privacy violations and potential harm to students, including outing LGBTQ+ students and eroding trust. These harms fall under violations of rights and harm to communities. The AI system's use and malfunction (security lapses) are central to these harms. Hence, the event meets the criteria for an AI Incident rather than a hazard or complementary information.[AI generated]
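Each rationale in this entry applies the same three-way decision rule: an event with an explicitly involved AI system and realized harm is an AI incident; plausible but unrealized harm makes it an AI hazard; anything else is complementary information. A minimal illustrative sketch of that rule follows (the Event fields and classify function are hypothetical illustrations, not the monitor's actual implementation):

    from dataclasses import dataclass

    @dataclass
    class Event:
        ai_system_involved: bool  # an AI system is explicitly part of the event
        harm_realized: bool       # harm has materialized, not merely potential
        harm_plausible: bool      # harm could plausibly occur

    def classify(event: Event) -> str:
        # Hypothetical sketch of the three-way rule applied in the rationales above.
        if event.ai_system_involved and event.harm_realized:
            return "AI incident"
        if event.ai_system_involved and event.harm_plausible:
            return "AI hazard"
        return "complementary information"

    # The breach described here: AI system involved, harm (data exposure) realized.
    print(classify(Event(True, True, True)))  # -> AI incident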
AI principles
Privacy & data governance, Robustness & digital security, Accountability, Transparency & explainability, Respect of human rights

Industries
Education and training; Government, security, and defence; Digital security

Affected stakeholders
Children

Harm types
Human or fundamental rights, Reputational, Psychological

Severity
AI incident

Business function:
Monitoring and quality control

AI system task:
Event/anomaly detection


Articles about this incident or hazard

Schools use AI to monitor kids, hoping to prevent violence. Our investigation found security risks

2025-03-12
AP News
Why's our monitor labelling this an incident or hazard?
The AI system is explicitly involved as it monitors and analyzes student online activity using machine learning algorithms to detect risks. The inadvertent release of unredacted sensitive student documents due to poor security measures directly led to privacy violations and potential harm to students, including outing LGBTQ+ students and eroding trust. These harms fall under violations of rights and harm to communities. The AI system's use and malfunction (security lapses) are central to these harms. Hence, the event meets the criteria for an AI Incident rather than a hazard or complementary information.

Schools use AI to monitor kids, hoping to prevent violence. Our investigation found security risks

2025-03-13
Market Beat
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (Gaggle) used for monitoring students' online activity. The AI's use has directly led to harms including privacy breaches (exposure of sensitive student data), violations of students' rights (outing LGBTQ+ students without consent), and harm to community trust. These harms are materialized and documented, not merely potential. The AI system's malfunction or design (e.g., inadequate data protection, over-surveillance) contributed to these harms. Hence, this event meets the criteria for an AI Incident rather than a hazard or complementary information.

AI Surveillance Is Being Installed In Schools To Keep Kids Safe. But That's Not All

2025-03-12
Mic
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions Gaggle's surveillance software, which qualifies as an AI system because it analyzes students' online activities to generate alerts. The system's use has directly led to a major privacy breach in which sensitive student documents were inadvertently accessed by reporters, constituting harm to individuals' rights and privacy. This is a clear violation of human rights and privacy protections, fitting the criteria for an AI Incident. The harm is realized, not just potential, and the AI system's role is pivotal as it is the source of the data collection and monitoring that led to the breach. Although the system aims to prevent harm, the actual outcome includes significant privacy violations, making this an AI Incident rather than a hazard or complementary information.

Schools use AI to monitor kids, hoping to prevent violence. Our investigation found security risks

2025-03-12
Sun Sentinel
Why's our monitor labelling this an incident or hazard?
The article explicitly describes an AI system (Gaggle) that monitors students' online activity using machine learning algorithms. The system's use has directly led to harms such as privacy breaches (release of unprotected sensitive data), outing of vulnerable students, and psychological impacts on students who feel surveilled. These are clear violations of rights and harm to communities and individuals. The security risks and privacy violations are not hypothetical but have occurred, as evidenced by the inadvertent release of thousands of sensitive documents. Although the system aims to prevent violence and self-harm, the realized harms and risks are significant and documented. Hence, this event meets the criteria for an AI Incident rather than a hazard or complementary information.

Takeaways from our investigation on AI-powered school surveillance

2025-03-12
Daily Mail Online
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-powered surveillance tools (e.g., Gaggle) that use machine learning algorithms to detect potential risks among students. The misuse or malfunction here is the failure to secure sensitive data, leading to unauthorized access to private student information. This has caused direct harm by violating students' privacy and potentially exposing vulnerable groups, such as LGBTQ+ students, to further risk. The AI system's role in monitoring and flagging content is pivotal to the incident, and the resulting harms align with violations of rights and harm to communities. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information.

Share your experience with student surveillance

2025-03-12
The Seattle Times
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions AI-powered surveillance systems used to monitor students, which qualify as AI systems. The use of these systems can lead to harms such as violations of privacy and potentially human rights if students are unfairly flagged and referred to law enforcement. However, the article itself is a call for information and does not report a specific incident of harm or a concrete event where harm has occurred or is imminent. Therefore, it does not describe an AI Incident or AI Hazard but provides complementary information about ongoing societal and governance concerns related to AI surveillance in education.

Schools use AI to monitor kids, hoping to prevent violence. Our investigation found security risks

2025-03-12
Seymour Tribune
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (Gaggle) used to monitor students' online activity. The AI's use has directly led to harms including privacy violations, exposure of sensitive personal information, and breaches of trust, which fall under violations of human rights and harm to communities. The inadvertent release of unprotected sensitive data is a malfunction or failure in the AI system's deployment and data management. The outing of LGBTQ+ students and erosion of trust are indirect harms caused by the AI system's surveillance outputs and their handling by school staff. Therefore, this event meets the criteria for an AI Incident due to realized harms directly and indirectly caused by the AI system's use and malfunction.

AI surveillance in US schools: Thousands of sensitive student documents exposed in surveillance breach, fueling privacy fears - The Times of India

2025-03-12
The Times of India
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as using machine-learning algorithms to monitor student communications and flag potential risks. The breach of sensitive data directly results from the use and malfunction (security failure) of this AI system, leading to a violation of privacy rights and potential harm to students' well-being and trust. This fits the definition of an AI Incident because the AI system's use and malfunction have directly led to harm (violation of rights and harm to vulnerable communities).

Student privacy vs. safety: The AI surveillance dilemma in WA schools

2025-03-12
The Seattle Times
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as a machine learning algorithm scanning student online activity to detect risks. The system's use has directly led to realized harms: the inadvertent release of sensitive, unredacted student documents (privacy and security harm), outing of LGBTQ+ students (violation of rights and harm to vulnerable communities), and erosion of trust between students and school staff. These harms fall under violations of human rights and harm to communities. The AI system's role is pivotal as it is the mechanism by which surveillance and data collection occur, leading to these harms. Although the system aims to prevent harm, the documented negative consequences and data breach constitute an AI Incident rather than a mere hazard or complementary information.

Takeaways from our investigation on AI-powered school surveillance - World News

2025-03-12
Castanet
Why's our monitor labelling this an incident or hazard?
The article explicitly describes AI systems (machine-learning algorithms in surveillance software like Gaggle) used to monitor students and detect risks. The misuse or malfunction (inadequate security leading to exposure of sensitive data) and the use of these systems have directly led to harms including privacy breaches, potential outing of LGBTQ+ students, and emotional harm, which are violations of rights and harm to communities. The involvement of AI in these harms is clear and direct, meeting the criteria for an AI Incident rather than a hazard or complementary information.

Schools Use AI to Monitor Kids, Hoping to Prevent Violence. Our Investigation Found Security Risks

2025-03-12
US News & World Report
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Gaggle's machine-learning algorithm) used to monitor students' online activity and detect risks. The AI's use has directly led to harms including privacy violations (release of sensitive student data), potential psychological harm (surveillance causing fear and loss of trust), and violations of rights (outing LGBTQ+ students without consent). These constitute violations of human rights and harm to communities. The inadvertent data leak is a clear security harm. Therefore, this qualifies as an AI Incident because the AI system's use has directly led to realized harms as defined in the framework.

Takeaways From Our Investigation on AI-Powered School Surveillance

2025-03-12
US News & World Report
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (machine-learning algorithms used for surveillance and alerting school officials) whose use has directly led to harm in terms of privacy violations and potential breaches of students' rights, including exposure of sensitive personal information and risks of outing LGBTQ+ students. The inadvertent exposure of unprotected sensitive data constitutes a realized harm to individuals' privacy and rights, fulfilling the criteria for an AI Incident. The article also notes systemic issues with the deployment and oversight of these AI surveillance tools, reinforcing the classification as an AI Incident rather than a mere hazard or complementary information.

Takeaways from our investigation on AI-powered school surveillance

2025-03-12
Newsday
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (machine-learning algorithms used for monitoring student digital activity) whose use has directly led to harm in the form of privacy breaches (exposure of sensitive student data) and violations of rights (outing of LGBTQ+ students without consent). The inadvertent release of unprotected sensitive documents is a direct consequence of the AI surveillance system's deployment and management. Additionally, the surveillance raises broader concerns about harm to students' privacy and mental health, fulfilling criteria for an AI Incident under violations of human rights and harm to communities. Therefore, this qualifies as an AI Incident rather than a hazard or complementary information.

Takeaways from our investigation on AI-powered school surveillance | FOX 28 Spokane

2025-03-12
FOX 28 Spokane
Why's our monitor labelling this an incident or hazard?
The article explicitly involves AI systems (machine learning surveillance tools) used in schools to monitor students. The investigation uncovered direct harms: sensitive personal data was exposed due to security lapses, violating privacy rights and potentially causing psychological harm, especially to vulnerable students. The AI system's use and the security failure directly led to these harms. The article also discusses broader concerns about privacy and the impact on student well-being, confirming realized harm rather than just potential risk. Hence, this event meets the criteria for an AI Incident.

Takeaways from our investigation on AI-powered school surveillance

2025-03-12
Winston-Salem Journal
Why's our monitor labelling this an incident or hazard?
The AI system (surveillance software like Gaggle) is explicitly mentioned and used for monitoring student digital activity. The incident of sensitive student documents being exposed due to insecure storage and access is a direct harm to privacy and potentially a violation of rights, fulfilling the criteria for an AI Incident. Additionally, the system's use has led to indirect harms such as outing LGBTQ+ students and eroding trust, which are violations of rights and harm to communities. The article reports actual realized harms, not just potential risks, so it is not merely a hazard or complementary information.

Takeaways from our investigation on AI-powered school surveillance

2025-03-12
Market Beat
Why's our monitor labelling this an incident or hazard?
The AI system involved is explicitly mentioned (Gaggle and similar AI surveillance tools) and is used to monitor student digital activity. The incident in which reporters accessed thousands of unprotected sensitive student documents due to the system's inadequate security directly harmed students' privacy and potentially violated their rights. This fits the definition of an AI Incident because the AI system's use and malfunction (insecure data handling) directly led to harm (privacy breach and potential rights violations). The article also notes updates to the system to mitigate this issue, but the primary event is the realized harm from the exposure. Therefore, the event is classified as an AI Incident.

Schools use AI to monitor kids, hoping to prevent violence. Our...

2025-03-12
Daily Mail Online
Why's our monitor labelling this an incident or hazard?
The article explicitly describes an AI system (Gaggle and similar software) used to monitor students' online activity. The system's use has directly led to harms including privacy violations (release of sensitive data), outing of vulnerable students, and erosion of trust between students and school staff, which are violations of rights and harm to communities. The AI system's role is pivotal as it is the mechanism through which surveillance and data collection occur, leading to these harms. Although the system aims to prevent physical harm (suicide, violence), the documented privacy breaches and psychological impacts are realized harms. Hence, this is an AI Incident rather than a hazard or complementary information.

Takeaways from our investigation on AI-powered school surveillance

2025-03-12
SFGATE
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (machine-learning algorithms monitoring student digital activity) whose use has directly led to realized harms: privacy violations through exposure of sensitive data, potential psychological harm to students (e.g., outing LGBTQ+ students), and erosion of trust in educational environments. The security lapse in storing unprotected screenshots is a malfunction contributing to harm. These harms fall under violations of human rights and harm to communities, meeting the criteria for an AI Incident rather than a hazard or complementary information.

Takeaways from our investigation on AI-powered school surveillance

2025-03-12
Yahoo Finance
Why's our monitor labelling this an incident or hazard?
The event explicitly involves AI systems (machine-learning algorithms used for surveillance and alerting) whose use led to the exposure of sensitive student data, constituting a violation of privacy rights (a breach of obligations under applicable law protecting fundamental rights). The harm is realized through the direct exposure of intimate student information due to the system's insecure data handling. This meets the criteria for an AI Incident because the AI system's use directly led to harm (privacy violation and potential psychological harm to students). The article also discusses the system's updates as a response, but the primary focus is on the incident of data exposure and its implications.

Takeaways from our investigation on AI-powered school surveillance - WTOP News

2025-03-12
WTOP News
Why's our monitor labelling this an incident or hazard?
The event involves AI systems (machine-learning algorithms used for surveillance and alerting) whose use has directly led to harm: privacy breaches exposing sensitive student data, potential violations of students' rights (especially LGBTQ+ students), and erosion of trust between students and adults. The inadvertent exposure of unprotected sensitive data is a direct consequence of the AI surveillance system's deployment and management. These harms fall under violations of human rights and harm to communities, qualifying this as an AI Incident rather than a hazard or complementary information.

Schools use AI to monitor kids, hoping to prevent violence. Our investigation found security risks

2025-03-12
DNyuz
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Gaggle) deployed and used to monitor students' online activity. The AI system's outputs have directly led to harms including violations of privacy and human rights (e.g., outing LGBTQ+ students, exposing sensitive personal information), which are breaches of fundamental rights and obligations. The inadvertent release of unprotected sensitive data constitutes harm to individuals' privacy and security. Additionally, the surveillance has caused erosion of trust and psychological harm to students. These harms are materialized and directly linked to the AI system's use and malfunction (security lapses). Therefore, this qualifies as an AI Incident under the framework.

Schools use AI to monitor kids, hoping to prevent violence. Our investigation found security risks - WTOP News

2025-03-12
WTOP News
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Gaggle's machine-learning algorithm) used to monitor students' online activity for signs of harm, which fits the definition of an AI system. The AI's use has directly led to harms: privacy breaches due to unsecured data release, outing of vulnerable students, and erosion of trust, which are violations of rights and harm to communities. At the same time, the AI system has helped identify students at risk of suicide or violence, potentially preventing physical harm. Therefore, the event meets the criteria for an AI Incident because the AI system's use has directly and indirectly caused harm to individuals' rights and well-being.

Schools use AI to monitor kids, hoping to prevent violence. Our investigation found security risks

2025-03-12
The Independent
Why's our monitor labelling this an incident or hazard?
The AI system (Gaggle) is explicitly described as scanning student online activity and generating alerts that lead to interventions, showing direct use of AI. The inadvertent release of unredacted sensitive student data due to poor security practices linked to the AI system's data handling constitutes harm to privacy and communities. The outing of LGBTQ+ students and erosion of trust are violations of rights and harm to communities. These harms have materialized, not just potential, making this an AI Incident. The event also discusses the AI system's malfunction in terms of security risks and false positives, further supporting the classification as an incident rather than a hazard or complementary information.

AP Technology Summary Brief at 7:01 a.m. EDT

2025-03-12
Owensboro Messenger-Inquirer
Why's our monitor labelling this an incident or hazard?
The AI system is explicitly mentioned as being used to monitor students and flag potential issues, indicating AI system involvement. The security lapse exposing thousands of unredacted student documents constitutes a violation of privacy rights, a breach of obligations intended to protect fundamental rights. This harm has already occurred due to the AI system's use and the associated data management failures. Therefore, this event qualifies as an AI Incident due to realized harm linked to the AI system's use and its consequences.

Schools use AI to monitor kids, hoping to prevent violence. Our investigation found security risks

2025-03-12
Financial Post
Why's our monitor labelling this an incident or hazard?
An AI system is explicitly involved as the schools use AI-powered surveillance technology to monitor students. The event involves the use of the AI system and its malfunction or mismanagement (lack of adequate security measures) leading to unauthorized access to sensitive personal data. This results in a violation of privacy rights and potentially breaches legal obligations protecting student information, which fits the definition of an AI Incident under violations of human rights or breach of applicable law. Therefore, this event is classified as an AI Incident.

Schools use AI to monitor kids, hoping to prevent violence. Our investigation found security risks

2025-03-12
Winnipeg Sun
Why's our monitor labelling this an incident or hazard?
The AI system is explicitly mentioned as being used for surveillance and monitoring of students, which qualifies as AI system involvement. The accidental exposure of sensitive student documents in response to a records request about the surveillance technology demonstrates a direct harm related to privacy and security breaches. This constitutes a violation of rights and harm to communities, fulfilling the criteria for an AI Incident. Therefore, the event is classified as an AI Incident rather than a hazard or complementary information.

Schools are surveilling kids to prevent gun violence or suicide. The lack of privacy comes at a cost

2025-03-12
The Hechinger Report
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Gaggle) actively used to monitor students, which has directly led to harms including privacy violations (exposure of sensitive student data), breaches of rights (outing LGBTQ+ students without consent), and psychological impacts (students feeling surveilled and restricted). The inadvertent release of unprotected sensitive documents is a malfunction or failure in the AI system's deployment and data management. The harms are realized and significant, meeting the criteria for an AI Incident rather than a hazard or complementary information. The article does not merely discuss potential risks or responses but documents actual harms caused by the AI system's use and malfunction.

Schools use AI to monitor kids, hoping to prevent violence. Report finds security risks

2025-03-13
Dallas News
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as using machine learning algorithms to monitor student online activity and generate alerts. The AI system's use has directly led to harms such as privacy breaches (release of unredacted sensitive data), violations of students' rights (outing LGBTQ+ students without consent), and harm to communities (eroding trust and psychological safety). These harms are realized and documented, not merely potential. Therefore, this qualifies as an AI Incident under the framework, as the AI system's use has directly and indirectly caused significant harms including violations of rights and harm to communities.

Schools use AI to monitor kids, hoping to prevent violence. Our investigation found security risks

2025-03-15
The Star
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Gaggle) explicitly described as using machine learning algorithms to monitor students' online activity and flag potential risks. The system's use has directly led to harms including privacy violations (exposure of sensitive student data), outing of vulnerable students (LGBTQ+ youth), erosion of trust, and psychological impacts on students. The inadvertent release of unprotected sensitive data is a clear security failure linked to the AI system's deployment. These harms fall under violations of rights and harm to communities as defined in the framework. The article documents realized harms, not just potential risks, so the classification is AI Incident rather than AI Hazard. The article also discusses responses and concerns, but the primary focus is on the harms caused by the AI system's use and malfunction (security lapses).

Takeaways from our investigation on AI-powered school surveillance

2025-03-12
Financial Post
Why's our monitor labelling this an incident or hazard?
The article involves AI systems used for surveillance in schools, which monitor students and generate alerts. The concerns raised include potential violations of privacy and harm to vulnerable groups, which align with violations of rights and harm to communities. However, the article does not describe a specific event where harm has already occurred due to the AI system's use, but rather discusses potential risks and the lack of evidence for benefits. Therefore, this situation fits the definition of an AI Hazard, as the AI system's use could plausibly lead to harm, especially to vulnerable students, but no concrete incident is reported.

Schools use AI to monitor kids, hoping to prevent violence. Our investigation found security risks

2025-03-12
Yahoo Finance
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (Gaggle's machine-learning algorithm) used to monitor students' online activity. The system's use has directly led to harms including privacy violations (inadvertent data release), emotional distress from false alarms, and outing of vulnerable students, which can be considered violations of rights and harm to communities. These harms are realized and ongoing, not merely potential. The AI system's development and use are central to these harms, fulfilling the criteria for an AI Incident. Although the system aims to prevent violence and self-harm, the documented negative consequences go beyond a mere potential hazard or complementary information. Hence, the classification is AI Incident.

Schools are surveilling kids to prevent gun violence or suicide. The lack of privacy comes at a cost.

2025-03-12
hoy.com.ni
Why's our monitor labelling this an incident or hazard?
The event involves an AI system explicitly described as using machine learning algorithms to monitor student online activity and generate alerts for potential harm. The system's use has directly led to harms including violations of privacy and human rights (students' confidential information was exposed, and identities revealed without consent), as well as psychological and community harms (loss of trust, emotional distress). The accidental release of sensitive data and the systemic surveillance of students constitute realized harms linked to the AI system's deployment and malfunction (security failures). Hence, it meets the criteria for an AI Incident rather than a hazard or complementary information.

US schools used AI to prevent violence against children: the problem was the lack of protection for the information

2025-03-12
xataka.com.mx
Why's our monitor labelling this an incident or hazard?
The article explicitly describes an AI system deployed in schools that monitors student online activity and generates alerts for potential harm. The system's malfunction or poor design (lack of adequate data protection) has directly led to harm by exposing sensitive student information, which is a violation of privacy and could harm students' safety and well-being. Additionally, the system's false positives and potential exposure of sensitive topics (e.g., gender identity) to school officials could cause harm to students' rights and mental health. These factors meet the criteria for an AI Incident, as the AI system's use and malfunction have directly led to harm.

AI programs for monitoring students carry security risks

2025-03-12
Yahoo News
Why's our monitor labelling this an incident or hazard?
The article explicitly involves an AI system (Gaggle) used for monitoring students' online behavior. The AI system's use directly led to the collection and storage of sensitive personal data, which was then inadvertently exposed due to inadequate security measures. This exposure constitutes a violation of privacy and security, harming the students and breaching their rights. The harm is realized, not just potential, as confidential information was accessed by unauthorized parties. Hence, this event meets the criteria for an AI Incident because the AI system's use directly caused harm to individuals' rights and security.

AI programs for monitoring students carry security risks

2025-03-12
Seattle Post-Intelligencer
Why's our monitor labelling this an incident or hazard?
The event involves an AI system (Gaggle) that monitors students' online activity using machine learning algorithms to detect risks. The system's use has directly led to harms including privacy violations, exposure of sensitive personal information, and negative social consequences for students, especially vulnerable groups like LGBTQ+ youth. The inadvertent disclosure of confidential data due to poor security practices further constitutes harm. These harms fall under violations of rights and harm to communities, meeting the criteria for an AI Incident. The article also discusses the system's use and malfunction (security lapses), confirming direct involvement of AI in causing harm.

AI programs for monitoring students carry security risks

2025-03-12
El Vocero de Puerto Rico
Why's our monitor labelling this an incident or hazard?
The article explicitly mentions an AI system (Gaggle) that monitors students' online activity using machine learning algorithms to detect risks such as suicidal ideation or violence. The system's use has directly caused harm by exposing confidential student information due to inadequate security, violating privacy rights and causing distress among students. Additionally, the system has indirectly caused harm by outing LGBTQ+ students without consent, damaging trust and potentially causing psychological harm. These harms fall under violations of human rights and harm to communities, meeting the criteria for an AI Incident. The article also discusses the system's false positives and the complex trade-offs between safety and privacy, but the realized harms are clear and significant.